Sample records for common neural network

  1. Optimization of neural network architecture using genetic programming improves detection and modeling of gene-gene interactions in studies of human diseases

    PubMed Central

    Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H

    2003-01-01

    Background Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases. Results Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present. Conclusion This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935
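
    As a rough, hedged illustration of the idea of optimizing network architecture rather than fixing it by hand (a simple mutate-and-select search here, not the authors' genetic programming implementation, and synthetic stand-in data rather than real SNP genotypes):

    ```python
    # Illustrative sketch only: crude mutation-based search over hidden-layer sizes,
    # standing in for the paper's genetic-programming optimization of architecture.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(200, 10))                     # toy genotypes (0/1/2)
    y = (((X[:, 0] % 2) ^ (X[:, 1] % 2)) ^ (rng.random(200) < 0.1)).astype(int)  # noisy interaction

    def fitness(hidden_layers):
        net = MLPClassifier(hidden_layer_sizes=hidden_layers, max_iter=2000, random_state=0)
        return cross_val_score(net, X, y, cv=3).mean()

    best, best_score = (8,), fitness((8,))
    for _ in range(10):                                        # mutate; keep improvements
        candidate = tuple(int(max(2, h + rng.integers(-4, 5))) for h in best)
        score = fitness(candidate)
        if score > best_score:
            best, best_score = candidate, score

    print("selected hidden layers:", best, "cross-validated accuracy: %.3f" % best_score)
    ```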

  2. Region stability analysis and tracking control of memristive recurrent neural network.

    PubMed

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and later realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. First, simulations show that the memristive recurrent neural network has more complex dynamics than the traditional recurrent neural network. It is then derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks which do not share a common equilibrium point. A tracking controller is then designed that makes the memristive neural network converge to the desired sub-neural network. Finally, two numerical examples are given to verify the validity of the result. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Spatio-Temporal Neural Networks for Vision, Reasoning and Rapid Decision Making

    DTIC Science & Technology

    1994-08-31

    [Fragmentary OCR excerpt from the report] ... long-term knowledge base (LTKB) facts ... common neural networks ... Davis, P. (1990), application of optical chaos to temporal pattern search in a nonlinear optical network, Conference on Neural Networks, Piscataway, NJ ... PROJECT TITLE: Spatio-temporal Neural Networks for Vision, Reasoning and Rapid Decision Making (N00014-93-1-1149)

  4. Forecasting PM10 in metropolitan areas: Efficacy of neural networks.

    PubMed

    Fernando, H J S; Mammarella, M C; Grandoni, G; Fedele, P; Di Marco, R; Dimitrova, R; Hyde, P

    2012-04-01

    Deterministic photochemical air quality models are commonly used for regulatory management and planning of urban airsheds. These models are complex, computer intensive, and hence are prohibitively expensive for routine air quality predictions. Stochastic methods are becoming increasingly popular as an alternative, which relegate decision making to artificial intelligence based on Neural Networks that are made of artificial neurons or 'nodes' capable of 'learning through training' via historic data. A Neural Network was used to predict particulate matter concentration at a regulatory monitoring site in Phoenix, Arizona; its development, efficacy as a predictive tool and performance vis-à-vis a commonly used regulatory photochemical model are described in this paper. It is concluded that Neural Networks are much easier, quicker and economical to implement without compromising the accuracy of predictions. Neural Networks can be used to develop rapid air quality warning systems based on a network of automated monitoring stations. Copyright © 2011 Elsevier Ltd. All rights reserved.
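
    A minimal sketch of the kind of "learning through training via historic data" described above, assuming scikit-learn and entirely hypothetical meteorological features; it is not the paper's configuration:

    ```python
    # Sketch: feed-forward network trained on historic records to predict PM10.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # Hypothetical records: [wind speed, temperature, humidity, previous-day PM10]
    X = rng.random((500, 4)) * [10.0, 40.0, 100.0, 150.0]
    y = 0.6 * X[:, 3] - 2.0 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 5, 500)  # toy target

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0))
    model.fit(X[:400], y[:400])                      # train on the "historic" portion
    rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
    print("hold-out RMSE (toy data): %.2f" % rmse)
    ```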

  5. Analysis of Clinical and Dermoscopic Features for Basal Cell Carcinoma Neural Network Classification

    PubMed Central

    Cheng, Beibei; Stanley, R. Joe; Stoecker, William V; Stricklin, Sherea M.; Hinton, Kristen A.; Nguyen, Thanh K.; Rader, Ryan K.; Rabinovitz, Harold S.; Oliviero, Margaret; Moss, Randy H.

    2012-01-01

    Background Basal cell carcinoma (BCC) is the most commonly diagnosed cancer in the United States. In this research, we examine four different feature categories used for diagnostic decisions, including patient personal profile (patient age, gender, etc.), general exam (lesion size and location), common dermoscopic (blue-gray ovoids, leaf-structure dirt trails, etc.), and specific dermoscopic lesion (white/pink areas, semitranslucency, etc.). Specific dermoscopic features are more restricted versions of the common dermoscopic features. Methods Combinations of the four feature categories are analyzed over a data set of 700 lesions, with 350 BCCs and 350 benign lesions, for lesion discrimination using neural network-based techniques, including Evolving Artificial Neural Networks and Evolving Artificial Neural Network Ensembles. Results Experiment results based on ten-fold cross validation for training and testing the different neural network-based techniques yielded an area under the receiver operating characteristic curve as high as 0.981 when all features were combined. The common dermoscopic lesion features generally yielded higher discrimination results than other individual feature categories. Conclusions Experimental results show that combining clinical and image information provides enhanced lesion discrimination capability over either information source separately. This research highlights the potential of data fusion as a model for the diagnostic process. PMID:22724561

  6. Experiments in Neural-Network Control of a Free-Flying Space Robot

    NASA Technical Reports Server (NTRS)

    Wilson, Edward

    1995-01-01

    Four important generic issues are identified and addressed in some depth in this thesis as part of the development of an adaptive neural network based control system for an experimental free flying space robot prototype. The first issue concerns the importance of true system level design of the control system. A new hybrid strategy is developed here, in depth, for the beneficial integration of neural networks into the total control system. A second important issue in neural network control concerns incorporating a priori knowledge into the neural network. In many applications, it is possible to get a reasonably accurate controller using conventional means. If this prior information is used purposefully to provide a starting point for the optimizing capabilities of the neural network, it can provide much faster initial learning. In a step towards addressing this issue, a new generic Fully Connected Architecture (FCA) is developed for use with backpropagation. A third issue is that neural networks are commonly trained using a gradient based optimization method such as backpropagation; but many real world systems have Discrete Valued Functions (DVFs) that do not permit gradient based optimization. One example is the on-off thrusters that are common on spacecraft. A new technique is developed here that now extends backpropagation learning for use with DVFs. The fourth issue is that the speed of adaptation is often a limiting factor in the implementation of a neural network control system. This issue has been strongly resolved in the research by drawing on the above new contributions.

  7. Visual NNet: An Educational ANN's Simulation Environment Reusing Matlab Neural Networks Toolbox

    ERIC Educational Resources Information Center

    Garcia-Roselló, Emilio; González-Dacosta, Jacinto; Lado, Maria J.; Méndez, Arturo J.; Garcia Pérez-Schofield, Baltasar; Ferrer, Fátima

    2011-01-01

    Artificial Neural Networks (ANN's) are nowadays a common subject in different curricula of graduate and postgraduate studies. Due to the complex algorithms involved and the dynamic nature of ANN's, simulation software has been commonly used to teach this subject. This software has usually been developed specifically for learning purposes, because…

  8. Neural networks for link prediction in realistic biomedical graphs: a multi-dimensional evaluation of graph embedding-based approaches.

    PubMed

    Crichton, Gamal; Guo, Yufan; Pyysalo, Sampo; Korhonen, Anna

    2018-05-21

    Link prediction in biomedical graphs has several important applications including predicting Drug-Target Interactions (DTI), Protein-Protein Interaction (PPI) prediction and Literature-Based Discovery (LBD). It can be done using a classifier to output the probability of link formation between nodes. Recently several works have used neural networks to create node representations which allow rich inputs to neural classifiers. Preliminary work has been done on this and reports promising results. However, these studies did not use realistic settings like time-slicing, evaluate performance with comprehensive metrics, or explain when or why neural network methods outperform. We investigated how inputs from four node representation algorithms affect the performance of a neural link predictor on random- and time-sliced biomedical graphs of real-world sizes (∼ 6 million edges) containing information relevant to DTI, PPI and LBD. We compared the performance of the neural link predictor to those of established baselines and report performance across five metrics. In random- and time-sliced experiments, when the neural network methods were able to learn good node representations and there was a negligible number of disconnected nodes, those approaches outperformed the baselines. In the smallest graph (∼ 15,000 edges) and in larger graphs with approximately 14% disconnected nodes, baselines such as Common Neighbours proved a justifiable choice for link prediction. At low recall levels (∼ 0.3) the approaches were mostly equal, but at higher recall levels across all nodes and in average performance at individual nodes, neural network approaches were superior. Analysis showed that neural network methods performed well on links between nodes with no previous common neighbours; potentially the most interesting links. Additionally, while neural network methods benefit from large amounts of data, they require considerable computational resources to utilise them. Our results indicate that when there is enough data for the neural network methods to use and there is a negligible number of disconnected nodes, those approaches outperform the baselines. At low recall levels the approaches are mostly equal, but at higher recall levels and in average performance at individual nodes, neural network approaches are superior. Performance at nodes without common neighbours, which indicate more unexpected and perhaps more useful links, accounts for this.
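
    The Common Neighbours baseline mentioned above can be sketched directly; the node names below are placeholders:

    ```python
    # Common Neighbours baseline: score a candidate link (u, v) by the number of
    # neighbours u and v already share, then rank the candidate pairs.
    from itertools import combinations

    def common_neighbours_scores(adjacency):
        """adjacency: dict mapping node -> set of neighbouring nodes."""
        scores = {}
        for u, v in combinations(adjacency, 2):
            if v not in adjacency[u]:                      # only score non-edges
                scores[(u, v)] = len(adjacency[u] & adjacency[v])
        return scores

    # Toy biomedical-style graph (drug/protein names are placeholders).
    graph = {
        "drugA": {"prot1", "prot2"},
        "drugB": {"prot2", "prot3"},
        "prot1": {"drugA"},
        "prot2": {"drugA", "drugB"},
        "prot3": {"drugB"},
    }
    ranked = sorted(common_neighbours_scores(graph).items(), key=lambda kv: -kv[1])
    print(ranked[:3])   # highest-scoring candidate links first
    ```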

  9. Neural computation of arithmetic functions

    NASA Technical Reports Server (NTRS)

    Siu, Kai-Yeung; Bruck, Jehoshua

    1990-01-01

    An area of application of neural networks is considered. A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.
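
    A linear threshold gate outputs 1 exactly when the weighted sum of its inputs reaches its threshold; the two-layer example below computes XOR, a small illustration of how shallow threshold circuits compute Boolean functions (it is not the paper's multiplication or sorting construction):

    ```python
    # Linear threshold gate model: unit fires iff the weighted input sum meets the threshold.
    def threshold_gate(inputs, weights, threshold):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

    def xor(x1, x2):
        h_or  = threshold_gate([x1, x2], [1, 1], 1)        # fires if at least one input is 1
        h_and = threshold_gate([x1, x2], [1, 1], 2)        # fires only if both inputs are 1
        return threshold_gate([h_or, h_and], [1, -1], 1)   # OR but not AND

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))
    ```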

  10. The C. elegans Connectome Consists of Homogenous Circuits with Defined Functional Roles

    PubMed Central

    Azulay, Aharon; Zaslaver, Alon

    2016-01-01

    A major goal of systems neuroscience is to decipher the structure-function relationship in neural networks. Here we study network functionality in light of the common-neighbor-rule (CNR) in which a pair of neurons is more likely to be connected the more common neighbors it shares. Focusing on the fully-mapped neural network of C. elegans worms, we establish that the CNR is an emerging property in this connectome. Moreover, sets of common neighbors form homogenous structures that appear in defined layers of the network. Simulations of signal propagation reveal their potential functional roles: signal amplification and short-term memory at the sensory/inter-neuron layer, and synchronized activity at the motoneuron layer supporting coordinated movement. A coarse-grained view of the neural network based on homogenous connected sets alone reveals a simple modular network architecture that is intuitive to understand. These findings provide a novel framework for analyzing larger, more complex, connectomes once these become available. PMID:27606684

  11. Neural-network-directed alignment of optical systems using the laser-beam spatial filter as an example

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Krasowski, Michael J.; Weiland, Kenneth E.

    1993-01-01

    This report describes an effort at NASA Lewis Research Center to use artificial neural networks to automate the alignment and control of optical measurement systems. Specifically, it addresses the use of commercially available neural network software and hardware to direct alignments of the common laser-beam-smoothing spatial filter. The report presents a general approach for designing alignment records and combining these into training sets to teach optical alignment functions to neural networks and discusses the use of these training sets to train several types of neural networks. Neural network configurations used include the adaptive resonance network, the back-propagation-trained network, and the counter-propagation network. This work shows that neural networks can be used to produce robust sequencers. These sequencers can learn by example to execute the step-by-step procedures of optical alignment and also can learn adaptively to correct for environmentally induced misalignment. The long-range objective is to use neural networks to automate the alignment and operation of optical measurement systems in remote, harsh, or dangerous aerospace environments. This work also shows that when neural networks are trained by a human operator, training sets should be recorded, training should be executed, and testing should be done in a manner that does not depend on intellectual judgments of the human operator.

  12. Correlation Filter Synthesis Using Neural Networks.

    DTIC Science & Technology

    1993-12-01

    ...trained neural networks may be understood as "smart" data interpolators, the stored-filter and the filter-synthesis approaches have much in common: in the former, new filters are found by searching a data bank consisting of the filters themselves; in the latter, filters are formed from a distributed data bank that contains neural network interaction strengths or weights. 1.2 Key Results and Outputs: Excellent computer simulation results were...

  13. Disorder generated by interacting neural networks: application to econophysics and cryptography

    NASA Astrophysics Data System (ADS)

    Kinzel, Wolfgang; Kanter, Ido

    2003-10-01

    When neural networks are trained on their own output signals they generate disordered time series. In particular, when two neural networks are trained on their mutual output they can synchronize; they relax to a time-dependent state with identical synaptic weights. Two applications of this phenomenon are discussed for (a) econophysics and (b) cryptography. (a) When agents competing in a closed market (minority game) are using neural networks to make their decisions, the total system relaxes to a state of good performance. (b) Two partners communicating over a public channel can find a common secret key.
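
    A hedged sketch of the mutual-learning synchronization behind application (b), using the textbook tree parity machine construction; the parameters K, N, L and the Hebbian update below are standard choices, not necessarily the paper's exact setup:

    ```python
    # Two tree parity machines learn from each other's output on a public random
    # input; once the weights coincide they can serve as a shared secret key.
    import numpy as np

    K, N, L = 3, 10, 3                      # hidden units, inputs per unit, weight bound
    rng = np.random.default_rng(42)
    wA = rng.integers(-L, L + 1, size=(K, N))
    wB = rng.integers(-L, L + 1, size=(K, N))

    def output(w, x):
        sigma = np.sign(np.sum(w * x, axis=1))
        sigma[sigma == 0] = -1              # break ties
        return sigma, int(np.prod(sigma))

    def hebbian_update(w, x, sigma, tau):
        for k in range(K):
            if sigma[k] == tau:             # only units that agree with the output move
                w[k] = np.clip(w[k] + tau * x[k], -L, L)

    steps = 0
    while not np.array_equal(wA, wB) and steps < 100000:
        x = rng.choice([-1, 1], size=(K, N))        # public random input
        sA, tauA = output(wA, x)
        sB, tauB = output(wB, x)
        if tauA == tauB:                            # both parties update only on agreement
            hebbian_update(wA, x, sA, tauA)
            hebbian_update(wB, x, sB, tauB)
        steps += 1

    if np.array_equal(wA, wB):
        print("synchronized after", steps, "exchanged inputs; shared weights:", wA.flatten())
    else:
        print("did not synchronize within", steps, "steps")
    ```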

  14. Pruning artificial neural networks using neural complexity measures.

    PubMed

    Jorgensen, Thomas D; Haynes, Barry P; Norlund, Charlotte C F

    2008-10-01

    This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of the dimensionality of the network.
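
    The Magnitude Based Pruning benchmark referred to above can be sketched in a few lines; the layer shapes and pruning fraction are illustrative:

    ```python
    # Magnitude Based Pruning: remove the smallest-magnitude weights, on the
    # assumption that they contribute least to the network's behaviour.
    import numpy as np

    def magnitude_prune(weights, fraction):
        """Zero out the smallest-magnitude fraction of entries in each weight matrix."""
        pruned = []
        for w in weights:
            cutoff = np.quantile(np.abs(w), fraction)       # per-layer threshold
            pruned.append(np.where(np.abs(w) < cutoff, 0.0, w))
        return pruned

    layers = [np.random.randn(4, 8), np.random.randn(8, 2)]   # toy weight matrices
    sparse_layers = magnitude_prune(layers, fraction=0.5)
    for w in sparse_layers:
        print("remaining connections:", int(np.count_nonzero(w)), "of", w.size)
    ```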

  15. Application of artificial neural networks to gaming

    NASA Astrophysics Data System (ADS)

    Baba, Norio; Kita, Tomio; Oda, Kazuhiro

    1995-04-01

    Recently, neural network technology has been applied to various actual problems. It has succeeded in producing a large number of intelligent systems. In this article, we suggest that it could be applied to the field of gaming. In particular, we suggest that the neural network model could be used to mimic players' characters. Several computer simulation results using a computer gaming system which is a modified version of the COMMONS GAME confirm our idea.

  16. Shared molecular networks in orofacial and neural tube development.

    PubMed

    Kousa, Youssef A; Mansour, Tamer A; Seada, Haitham; Matoo, Samaneh; Schutte, Brian C

    2017-01-30

    Single genetic variants can affect multiple tissues during development. Thus it is possible that disruption of shared gene regulatory networks might underlie syndromic presentations. In this study, we explore this idea through examination of two critical developmental programs that control orofacial and neural tube development and identify shared regulatory factors and networks. Identification of these networks has the potential to yield additional candidate genes for poorly understood developmental disorders and assist in modeling and perhaps managing risk factors to prevent morbidity and mortality. We reviewed the literature to identify genes common between orofacial and neural tube defects and development. We then conducted a bioinformatic analysis to identify shared molecular targets and pathways in the development of these tissues. Finally, we examine publicly available RNA-Seq data to identify which of these genes are expressed in both tissues during development. We identify common regulatory factors in orofacial and neural tube development. Pathway enrichment analysis shows that folate, cancer and hedgehog signaling pathways are shared in neural tube and orofacial development. Developing neural tissues differentially express mouse exencephaly and cleft palate genes, whereas developing orofacial tissues were enriched for both clefting and neural tube defect genes. These data suggest that key developmental factors and pathways are shared between orofacial and neural tube defects. We conclude that it might be most beneficial to focus on common regulatory factors and pathways to better understand pathology and develop preventative measures for these birth defects. Birth Defects Research 109:169-179, 2017. © 2016 Wiley Periodicals, Inc.

  17. Ground states of partially connected binary neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1990-01-01

    Neural networks defined by outer products of vectors over (-1, 0, 1) are considered. Patterns over (-1, 0, 1) define by their outer products partially connected neural networks consisting of internally strongly connected, externally weakly connected subnetworks. Subpatterns over (-1, 1) define subnetworks, and their combinations that agree in the common bits define permissible words. It is shown that the permissible words are locally stable states of the network, provided that each of the subnetworks stores mutually orthogonal subwords, or, at most, two subwords. It is also shown that when each of the subnetworks stores two mutually orthogonal binary subwords at most, the permissible words, defined as the combinations of the subwords (one corresponding to each subnetwork), that agree in their common bits are the unique ground states of the associated energy function.

  18. Predicate calculus for an architecture of multiple neural networks

    NASA Astrophysics Data System (ADS)

    Consoli, Robert H.

    1990-08-01

    Future projects with neural networks will require multiple individual network components. Current efforts along these lines are ad hoc. This paper relates the neural network to a classical device and derives a multi-part architecture from that model. Further, it provides a Predicate Calculus variant for describing the location and nature of the trainings and suggests Resolution Refutation as a method for determining the performance of the system as well as the location of needed trainings for specific proofs. 2. THE NEURAL NETWORK AND A CLASSICAL DEVICE Recently investigators have been making reports about architectures of multiple neural networks [1-4]. These efforts are appearing at an early stage in neural network investigations; they are characterized by architectures suggested directly by the problem space. Touretzky and Hinton suggest an architecture for processing logical statements [1]; the design of this architecture arises from the syntax of a restricted class of logical expressions and exhibits syntactic limitations. In similar fashion, a multiple neural network arises out of a control problem [2], from the sequence learning problem [3], and from the domain of machine learning [4]. But a general theory of multiple neural devices is missing. More general attempts to relate single or multiple neural networks to classical computing devices are not common, although an attempt is made to relate single neural devices to a Turing machine, and Sun et al. develop a multiple neural architecture that performs pattern classification.

  19. Frequency Domain Analysis of Narx Neural Networks

    NASA Astrophysics Data System (ADS)

    Chance, J. E.; Worden, K.; Tomlinson, G. R.

    1998-06-01

    A method is proposed for interpreting the behaviour of NARX neural networks. The correspondence between time-delay neural networks and Volterra series is extended to the NARX class of networks. The Volterra kernels, or rather, their Fourier transforms, are obtained via harmonic probing. In the same way that the Volterra kernels generalize the impulse response to non-linear systems, the Volterra kernel transforms can be viewed as higher-order analogues of the Frequency Response Functions commonly used in Engineering dynamics; they can be interpreted in much the same way.

  20. On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks

    PubMed Central

    Tonelli, Paul; Mouret, Jean-Baptiste

    2013-01-01

    A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, networks that are the more regular are statistically those that have the best learning abilities. PMID:24236099

  1. Hearing, feeling or seeing a beat recruits a supramodal network in the auditory dorsal stream.

    PubMed

    Araneda, Rodrigo; Renier, Laurent; Ebner-Karestinos, Daniela; Dricot, Laurence; De Volder, Anne G

    2017-06-01

    Hearing a beat recruits a wide neural network that involves the auditory cortex and motor planning regions. Perceiving a beat can potentially be achieved via vision or even touch, but it is currently not clear whether a common neural network underlies beat processing. Here, we used functional magnetic resonance imaging (fMRI) to test to what extent the neural network involved in beat processing is supramodal, that is, is the same in the different sensory modalities. Brain activity changes in 27 healthy volunteers were monitored while they were attending to the same rhythmic sequences (with and without a beat) in audition, vision and the vibrotactile modality. We found a common neural network for beat detection in the three modalities that involved parts of the auditory dorsal pathway. Within this network, only the putamen and the supplementary motor area (SMA) showed specificity to the beat, while the brain activity in the putamen covaried with the beat detection speed. These results highlighted the implication of the auditory dorsal stream in beat detection, confirmed the important role played by the putamen in beat detection and indicated that the neural network for beat detection is mostly supramodal. This constitutes a new example of convergence of the same functional attributes into one centralized representation in the brain. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  2. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    PubMed

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  3. Artificial and Bayesian Neural Networks

    PubMed

    Korhani Kangi, Azam; Bahrampour, Abbas

    2018-02-26

    Introduction and purpose: In recent years, the use of neural networks, which require no prior assumptions, for investigating prognosis in survival data analysis has increased. Artificial neural networks (ANN) use small processors in a continuous network to solve problems, inspired by the human brain. Bayesian neural networks (BNN) constitute a neural-based approach to modeling and non-linearization of complex issues using special algorithms and statistical methods. Gastric cancer ranks first and third for cancer incidence among men and women in Iran, respectively. The aim of the present study was to assess the value of an artificial neural network and a Bayesian neural network for modeling and predicting the probability of death in gastric cancer patients. Materials and Methods: In this study, we used information on 339 patients aged from 20 to 90 years old with gastric cancer, referred to Afzalipoor and Shahid Bahonar Hospitals in Kerman City from 2001 to 2015. A three-layer perceptron neural network (ANN) and a Bayesian neural network (BNN) were used to predict the probability of mortality using the available data. To investigate differences between the models, sensitivity, specificity, accuracy and the area under receiver operating characteristic curves (AUROCs) were generated. Results: In this study, the sensitivity and specificity of the artificial neural network and Bayesian neural network models were 0.882, 0.903 and 0.954, 0.909, respectively. Prediction accuracy and the area under the ROC curve for the two models were 0.891, 0.944 and 0.935, 0.961. The age at diagnosis of gastric cancer was most important for predicting survival, followed by tumor grade, morphology, gender, smoking history, opium consumption, receiving chemotherapy, presence of metastasis, tumor stage, receiving radiotherapy, and being resident in a village. Conclusion: The findings of the present study indicated that the Bayesian neural network is preferable to an artificial neural network for predicting survival of gastric cancer patients in Iran.
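
    A short sketch of how the reported metrics (sensitivity, specificity, accuracy, AUROC) are computed from predicted probabilities; the values below are toy numbers, not the study's data:

    ```python
    # Confusion-matrix metrics and AUROC from a model's predicted probabilities.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # 1 = death, 0 = survival (toy)
    p_hat  = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])
    y_pred = (p_hat >= 0.5).astype(int)

    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))

    print("sensitivity:", tp / (tp + fn))                # true positive rate
    print("specificity:", tn / (tn + fp))                # true negative rate
    print("accuracy:   ", (tp + tn) / len(y_true))
    print("AUROC:      ", roc_auc_score(y_true, p_hat))
    ```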

  4. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  5. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting

    PubMed Central

    Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network. PMID:27959927

  6. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    PubMed

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.
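
    A hedged sketch of a ridge polynomial forward pass with the previous prediction error fed back as an extra input, in the spirit of RPNN-EF; the weights are random and untrained, and the series is a toy sine wave:

    ```python
    # Ridge polynomial forward pass with error feedback: the output is a sum of
    # pi-sigma blocks, and the previous one-step error is appended to the input.
    import numpy as np

    rng = np.random.default_rng(0)
    order, n_in = 3, 2                        # polynomial order, external inputs (lags)
    # one weight matrix + bias vector per block; input = [lags, previous error]
    W = [rng.normal(scale=0.1, size=(k, n_in + 1)) for k in range(1, order + 1)]
    b = [rng.normal(scale=0.1, size=k) for k in range(1, order + 1)]

    def rpnn_ef_step(x, prev_error):
        z = np.append(x, prev_error)                     # error feedback as extra input
        total = 0.0
        for Wk, bk in zip(W, b):
            total += np.prod(Wk @ z + bk)                # k-th order pi-sigma block
        return np.tanh(total)

    series = np.sin(np.linspace(0, 6, 50))               # toy univariate time series
    err = 0.0
    for t in range(2, 50):
        y_hat = rpnn_ef_step(series[t - 2:t], err)       # predict from two lags
        err = series[t] - y_hat
    print("last one-step error (untrained weights):", err)
    ```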

  7. Supervised Learning in CINets

    DTIC Science & Technology

    2011-07-01

    ...supervised learning process is compared to that of Artificial Neural Networks (ANNs), fuzzy logic rule set, and Bayesian network approaches ... of both fuzzy logic systems and Artificial Neural Networks (ANNs). Like fuzzy logic systems, the CINet technique allows the use of human-intuitive ... fuzzy rule systems [3]. CINets also maintain features common to both fuzzy systems and ANNs. The technique can be shown to possess the property...

  8. Neural network-based run-to-run controller using exposure and resist thickness adjustment

    NASA Astrophysics Data System (ADS)

    Geary, Shane; Barry, Ronan

    2003-06-01

    This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
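
    The EWMA baseline the neural controller is compared against reduces to a one-line recurrence; the smoothing constant and critical-dimension values below are illustrative:

    ```python
    # Exponentially weighted moving average (EWMA) run-to-run prediction:
    # the next-lot prediction blends the latest measurement with the previous estimate.
    def ewma_predict(measurements, lam=0.3):
        estimate = measurements[0]
        for m in measurements[1:]:
            estimate = lam * m + (1.0 - lam) * estimate   # run-to-run update
        return estimate                                   # prediction for the next lot

    cd_history = [180.2, 181.0, 179.5, 182.3, 181.1]      # toy critical-dimension values (nm)
    print("EWMA prediction for next lot: %.2f nm" % ewma_predict(cd_history))
    ```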

  9. [The Identification of the Origin of Chinese Wolfberry Based on Infrared Spectral Technology and the Artificial Neural Network].

    PubMed

    Li, Zhong; Liu, Ming-de; Ji, Shou-xiang

    2016-03-01

    Fourier transform infrared spectroscopy (FTIR) is established as a method to quickly identify the geographic origin of Chinese wolfberry. In this paper, 45 samples of Chinese wolfberry from different locations in Qinghai Province are surveyed by FTIR. The original FTIR data matrix is pretreated with common preprocessing and with the wavelet transform. Compared with common window-shifting smoothing preprocessing, standard normal variate correction and multiplicative scatter correction, the wavelet transform is an effective spectral data preprocessing method. Before a model is established with artificial neural networks, the spectral variables are compressed by means of the wavelet transform so as to increase the training speed of the artificial neural networks, and the related parameters of the artificial neural network model are also discussed in detail. The survey shows that even when the infrared spectroscopy data are compressed to 1/8 of their original size, the spectral information and analytical accuracy do not deteriorate. The compressed spectral variables are used as modeling parameters of the back-propagation artificial neural network (BP-ANN) model, and the geographic origins of Chinese wolfberry are used as the output parameters. A three-layer neural network model is built to predict the 10 unknown samples using the MATLAB Neural Network Toolbox to design an error back-propagation network. The number of hidden-layer neurons is 5, and the number of output-layer neurons is 1. The transfer function of the hidden layer is tansig, while the transfer function of the output layer is purelin. The network training function is trainl and the learning function of weights and thresholds is learngdm; net.trainParam.epochs = 1000, while net.trainParam.goal = 0.001. A recognition rate of 100% is achieved. It can be concluded that the method is quite suitable for quick discrimination of the producing areas of Chinese wolfberry. Infrared spectral analysis combined with artificial neural networks is proved to be a reliable new method for identifying the place of origin of Traditional Chinese Medicine.
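
    A hedged Python analogue of the network described above (5 tanh hidden neurons, one linear output neuron, backpropagation training); the spectra and origin codes below are random placeholders, not the wolfberry FTIR data:

    ```python
    # Small backpropagation network mirroring the described topology, as a sketch.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    X = rng.random((45, 64))                        # stand-in for wavelet-compressed spectra
    y = rng.integers(1, 5, size=45).astype(float)   # stand-in origin labels, coded 1-4

    model = MLPRegressor(hidden_layer_sizes=(5,), activation="tanh",   # 5 tanh hidden units
                         solver="sgd", learning_rate_init=0.01,        # gradient-descent training
                         max_iter=1000, tol=1e-3, random_state=0)      # linear output by default
    model.fit(X, y)
    print("predicted origin codes:", np.round(model.predict(X[:10])))
    ```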

  10. Neural learning of constrained nonlinear transformations

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Gulati, Sandeep; Zak, Michail

    1989-01-01

    Two issues that are fundamental to developing autonomous intelligent robots, namely, rudimentary learning capability and dexterous manipulation, are examined. A powerful neural learning formalism is introduced for addressing a large class of nonlinear mapping problems, including redundant manipulator inverse kinematics, commonly encountered during the design of real-time adaptive control mechanisms. Artificial neural networks with terminal attractor dynamics are used. The rapid network convergence resulting from the infinite local stability of these attractors allows the development of fast neural learning algorithms. Approaches to manipulator inverse kinematics are reviewed, the neurodynamics model is discussed, and the neural learning algorithm is presented.

  11. Sequential and parallel image restoration: neural network implementations.

    PubMed

    Figueiredo, M T; Leitao, J N

    1994-01-01

    Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high dimension convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm supported on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
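
    A minimal sketch of the iterative-minimization view described above: gradient descent on the convex objective ||y - Hx||^2 + lam*||Dx||^2 for a small 1-D signal (sizes, blur and lam are illustrative; this is not the authors' modified Hopfield implementation):

    ```python
    # Gradient descent on the regularized MAP objective for a blurred, noisy 1-D "image".
    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    x_true = np.zeros(n); x_true[20:40] = 1.0                 # simple 1-D scene

    H = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0         # 3-tap blur operator
    D = np.eye(n) - np.eye(n, k=1)                            # first-difference regularizer
    y = H @ x_true + rng.normal(0, 0.02, n)                   # blurred, noisy observation

    lam, step = 0.05, 0.4
    x = np.zeros(n)
    for _ in range(500):                                      # plain gradient-descent iterations
        grad = H.T @ (H @ x - y) + lam * (D.T @ D @ x)
        x -= step * grad

    print("relative restoration error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```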

  12. fMRI Evidence for Strategic Decision-Making during Resolution of Pronoun Reference

    ERIC Educational Resources Information Center

    McMillan, Corey T.; Clark, Robin; Gunawardena, Delani; Ryant, Neville; Grossman, Murray

    2012-01-01

    Pronouns are extraordinarily common in daily language yet little is known about the neural mechanisms that support decisions about pronoun reference. We propose a large-scale neural network for resolving pronoun reference that consists of two components. First, a core language network in peri-Sylvian cortex supports syntactic and semantic…

  13. Neural-like growing networks

    NASA Astrophysics Data System (ADS)

    Yashchenko, Vitaliy A.

    2000-03-01

    On the basis of an analysis of scientific ideas reflecting the laws governing the structure and functioning of the biological structures of the brain, and of an analysis and synthesis of knowledge developed in various directions of computer science, the foundations of the theory of a new class of neural-like growing networks, having no analogue in world practice, were developed. At the base of neural-like growing networks lies a synthesis of the knowledge developed by the classical theories of semantic and neural networks. The former make it possible to form meaning, as objects and the connections between them, in accordance with the construction of the network; each meaning thereby receives a separate component of the network as a node connected to other nodes. On the whole this corresponds closely to the structure reflected in the brain, where each explicit concept is represented by a certain structure and has a designating symbol. Secondly, such a network gains increased semantic clarity owing to the formation not only of connections between neural elements but also of the elements themselves, i.e., what takes place is not simply the construction of a network by placing semantic structures in an environment of neural elements, but the creation of that very environment as an equivalent of a memory environment. Thus neural-like growing networks provide a convenient apparatus for modeling the mechanisms of teleological thinking as the fulfillment of certain psychophysiological functions.

  14. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.

  15. Proceedings of the Government Neural Network Applications Workshop Held at Wright-Patterson AFB, Ohio on August 24-26, 1992. Volume 1

    DTIC Science & Technology

    1992-08-01

    ...history trace of input u(t). (b) A common network structure makes use of the feedforward tapped delay line. For this structure the memory depth D ... theories and analyses that will be used worldwide for a long time to come. The reason for this contribution has generally been the government's need to ... that emulate the neural reasoning behavior of biological neural systems (e.g. the human brain). As such, they are loosely based on biological neural...

  16. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim to improve the polynomial derivative term series ability to approximate complicated periodic functions, as simple low order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Gradient calculations for dynamic recurrent neural networks: a survey.

    PubMed

    Pearlmutter, B A

    1995-01-01

    Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The authors discuss fixed point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and nonfixed point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones and continues with some "tricks of the trade" for training, using, and simulating continuous time and recurrent neural networks. The author presents some simulations and, at the end, addresses issues of computational complexity and learning speed.

  18. Hadoop neural network for parallel and distributed feature selection.

    PubMed

    Hodge, Victoria J; O'Keefe, Simon; Austin, Jim

    2016-06-01

    In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative memory (binary) neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. There are many feature selectors described in the literature which all have various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel) allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation and the overall processing can also be greatly reduced by only processing the common aspects of the feature selectors once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector and the actual features to select to be identified for large and high dimensional data sets through exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for detection and identification of faults in complex microprocessor controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults, in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problems successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem, the diagnosis of faults in newly manufactured engines and the utility of neural networks for process control.

  20. Behavioral and Physiological Neural Network Analyses: A Common Pathway toward Pattern Recognition and Prediction

    ERIC Educational Resources Information Center

    Ninness, Chris; Lauter, Judy L.; Coffee, Michael; Clary, Logan; Kelly, Elizabeth; Rumph, Marilyn; Rumph, Robin; Kyle, Betty; Ninness, Sharon K.

    2012-01-01

    Using 3 diversified datasets, we explored the pattern-recognition ability of the Self-Organizing Map (SOM) artificial neural network as applied to diversified nonlinear data distributions in the areas of behavioral and physiological research. Experiment 1 employed a dataset obtained from the UCI Machine Learning Repository. Data for this study…

  1. Modeling a Neural Network as a Teaching Tool for the Learning of the Structure-Function Relationship

    ERIC Educational Resources Information Center

    Salinas, Dino G.; Acevedo, Cristian; Gomez, Christian R.

    2010-01-01

    The authors describe an activity they have created in which students can visualize a theoretical neural network whose states evolve according to a well-known simple law. This activity provided an uncomplicated approach to a paradigm commonly represented through complex mathematical formulation. From their observations, students learned many basic…

  2. Reward-based training of recurrent neural networks for cognitive and value-based tasks

    PubMed Central

    Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing

    2017-01-01

    Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal’s internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task. DOI: http://dx.doi.org/10.7554/eLife.21492.001 PMID:28084991

  3. Study on algorithm of process neural network for soft sensing in sewage disposal system

    NASA Astrophysics Data System (ADS)

    Liu, Zaiwen; Xue, Hong; Wang, Xiaoyi; Yang, Bin; Lu, Siying

    2006-11-01

    A new soft-sensing method based on a process neural network (PNN) for a sewage disposal system is presented in this paper. The PNN is an extension of the traditional neural network in which the inputs and outputs are time-varying. An aggregation operator is introduced into the process neuron, which gives the network the ability to handle information in the two dimensions of space and time simultaneously, so the data-processing machinery of the biological neuron is imitated better than by the traditional neuron. A process neural network with a three-layer structure, in which the hidden layer consists of process neurons and the input and output layers consist of common neurons, is discussed for soft sensing. The intelligent soft sensing based on the PNN may be used to measure the effluent BOD (Biochemical Oxygen Demand) of the sewage disposal system, and a good training result for the soft sensing was obtained by this method.

  4. An efficient optical architecture for sparsely connected neural networks

    NASA Technical Reports Server (NTRS)

    Hine, Butler P., III; Downie, John D.; Reid, Max B.

    1990-01-01

    An architecture for a general-purpose optical neural network processor is presented in which the interconnections and weights are formed by directing coherent beams holographically, thereby making use of the space-bandwidth products of the recording medium for sparsely interconnected networks more efficiently than the commonly used vector-matrix multiplier, since all of the hologram area is in use. An investigation is made of the use of computer-generated holograms recorded on such updatable media as thermoplastic materials in order to define the interconnections and weights of a neural network processor; attention is given to the limits on interconnection densities, diffraction efficiencies, and weighting accuracies possible with such an updatable thin-film holographic device.

  5. Application of power spectrum, cepstrum, higher order spectrum and neural network analyses for induction motor fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liang, B.; Iwnicki, S. D.; Zhao, Y.

    2013-08-01

    The power spectrum is defined as the square of the magnitude of the Fourier transform (FT) of a signal. The advantage of FT analysis is that it allows the decomposition of a signal into individual periodic frequency components and establishes the relative intensity of each component. It is the most commonly used signal processing technique today. If the same principle is applied for the detection of periodic components in a Fourier spectrum, the process is called cepstrum analysis. Cepstrum analysis is a very useful tool for detecting families of harmonics with uniform spacing or the families of sidebands commonly found in gearbox, bearing and engine vibration fault spectra. Higher order spectra (HOS) (also known as polyspectra) consist of higher-order moments of spectra which are able to detect non-linear interactions between frequency components. Of the HOS, the most commonly used is the bispectrum. The bispectrum is the third-order frequency domain measure, which contains information that standard power spectral analysis techniques cannot provide. It is well known that neural networks can represent complex non-linear relationships, and therefore they are extremely useful for fault identification and classification. This paper presents an application of power spectrum, cepstrum, bispectrum and neural network analyses for fault pattern extraction of induction motors. The potential for using the power spectrum, cepstrum, bispectrum and neural network as a means for differentiating between healthy and faulty induction motor operation is examined. A series of experiments is conducted, and the advantages and disadvantages of each method are discussed. It has been found that a combination of power spectrum, cepstrum and bispectrum analyses plus a neural network could be a very useful tool for condition monitoring and fault diagnosis of induction motors.
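
    A short sketch of the power spectrum and cepstrum computations described above, applied to a toy signal containing a 50 Hz harmonic family:

    ```python
    # Power spectrum (squared FFT magnitude) and cepstrum (spectrum of the log spectrum).
    import numpy as np

    fs = 1000.0                                     # sampling rate, Hz
    t = np.arange(0, 1.0, 1.0 / fs)
    rng = np.random.default_rng(0)
    signal = sum(np.sin(2 * np.pi * 50 * k * t) / k for k in range(1, 5))   # 50 Hz family
    signal = signal + rng.normal(0, 0.1, t.size)

    power = np.abs(np.fft.rfft(signal)) ** 2                   # power spectrum
    cepstrum = np.abs(np.fft.irfft(np.log(power + 1e-12)))     # cepstrum

    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    quefrency = np.arange(cepstrum.size) / fs
    print("strongest spectral line: %.0f Hz" % freqs[np.argmax(power[1:]) + 1])
    peak = np.argmax(cepstrum[10:cepstrum.size // 2]) + 10     # skip the low-quefrency envelope
    print("strongest cepstral peak: %.3f s" % quefrency[peak])
    ```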

  6. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    PubMed

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It therefore naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its application to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order-stability and fractional-order-sensitivity characteristics.

  7. Recall Performance for Content-Addressable Memory Using Adiabatic Quantum Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Humble, Travis S.; McCaskey, Alex

    A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps the recurrent neural network to a commercially available quantum processing unit by taking advantage of the common underlying Ising spin model. We then assess the accuracy of the quantum processor to store key-value associations by quantifying recall performance against an ensemble of problem sets. We observe that different learning rules from the neural network community influence recall accuracy but performance appears to be limited by potential noise in the processor. The strong connection established between quantum processors and neural network problems supports the growing intersection of these two ideas.
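
    The recurrent-network CAM that the authors map onto the quantum processor builds on the classical Hopfield/Ising formulation; the numpy sketch below shows that classical baseline (Hebbian storage and energy-descending recall), with the pattern size and probe corruption chosen purely for illustration.

        import numpy as np

        def store(patterns):
            # Hebbian weight matrix (Ising couplings) for a Hopfield-style CAM.
            P = np.array(patterns)                 # rows are +/-1 patterns
            W = P.T @ P / P.shape[1]
            np.fill_diagonal(W, 0.0)
            return W

        def recall(W, probe, sweeps=20):
            # Asynchronous updates that lower the Ising energy -1/2 s^T W s.
            s = probe.copy()
            for _ in range(sweeps):
                for i in np.random.permutation(len(s)):
                    s[i] = 1 if W[i] @ s >= 0 else -1
            return s

        # Toy usage: recover a stored association from a corrupted probe.
        key = np.array([1, -1, 1, 1, -1, -1, 1, -1])
        W = store([key])
        noisy = key.copy()
        noisy[:2] *= -1
        print(np.array_equal(recall(W, noisy), key))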

  8. Distinctive and common neural underpinnings of major depression, social anxiety, and their comorbidity

    PubMed Central

    Chen, Michael C.; Waugh, Christian E.; Joormann, Jutta; Gotlib, Ian H.

    2015-01-01

    Assessing neural commonalities and differences among depression, anxiety and their comorbidity is critical in developing a more integrative clinical neuroscience and in evaluating currently debated categorical vs dimensional approaches to psychiatric classification. Therefore, in this study, we sought to identify patterns of anomalous neural responding to criticism and praise that are specific to and common among major depressive disorder (MDD), social anxiety disorder (SAD) and comorbid MDD-SAD. Adult females who met formal diagnostic criteria for MDD, SAD or MDD-SAD and psychiatrically healthy participants underwent functional magnetic resonance imaging as they listened to statements directing praise or criticism at them or at another person. MDD groups showed reduced responding to praise across a distributed cortical network, an effect potentially mediated by thalamic nuclei undergirding arousal-mediated attention. SAD groups showed heightened anterior insula and decreased default-mode network response to criticism. The MDD-SAD group uniquely showed reduced responding to praise in the dorsal anterior cingulate cortex. Finally, all groups with psychopathology showed heightened response to criticism in a region of the superior frontal gyrus implicated in attentional gating. The present results suggest novel neural models of anhedonia in MDD, vigilance-withdrawal behaviors in SAD, and poorer outcome in MDD-SAD. Importantly, in identifying unique and common neural substrates of MDD and SAD, these results support a formulation in which common neural components represent general risk factors for psychopathology that, due to factors that are present at illness onset, lead to distinct forms of psychopathology with unique neural signatures. PMID:25038225

  9. Neural networks and traditional time series methods: a synergistic combination in state economic forecasts.

    PubMed

    Hansen, J V; Nelson, R D

    1997-01-01

    Ever since the initial planning for the 1997 Utah legislative session, neural-network forecasting techniques have provided valuable insights for analysts forecasting tax revenues. These revenue estimates are critically important since agency budgets, support for education, and improvements to infrastructure all depend on their accuracy. Underforecasting generates windfalls that concern taxpayers, whereas overforecasting produces budget shortfalls that cause inadequately funded commitments. The pattern-finding ability of neural networks gives insightful and alternative views of the seasonal and cyclical components commonly found in economic time series data. Two applications of neural networks to revenue forecasting clearly demonstrate how these models complement traditional time series techniques. In the first, preoccupation with a potential downturn in the economy distracts analysis based on traditional time series methods so that it overlooks an emerging new phenomenon in the data. In this case, neural networks identify the new pattern, which then allows modification of the time series models and finally gives more accurate forecasts. In the second application, data structure found by traditional statistical tools allows analysts to provide neural networks with important information that the networks then use to create more accurate models. In summary, for the Utah revenue outlook, the insights that result from a portfolio of forecasts that includes neural networks exceed the understanding generated from strictly statistical forecasting techniques. In this case, the synergy clearly results in the whole of the portfolio of forecasts being more accurate than the sum of the individual parts.

  10. A multivariate extension of mutual information for growing neural networks.

    PubMed

    Ball, Kenneth R; Grant, Christopher; Mundy, William R; Shafer, Timothy J

    2017-11-01

    Recordings of neural network activity in vitro are increasingly being used to assess the development of neural network activity and the effects of drugs, chemicals and disease states on neural network function. The high-content nature of the data derived from such recordings can be used to infer effects of compounds or disease states on a variety of important neural functions, including network synchrony. Historically, synchrony of networks in vitro has been assessed by determination of correlation coefficients (e.g. Pearson's correlation), by statistics estimated from cross-correlation histograms between pairs of active electrodes, and/or by pairwise mutual information and related measures. The present study examines the application of Normalized Multiinformation (NMI) as a scalar measure of shared information content in a multivariate network that is robust with respect to changes in network size. Theoretical simulations are designed to investigate NMI as a measure of complexity and synchrony in a developing network relative to several alternative approaches. The NMI approach is applied to these simulations and also to data collected during exposure of in vitro neural networks to neuroactive compounds during the first 12 days in vitro, and compared to other common measures, including correlation coefficients and mean firing rates of neurons. NMI is shown to be more sensitive to developmental effects than first-order synchronous and nonsynchronous measures of network complexity. Finally, NMI is a scalar measure of global (rather than pairwise) mutual information in a multivariate network, and hence relies on fewer assumptions for cross-network comparisons than historical approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
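
    A minimal Python sketch of the underlying quantity is given below: the total correlation (multiinformation) of discretized activity, divided here by the sum of marginal entropies as one plausible normalization. The paper's exact estimator and normalization may differ, and the plug-in entropies and synthetic spike counts are assumptions for illustration only.

        import numpy as np
        from collections import Counter

        def entropy_bits(samples):
            # Plug-in Shannon entropy (bits); rows of `samples` are observations.
            counts = Counter(map(tuple, np.atleast_2d(samples)))
            p = np.array(list(counts.values()), dtype=float)
            p /= p.sum()
            return -np.sum(p * np.log2(p))

        def normalized_multiinformation(X):
            # Total correlation sum_i H(X_i) - H(X_1..X_n), normalized here by the
            # sum of marginal entropies (an assumed normalization).
            X = np.asarray(X)
            marginals = sum(entropy_bits(X[:, [i]]) for i in range(X.shape[1]))
            joint = entropy_bits(X)
            return (marginals - joint) / marginals if marginals > 0 else 0.0

        # Toy usage: binarized spike counts from 3 electrodes over 1000 time bins,
        # two of which carry identical (fully synchronous) activity.
        rng = np.random.default_rng(0)
        shared = rng.integers(0, 2, 1000)
        X = np.column_stack([shared, shared, rng.integers(0, 2, 1000)])
        print(normalized_multiinformation(X))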

  11. The Reference Ability Neural Network Study: Life-time stability of reference-ability neural networks derived from task maps of young adults.

    PubMed

    Habeck, C; Gazes, Y; Razlighi, Q; Steffener, J; Brickman, A; Barulli, D; Salthouse, T; Stern, Y

    2016-01-15

    Analyses of large test batteries administered to individuals ranging from young to old have consistently yielded a set of latent variables representing reference abilities (RAs) that capture the majority of the variance in age-related cognitive change: Episodic Memory, Fluid Reasoning, Perceptual Processing Speed, and Vocabulary. In a previous paper (Stern et al., 2014), we introduced the Reference Ability Neural Network Study, which administers 12 cognitive neuroimaging tasks (3 for each RA) to healthy adults age 20-80 in order to derive unique neural networks underlying these 4 RAs and investigate how these networks may be affected by aging. We used a multivariate approach, linear indicator regression, to derive a unique covariance pattern or Reference Ability Neural Network (RANN) for each of the 4 RAs. The RANNs were derived from the neural task data of 64 younger adults of age 30 and below. We then prospectively applied the RANNs to fMRI data from the remaining sample of 227 adults of age 31 and above in order to classify each subject-task map into one of the 4 possible reference domains. Overall classification accuracy across subjects in the sample age 31 and above was 0.80±0.18. Classification accuracy by RA domain was also good, but variable; memory: 0.72±0.32; reasoning: 0.75±0.35; speed: 0.79±0.31; vocabulary: 0.94±0.16. Classification accuracy was not associated with cross-sectional age, suggesting that these networks, and their specificity to the respective reference domain, might remain intact throughout the age range. Higher mean brain volume was correlated with increased overall classification accuracy; better overall performance on the tasks in the scanner was also associated with classification accuracy. For the RANN network scores, we observed for each RANN that a higher score was associated with a higher corresponding classification accuracy for that reference ability. Despite the absence of behavioral performance information in the derivation of these networks, we also observed some brain-behavioral correlations, notably for the fluid-reasoning network whose network score correlated with performance on the memory and fluid-reasoning tasks. While age did not influence the expression of this RANN, the slope of the association between network score and fluid-reasoning performance was negatively associated with higher ages. These results provide support for the hypothesis that a set of specific, age-invariant neural networks underlies these four RAs, and that these networks maintain their cognitive specificity and level of intensity across age. Activation common to all 12 tasks was identified as another activation pattern resulting from a mean-contrast Partial-Least-Squares technique. This common pattern did show associations with age and some subject demographics for some of the reference domains, lending support to the overall conclusion that aspects of neural processing that are specific to any cognitive reference ability stay constant across age, while aspects that are common to all reference abilities differ across age. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks

    PubMed Central

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher-order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher-order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of higher-order and feedforward neural networks used as benchmark techniques. PMID:25157950

  13. Predicting physical time series using dynamic ridge polynomial neural networks.

    PubMed

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher-order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher-order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of higher-order and feedforward neural networks used as benchmark techniques.
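
    For orientation, the following Python sketch shows the forward pass of a pi-sigma style ridge polynomial unit with a simple output feedback term, which is the general flavour of the architecture described above; the block structure, feedback weighting and tanh output are assumptions made for illustration and do not reproduce the authors' exact network.

        import numpy as np

        def rpnn_forward(x, blocks, y_prev=0.0, feedback_w=0.5):
            # The k-th pi-sigma block multiplies k linear ("ridge") terms of the
            # feedback-augmented input; block outputs are summed and squashed.
            z = np.append(x, feedback_w * y_prev)
            total = 0.0
            for block in blocks:                    # block k holds k (w, b) pairs
                total += np.prod([w @ z + b for w, b in block])
            return np.tanh(total)

        # Toy usage: three past samples of a series, blocks of order 1 and 2.
        rng = np.random.default_rng(1)
        dim = 4                                     # 3 inputs + 1 feedback term
        blocks = [[(rng.normal(size=dim), 0.0)],
                  [(rng.normal(size=dim), 0.0), (rng.normal(size=dim), 0.0)]]
        print(rpnn_forward(np.array([0.2, 0.1, -0.3]), blocks))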

  14. Pharmacological Tools to Study the Role of Astrocytes in Neural Network Functions.

    PubMed

    Peña-Ortega, Fernando; Rivera-Angulo, Ana Julia; Lorea-Hernández, Jonathan Julio

    2016-01-01

    Although astrocytes and microglia do not communicate by electrical impulses, they can communicate efficiently among themselves and with neurons, participating in complex neural functions that require broad cell-to-cell communication and long-lasting regulation of brain function. Glial cells express many receptors in common with neurons and secrete gliotransmitters as well as neurotrophic and neuroinflammatory factors, which allow them to modulate synaptic transmission and neural excitability. All these properties allow glial cells to influence the activity of neuronal networks. Thus, the incorporation of glial cell function into the understanding of nervous system dynamics will provide a more accurate view of brain function. Our current knowledge of glial cell biology is providing us with experimental tools to explore their participation in neural network modulation. In this chapter, we review some of the classical, as well as some recent, pharmacological tools developed for the study of astrocytes' influence on neural function. We also provide some examples of the use of these pharmacological agents to understand the role of astrocytes in neural network function and dysfunction.

  15. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To address these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter αk, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic data and field data suggests that the proposed method suppresses the noise in the neural network training stage and enhances generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as the conventional least squares inversion.

  16. RBF neural network based PI pitch controller for a class of 5-MW wind turbines using particle swarm optimization algorithm.

    PubMed

    Poultangari, Iman; Shahnazi, Reza; Sheikhan, Mansour

    2012-09-01

    In order to control the pitch angle of the blades in wind turbines, the proportional-integral (PI) controller is commonly employed because of its simplicity and industrial usability. Neural networks and evolutionary algorithms are tools that provide a suitable means of determining the optimal PI gains. In this paper, a radial basis function (RBF) neural network based PI controller is proposed for collective pitch control (CPC) of a 5-MW wind turbine. In order to provide an optimal dataset to train the RBF neural network, the particle swarm optimization (PSO) evolutionary algorithm is used. The proposed method does not require detailed knowledge of the complexities, nonlinearities and uncertainties of the system under control. The simulation results show that the proposed controller has satisfactory performance. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
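
    The gain-tuning idea can be sketched with a plain particle swarm search over the two PI gains; in the code below a simple quadratic stands in for the integral-of-squared-error cost that would normally be evaluated on a turbine simulation, and all swarm parameters are illustrative assumptions.

        import numpy as np

        def pso_tune_pi(cost, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
            # Standard PSO over (Kp, Ki): track personal and global bests and move
            # particles toward them with inertia w and attraction weights c1, c2.
            rng = np.random.default_rng(0)
            lo, hi = np.array(bounds).T
            pos = rng.uniform(lo, hi, size=(n_particles, 2))
            vel = np.zeros_like(pos)
            pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
            gbest = pbest[pbest_cost.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, 1))
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                costs = np.array([cost(p) for p in pos])
                better = costs < pbest_cost
                pbest[better], pbest_cost[better] = pos[better], costs[better]
                gbest = pbest[pbest_cost.argmin()].copy()
            return gbest

        # Toy cost with a hypothetical optimum at Kp = 2.0, Ki = 0.5.
        print(pso_tune_pi(lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2,
                          bounds=[(0.0, 10.0), (0.0, 5.0)]))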

  17. Increasingly complex representations of natural movies across the dorsal stream are shared between subjects.

    PubMed

    Güçlü, Umut; van Gerven, Marcel A J

    2017-01-15

    Recently, deep neural networks (DNNs) have been shown to provide accurate predictions of neural responses across the ventral visual pathway. We here explore whether they also provide accurate predictions of neural responses across the dorsal visual pathway, which is thought to be devoted to motion processing and action recognition. This is achieved by training deep neural networks to recognize actions in videos and subsequently using them to predict neural responses while subjects are watching natural movies. Moreover, we explore whether dorsal stream representations are shared between subjects. In order to address this question, we examine whether individual subject predictions can be made in a common representational space estimated via hyperalignment. Results show that a DNN trained for action recognition can be used to accurately predict how the dorsal stream responds to natural movies, revealing a correspondence between representations of DNN layers and dorsal stream areas. It is also demonstrated that models operating in a common representational space can generalize to responses of multiple or even unseen individual subjects to novel spatio-temporal stimuli in both encoding and decoding settings, suggesting that a common representational space underlies dorsal stream responses across multiple subjects. Copyright © 2015 Elsevier Inc. All rights reserved.
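
    The encoding-model step, predicting voxel responses from DNN-layer features, is commonly implemented as regularized linear regression; the sketch below uses scikit-learn ridge regression on synthetic stand-ins for the DNN activations and fMRI responses, so the feature extraction, hyperalignment and data dimensions are all assumptions.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import train_test_split

        # Hypothetical DNN-layer activations for 1000 movie segments (4096 features)
        # and responses of 500 dorsal-stream voxels (synthetic placeholders).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 4096))
        Y = X[:, :500] @ rng.normal(scale=0.05, size=(500, 500)) + rng.normal(size=(1000, 500))

        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
        encoder = Ridge(alpha=100.0).fit(X_tr, Y_tr)     # one linear model per voxel
        pred = encoder.predict(X_te)

        # Prediction accuracy per voxel: correlation of predicted vs. measured response.
        r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
        print(np.mean(r))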

  18. Construction of a pulse-coupled dipole network capable of fear-like and relief-like responses

    NASA Astrophysics Data System (ADS)

    Lungsi Sharma, B.

    2016-07-01

    The challenge for neuroscience as an interdisciplinary programme is the integration of ideas among the disciplines to achieve a common goal. This paper deals with the problem of deriving a pulse-coupled neural network that is capable of demonstrating behavioural responses (fear-like and relief-like). Current pulse-coupled neural networks are designed mostly for engineering applications, particularly image processing. The network presented here was constructed using the method-of-minimal-anatomies approach. The behavioural response of a level-coded, activity-based model was used as a reference. Although the spiking-based model and the activity-based model operate at different scales, the use of the model-reference principle means that what is referenced is the model's functional properties. It is demonstrated that this strategy of dissection and systematic construction is effective in the functional design of pulse-coupled neural network systems with nonlinear signalling. The differential equations for the elastic weights in the reference model are replicated geometrically in the pulse-coupled network. The network reflects a possible solution to the problem of punishment and avoidance. The network developed in this work is a new network topology for pulse-coupled neural networks. Therefore, the model-reference principle is a powerful tool for connecting neuroscience disciplines. The continuity of concepts and phenomena is further maintained by systematic construction using methods like the method of minimal anatomies.

  19. Distinctive and common neural underpinnings of major depression, social anxiety, and their comorbidity.

    PubMed

    Hamilton, J Paul; Chen, Michael C; Waugh, Christian E; Joormann, Jutta; Gotlib, Ian H

    2015-04-01

    Assessing neural commonalities and differences among depression, anxiety and their comorbidity is critical in developing a more integrative clinical neuroscience and in evaluating currently debated categorical vs dimensional approaches to psychiatric classification. Therefore, in this study, we sought to identify patterns of anomalous neural responding to criticism and praise that are specific to and common among major depressive disorder (MDD), social anxiety disorder (SAD) and comorbid MDD-SAD. Adult females who met formal diagnostic criteria for MDD, SAD or MDD-SAD and psychiatrically healthy participants underwent functional magnetic resonance imaging as they listened to statements directing praise or criticism at them or at another person. MDD groups showed reduced responding to praise across a distributed cortical network, an effect potentially mediated by thalamic nuclei undergirding arousal-mediated attention. SAD groups showed heightened anterior insula and decreased default-mode network response to criticism. The MDD-SAD group uniquely showed reduced responding to praise in the dorsal anterior cingulate cortex. Finally, all groups with psychopathology showed heightened response to criticism in a region of the superior frontal gyrus implicated in attentional gating. The present results suggest novel neural models of anhedonia in MDD, vigilance-withdrawal behaviors in SAD, and poorer outcome in MDD-SAD. Importantly, in identifying unique and common neural substrates of MDD and SAD, these results support a formulation in which common neural components represent general risk factors for psychopathology that, due to factors that are present at illness onset, lead to distinct forms of psychopathology with unique neural signatures. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  20. A review and analysis of neural networks for classification of remotely sensed multispectral imagery

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1993-01-01

    A literature survey and analysis of the use of neural networks for the classification of remotely sensed multispectral imagery is presented. As part of a brief mathematical review, the backpropagation algorithm, which is the most common method of training multi-layer networks, is discussed with an emphasis on its application to pattern recognition. The analysis is divided into five aspects of neural network classification: (1) input data preprocessing, structure, and encoding; (2) output encoding and extraction of classes; (3) network architecture; (4) training algorithms; and (5) comparisons to conventional classifiers. The advantages of the neural network method over traditional classifiers are its non-parametric nature, arbitrary decision boundary capabilities, easy adaptation to different types of data and input structures, fuzzy output values that can enhance classification, and good generalization for use with multiple images. The disadvantages of the method are slow training time, inconsistent results due to random initial weights, and the requirement of obscure initialization values (e.g., learning rate and hidden layer size). Possible techniques for ameliorating these problems are discussed. It is concluded that, although the neural network method has several unique capabilities, it will become a useful tool in remote sensing only if it is made faster, more predictable, and easier to use.

  1. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.

    PubMed

    Orhan, A Emin; Ma, Wei Ji

    2017-07-26

    Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.

  2. Common neural correlates of intertemporal choices and intelligence in adolescents.

    PubMed

    Ripke, Stephan; Hübner, Thomas; Mennigen, Eva; Müller, Kathrin U; Li, Shu-Chen; Smolka, Michael N

    2015-02-01

    Converging behavioral evidence indicates that temporal discounting, measured by intertemporal choice tasks, is inversely related to intelligence. At the neural level, the parieto-frontal network is pivotal for complex, higher-order cognitive processes. Relatedly, underrecruitment of the pFC during a working memory task has been found to be associated with steeper temporal discounting. Furthermore, this network has also been shown to be related to the consistency of intertemporal choices. Here we report an fMRI study that directly investigated the association of neural correlates of intertemporal choice behavior with intelligence in an adolescent sample (n = 206; age 13.7-15.5 years). After identifying brain regions where the BOLD response during intertemporal choice was correlated with individual differences in intelligence, we further tested whether BOLD responses in these areas would mediate the associations between intelligence, the discounting rate, and choice consistency. We found positive correlations between BOLD response in a value-independent decision network (i.e., dorsolateral pFC, precuneus, and occipital areas) and intelligence. Furthermore, BOLD response in a value-dependent decision network (i.e., perigenual ACC, inferior frontal gyrus, ventromedial pFC, ventral striatum) was positively correlated with intelligence. The mediation analysis revealed that BOLD responses in the value-independent network mediated the association between intelligence and choice consistency, whereas BOLD responses in the value-dependent network mediated the association between intelligence and the discounting rate. In summary, our findings provide evidence for common neural correlates of intertemporal choice and intelligence, possibly linked by valuation as well as executive functions.

  3. Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling.

    PubMed

    Lu, Wenlian; Zheng, Ren; Chen, Tianping

    2016-03-01

    In this paper, we discuss outer-synchronization of the asymmetrically connected recurrent time-varying neural networks. By using both centralized and decentralized discretization data sampling principles, we derive several sufficient conditions based on three vector norms to guarantee that the difference of any two trajectories starting from different initial values of the neural network converges to zero. The lower bounds of the common time intervals between data samples in centralized and decentralized principles are proved to be positive, which guarantees exclusion of Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. A hybrid modeling approach for option pricing

    NASA Astrophysics Data System (ADS)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations, one of which is the controversial assumption that the underlying probability distribution is lognormal. We propose a pair of hybrid models to reduce these limitations and enhance option pricing ability. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. We then develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform the Black-Scholes model. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
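
    The workflow can be sketched as follows: a GARCH(1,1)-style volatility estimate is combined with moneyness and maturity as inputs to a small neural network regressor. In the sketch the GARCH parameters are fixed rather than estimated, and the "market prices" are synthetic placeholders, so nothing here reproduces the paper's actual data or models.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def garch11_volatility(returns, omega=1e-6, alpha=0.08, beta=0.9):
            # GARCH(1,1) conditional-variance recursion with assumed parameters;
            # in practice the parameters come from maximum-likelihood estimation.
            var = np.empty_like(returns)
            var[0] = returns.var()
            for t in range(1, len(returns)):
                var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]
            return np.sqrt(var)

        rng = np.random.default_rng(0)
        vol = garch11_volatility(rng.normal(scale=0.01, size=500))[-1]

        # Features: moneyness S/K, time to maturity, volatility; synthetic targets.
        X = np.column_stack([rng.uniform(0.8, 1.2, 200),
                             rng.uniform(0.05, 1.0, 200),
                             np.full(200, vol)])
        y = np.maximum(X[:, 0] - 1.0, 0.0) + 0.1 * X[:, 2]   # stand-in call prices
        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)
        print(model.predict(X[:3]))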

  5. Injured Brains and Adaptive Networks: The Benefits and Costs of Hyperconnectivity.

    PubMed

    Hillary, Frank G; Grafman, Jordan H

    2017-05-01

    A common finding in human functional brain-imaging studies is that damage to neural systems paradoxically results in enhanced functional connectivity between network regions, a phenomenon commonly referred to as 'hyperconnectivity'. Here, we describe the various ways that hyperconnectivity operates to benefit a neural network following injury while simultaneously negotiating the trade-off between metabolic cost and communication efficiency. Hyperconnectivity may be optimally expressed by increasing connections through the most central and metabolically efficient regions (i.e., hubs). While adaptive in the short term, we propose that chronic hyperconnectivity may leave network hubs vulnerable to secondary pathological processes over the life span due to chronically elevated metabolic stress. We conclude by offering novel, testable hypotheses for advancing our understanding of the role of hyperconnectivity in systems-level brain plasticity in neurological disorders. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Noise in genetic and neural networks

    NASA Astrophysics Data System (ADS)

    Swain, Peter S.; Longtin, André

    2006-06-01

    Both neural and genetic networks are significantly noisy, and stochastic effects in both cases ultimately arise from molecular events. Nevertheless, a gulf exists between the two fields, with researchers in one often being unaware of similar work in the other. In this Special Issue, we focus on bridging this gap and present a collection of papers from both fields together. For each field, the networks studied range from just a single gene or neuron to endogenous networks. In this introductory article, we describe the sources of noise in both genetic and neural systems. We discuss the modeling techniques in each area and point out similarities. We hope that, by reading both sets of papers, ideas developed in one field will give insight to scientists from the other and that a common language and methodology will develop.

  7. Overview of artificial neural networks.

    PubMed

    Zou, Jinming; Han, Yi; So, Sung-Sau

    2008-01-01

    The artificial neural network (ANN), or simply neural network, is a machine learning method evolved from the idea of simulating the human brain. The data explosion in modern drug discovery research requires sophisticated analysis methods to uncover the hidden causal relationships between single or multiple responses and a large set of properties. The ANN is one of many versatile tools to meet the demand in drug discovery modeling. Compared to a traditional regression approach, the ANN is capable of modeling complex nonlinear relationships. The ANN also has excellent fault tolerance and is fast and highly scalable with parallel processing. This chapter introduces the background of ANN development and outlines the basic concepts crucially important for understanding more sophisticated ANNs. Several commonly used learning methods and network setups are discussed briefly at the end of the chapter.

  8. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language.

    PubMed

    Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory

    2008-12-01

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  9. Statistical methods and neural network approaches for classification of data from multiple sources

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon Atli; Swain, Philip H.

    1990-01-01

    Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A general problem with using conventional multivariate statistical approaches to classify data of multiple types is that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability, but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution-free, since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem of how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.

  10. Using Artificial Neural Networks to Predict the Presence of Overpressured Zones in the Anadarko Basin, Oklahoma

    NASA Astrophysics Data System (ADS)

    Cranganu, Constantin

    2007-10-01

    Many sedimentary basins throughout the world exhibit areas with abnormal pore-fluid pressures (higher or lower than normal or hydrostatic pressure). Predicting pore pressure and other parameters (depth, extension, magnitude, etc.) in such areas is a challenging task. The compressional acoustic (sonic) log (DT) is often used as a predictor because it responds to changes in porosity or compaction produced by abnormal pore-fluid pressures. Unfortunately, the sonic log is not commonly recorded in most oil and/or gas wells. We propose using an artificial neural network to synthesize sonic logs by identifying the mathematical dependency between DT and the commonly available logs, such as normalized gamma ray (GR) and deep resistivity logs (REID). The artificial neural network process can be divided into three steps: (1) supervised training of the neural network; (2) confirmation and validation of the model by blind-testing the results in wells that contain both the predictor (GR, REID) and the target values (DT) used in the supervised training; and (3) applying the predictive model to all wells containing the required predictor data and verifying the accuracy of the synthetic DT data by comparing the back-predicted synthetic predictor curves (GRNN, REIDNN) to the recorded predictor curves used in training (GR, REID). Artificial neural networks offer significant advantages over traditional deterministic methods. They do not require a precise mathematical model equation that describes the dependency between the predictor values and the target values and, unlike linear regression techniques, neural network methods do not overpredict mean values and thereby preserve original data variability. One of their most important advantages is that their predictions can be validated and confirmed through back-prediction of the input data. This procedure was applied to predict the presence of overpressured zones in the Anadarko Basin, Oklahoma. The results are promising and encouraging.
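
    A minimal version of the synthesis step, regressing DT on GR and the log of deep resistivity with a small multilayer perceptron, is sketched below with scikit-learn; the well-log values are synthetic placeholders and the network size is an arbitrary choice, so this only illustrates the general workflow, not the authors' trained model.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Hypothetical well-log samples; real training wells record GR, REID and DT.
        rng = np.random.default_rng(0)
        GR = rng.uniform(20, 150, 2000)                      # gamma ray (API)
        REID = 10 ** rng.uniform(0, 2, 2000)                 # deep resistivity (ohm-m)
        DT = 140 - 0.4 * GR - 8 * np.log10(REID) + rng.normal(0, 2, 2000)  # synthetic sonic

        X = np.column_stack([GR, np.log10(REID)])
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000))
        model.fit(X[:1500], DT[:1500])
        # Blind-test check on held-out samples, analogous to step (2) above.
        print(np.corrcoef(model.predict(X[1500:]), DT[1500:])[0, 1])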

  11. Neural network simulation of the atmospheric point spread function for the adjacency effect research

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoshan; Wang, Haidong; Li, Ligang; Yang, Zhen; Meng, Xin

    2016-10-01

    The adjacency effect can be regarded as the convolution of the atmospheric point spread function (PSF) with the surface-leaving radiance. Monte Carlo simulation is a common method for computing the atmospheric PSF, but it does not yield an analytic expression, and meaningful results can only be acquired by statistical analysis of millions of simulated samples. A backward Monte Carlo algorithm was employed to simulate photon emission and propagation in the atmosphere under different conditions. The PSF was determined by recording the number of photons received in fixed bins at different positions. A multilayer feed-forward neural network with a single hidden layer was designed to learn the relationship between the PSFs and the input condition parameters. The neural network used the back-propagation learning rule for training. Its input parameters included the atmospheric condition, spectral range and observing geometry. The outputs of the network were the photon counts received in the corresponding bins. Because there were too many output units for a single network, the large network was divided into a collection of smaller ones. These small networks can be run simultaneously on many workstations and/or PCs to speed up training. It is important to note that the PSFs simulated by the Monte Carlo technique at non-nadir viewing angles are more complicated than those at nadir, which brings difficulties in the design of the neural network. The results obtained show that the neural network approach could be very useful for computing the atmospheric PSF based on the simulated data generated by the Monte Carlo method.

  12. On the complexity of neural network classifiers: a comparison between shallow and deep architectures.

    PubMed

    Bianchini, Monica; Scarselli, Franco

    2014-08-01

    Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to study how the depth of feedforward neural networks impacts their ability to implement high-complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared by deriving upper and lower bounds on their complexity, and by studying how the complexity depends on the number of hidden units and the activation function used. The obtained results seem to support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.

  13. Sentiment analysis: a comparison of deep learning neural network algorithm with SVM and naïve Bayes for Indonesian text

    NASA Astrophysics Data System (ADS)

    Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia

    2018-03-01

    Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of deeper Artificial Neural Networks (ANNs) that use more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns from various data types such as pictures, audio, text, and many more. In this paper, the authors try to measure this algorithm's ability by applying it to text classification. The classification task herein considers the sentiment content of a text, which is also called sentiment analysis. By using several combinations of text preprocessing and feature extraction techniques, we aim to compare the modelling results of the Deep Learning Neural Network with two other commonly used algorithms, Naïve Bayes and the Support Vector Machine (SVM). This algorithm comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms Naïve Bayes and SVM and offers a better F1 score, while the feature extraction technique that most improves the modelling result is the bigram.
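
    The comparison setup can be sketched with scikit-learn pipelines sharing the same unigram-plus-bigram TF-IDF features; the tiny Indonesian-like corpus and the particular classifier settings below are placeholders for illustration and do not reflect the paper's dataset, preprocessing or deep learning architecture.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Placeholder corpus; the study uses a much larger Indonesian text dataset.
        texts = ["pelayanan sangat bagus", "produk buruk sekali", "saya suka sekali",
                 "sangat mengecewakan", "kualitas bagus", "tidak akan beli lagi"]
        labels = [1, 0, 1, 0, 1, 0]

        for name, clf in [("Naive Bayes", MultinomialNB()),
                          ("SVM", LinearSVC()),
                          ("Neural network", MLPClassifier(hidden_layer_sizes=(32,),
                                                           max_iter=1000))]:
            pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
            scores = cross_val_score(pipe, texts, labels, cv=3, scoring="f1")
            print(name, scores.mean())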

  14. Neural network method for lossless two-conductor transmission line equations based on the IELM algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yunlei; Hou, Muzhou; Luo, Jianshu; Liu, Taohua

    2018-06-01

    With increasing demands for vast amounts of data and high-speed signal transmission, the use of multi-conductor transmission lines is becoming more common. The impact of transmission lines on signal transmission is thus a key issue affecting the performance of high-speed digital systems. To solve the problem of lossless two-conductor transmission line equations (LTTLEs), a neural network model and algorithm are explored in this paper. By selecting the product of two triangular basis functions as the activation function of the hidden-layer neurons, we can guarantee the separation of time and space as well as phase orthogonality. By adding the initial condition to the neural network, an improved extreme learning machine (IELM) algorithm for solving the network weights is obtained. This differs from the traditional method of converting the initial condition into an iterative constraint condition. Calculation software for solving the LTTLEs based on the IELM algorithm is developed. Numerical experiments show that the results are consistent with those of the traditional method. The proposed neural network algorithm can find the terminal voltage of the transmission line as well as the voltage at any observation point. It is possible to calculate the value at any given point by using the neural network model to solve the transmission line equation.
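
    The core of an extreme learning machine is easy to sketch: hidden-layer weights are drawn at random and only the output weights are solved, in one shot, by least squares. The sketch below uses a tanh hidden layer and a synthetic travelling-wave target; the paper's IELM additionally encodes the initial condition and uses products of triangular basis functions, neither of which is reproduced here.

        import numpy as np

        def elm_fit(X, y, n_hidden=50, seed=0):
            # Random input weights and biases; output weights beta by least squares.
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        # Toy usage: approximate a travelling-wave-like function of (position, time).
        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, size=(500, 2))                # columns: x (space), t (time)
        y = np.sin(2 * np.pi * (X[:, 0] - 0.5 * X[:, 1]))   # stand-in for a line voltage
        params = elm_fit(X, y)
        print(np.abs(elm_predict(X, *params) - y).max())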

  15. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation Technology. Classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such a method, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This shows that object-oriented methods can improve classification accuracy, since they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion (data-level fusion, feature-level fusion and decision-level fusion) are used for HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification. In order to advance the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.

  16. A common brain network among state, trait, and pathological anxiety from whole-brain functional connectivity.

    PubMed

    Takagi, Yu; Sakai, Yuki; Abe, Yoshinari; Nishida, Seiji; Harrison, Ben J; Martínez-Zalacaín, Ignacio; Soriano-Mas, Carles; Narumoto, Jin; Tanaka, Saori C

    2018-05-15

    Anxiety is one of the most common mental states of humans. Although it drives us to avoid frightening situations and to achieve our goals, it may also impose significant suffering and burden if it becomes extreme. Because we experience anxiety in a variety of forms, previous studies investigated the neural substrates of anxiety in a variety of ways. These studies revealed that individuals with high state, trait, or pathological anxiety showed altered neural substrates. However, no studies have directly investigated whether the different dimensions of anxiety share a common neural substrate, despite its theoretical and practical importance. Here, we investigated a brain network of anxiety shared by different dimensions of anxiety in a unified analytical framework using functional magnetic resonance imaging (fMRI). We analyzed different datasets in a single scale, which was defined by an anxiety-related brain network derived from the whole brain. We first conducted an anxiety provocation task with healthy participants who tended to feel anxiety related to obsessive-compulsive disorder (OCD) in their daily life. We found a common state anxiety brain network across participants (1585 trials obtained from 10 participants). Then, using resting-state fMRI in combination with the participants' behavioral trait anxiety scale scores (879 participants from the Human Connectome Project), we demonstrated that trait anxiety shared the same brain network as state anxiety. Furthermore, the brain network common to state and trait anxiety could detect patients with OCD, which is characterized by pathological anxiety-driven behaviors (174 participants from multi-site datasets). Our findings provide direct evidence that different dimensions of anxiety have a substantial biological inter-relationship. Our results also provide a biologically defined dimension of anxiety, which may promote further investigation of various human characteristics, including psychiatric disorders, from the perspective of anxiety. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Detecting atrial fibrillation by deep convolutional neural networks.

    PubMed

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature designs for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performances on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT, presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
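
    The input-generation step can be illustrated as follows: an STFT magnitude matrix of a short ECG segment is fed to a small 2-D convolutional network. The sketch below uses scipy and PyTorch with random data and an arbitrary architecture, so only the overall data flow, and none of the paper's layer sizes or training, is represented.

        import numpy as np
        import torch
        import torch.nn as nn
        from scipy.signal import stft

        # Hypothetical 5-second single-lead ECG segment sampled at 300 Hz.
        fs = 300
        ecg = np.random.randn(5 * fs).astype(np.float32)

        # 2-D time-frequency input from the STFT magnitude (the paper also uses SWT).
        _, _, Z = stft(ecg, fs=fs, nperseg=64)
        x = torch.tensor(np.abs(Z), dtype=torch.float32)[None, None]  # (batch, chan, F, T)

        # Small illustrative CNN ending in two logits (AF vs. non-AF).
        model = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
        print(model(x).shape)   # torch.Size([1, 2])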

  18. F77NNS - A FORTRAN-77 NEURAL NETWORK SIMULATOR

    NASA Technical Reports Server (NTRS)

    Mitchell, P. H.

    1994-01-01

    F77NNS (A FORTRAN-77 Neural Network Simulator) simulates the popular back error propagation neural network. F77NNS is an ANSI-77 FORTRAN program designed to take advantage of vectorization when run on machines having this capability, but it will run on any computer with an ANSI-77 FORTRAN Compiler. Artificial neural networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to biological nerve cells. Problems which involve pattern matching or system modeling readily fit the class of problems which F77NNS is designed to solve. The program's formulation trains a neural network using Rumelhart's back-propagation algorithm. Typically the nodes of a network are grouped together into clumps called layers. A network will generally have an input layer through which the various environmental stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. The back-propagation training algorithm can require massive computational resources to implement a large network such as a network capable of learning text-to-phoneme pronunciation rules as in the famous Sejnowski experiment. The Sejnowski neural network learns to pronounce 1000 common English words. The standard input data defines the specific inputs that control the type of run to be made, and input files define the NN in terms of the layers and nodes, as well as the input/output (I/O) pairs. The program has a restart capability so that a neural network can be solved in stages suitable to the user's resources and desires. F77NNS allows the user to customize the patterns of connections between layers of a network. The size of the neural network to be solved is limited only by the amount of random access memory (RAM) available to the user. The program has a memory requirement of about 900K. The standard distribution medium for this package is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. F77NNS was developed in 1989.
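
    The training loop that such a simulator implements can be condensed into a few lines of Python; the sketch below trains a one-hidden-layer sigmoid network on XOR input/output pairs with Rumelhart-style back-propagation, with the layer sizes, learning rate and epoch count chosen arbitrarily for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
        T = np.array([[0], [1], [1], [0]], dtype=float)               # target outputs

        W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for epoch in range(5000):
            H = sigmoid(X @ W1)                 # forward pass: hidden layer
            Y = sigmoid(H @ W2)                 # forward pass: output layer
            dY = (Y - T) * Y * (1 - Y)          # output delta (squared-error loss)
            dH = (dY @ W2.T) * H * (1 - H)      # back-propagated hidden delta
            W2 -= 0.5 * H.T @ dY                # gradient-descent weight updates
            W1 -= 0.5 * X.T @ dH
        print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))  # should approach 0, 1, 1, 0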

  19. Force Field for Water Based on Neural Network.

    PubMed

    Wang, Hao; Yang, Weitao

    2018-05-18

    We developed a novel neural-network-based force field for water, trained against high-level ab initio theory. The force field was built on the electrostatically embedded many-body expansion method truncated at two-body interactions. The many-body expansion method is a common strategy to partition the total Hamiltonian of large systems into a hierarchy of few-body terms. Neural networks were trained to represent electrostatically embedded one-body and two-body interactions, which require as input only one- and two-water-molecule calculations at the level of the ab initio electronic structure method CCSD/aug-cc-pVDZ embedded in the molecular mechanics water environment, making it efficient as a general force field construction approach. Structural and dynamic properties of liquid water calculated with our force field show good agreement with experimental results. We constructed two sets of neural network based force fields: non-polarizable and polarizable force fields. Simulation results show that the non-polarizable force field using fixed TIP3P charges already behaves well, since polarization effects and many-body effects are implicitly included due to the electrostatic embedding scheme. Our results demonstrate that the electrostatically embedded many-body expansion combined with neural networks provides a promising and systematic way to build the next generation of force fields at high accuracy and low computational cost, especially for large systems.
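
    The truncation the abstract describes amounts to assembling the total energy from one-body terms plus pairwise corrections; the sketch below shows that bookkeeping with placeholder energies, whereas in the actual force field the one- and two-body energies come from the trained neural networks (or the CCSD reference data).

        import numpy as np

        def total_energy(monomer_energies, pair_energies):
            # Many-body expansion truncated at two-body terms:
            # E_total = sum_i E_i + sum_{i<j} (E_ij - E_i - E_j)
            E1 = sum(monomer_energies.values())
            E2 = sum(pair_energies[(i, j)] - monomer_energies[i] - monomer_energies[j]
                     for (i, j) in pair_energies)
            return E1 + E2

        # Toy usage: three water molecules with hypothetical energies (hartree).
        E_mono = {0: -76.24, 1: -76.23, 2: -76.25}
        E_pair = {(0, 1): -152.48, (0, 2): -152.50, (1, 2): -152.49}
        print(total_energy(E_mono, E_pair))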

  20. Applying Gradient Descent in Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more and more about solving practical issues with information technologies. Along with that, a new subject called Artificial Intelligence (AI) has emerged. One popular research interest in AI is recognition algorithms. In this paper, one of the most common algorithms, the Convolutional Neural Network (CNN), is introduced for image recognition. Understanding its theory and structure is of great significance for every scholar who is interested in this field. A Convolutional Neural Network is an artificial neural network which combines the mathematical operation of convolution with a neural network. The hierarchical structure of a CNN gives it reliable computational speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, by combining the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-study and perform in-depth learning. Basically, BP provides backward feedback for enhancing reliability and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary at the end.
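
    To make the weight-sharing and gradient-descent ideas concrete, the sketch below fits a single shared 3x3 convolution filter to a toy target feature map with the plain update w <- w - lr * dL/dw; the filter size, learning rate and data are illustrative, and the loop implements the cross-correlation that deep-learning frameworks call convolution.

      import numpy as np

      rng = np.random.default_rng(1)
      image = rng.random((8, 8))               # toy grey-scale input
      target = rng.random((6, 6))              # toy target feature map
      w = rng.normal(0.0, 0.1, (3, 3))         # one filter shared over all positions
      lr = 0.01

      def conv2d_valid(x, k):
          """'Valid' 2-D cross-correlation with a single shared kernel."""
          kh, kw = k.shape
          out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
          for i in range(out.shape[0]):
              for j in range(out.shape[1]):
                  out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
          return out

      for step in range(200):
          y = conv2d_valid(image, w)           # forward pass
          err = y - target                     # dL/dy for L = 0.5 * sum(err**2)
          grad = np.zeros_like(w)              # back-propagate to the shared weights
          for i in range(err.shape[0]):
              for j in range(err.shape[1]):
                  grad += err[i, j] * image[i:i + w.shape[0], j:j + w.shape[1]]
          w -= lr * grad                       # gradient-descent update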

  1. Artificial neural network predictions of lengths of stay on a post-coronary care unit.

    PubMed

    Mobley, B A; Leasure, R; Davidson, L

    1995-01-01

    To create and validate a model that predicts length of hospital unit stay. Ex post facto. Seventy-four independent admission variables in 15 general categories were utilized to predict possible stays of 1 to 20 days. Laboratory. Records of patients discharged from a post-coronary care unit in early 1993. An artificial neural network was trained on 629 records and tested on an additional 127 records of patients. The absolute disparity between the actual lengths of stays in the test records and the predictions of the network averaged 1.4 days per record, and the actual length of stay was predicted within 1 day 72% of the time. The artificial neural network demonstrated the capacity to utilize common patient admission characteristics to predict lengths of stay. This technology shows promise in aiding timely initiation of treatment and effective resource planning and cost control.
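
    The two figures of merit reported (mean absolute disparity in days and the percentage of stays predicted within one day) can be computed as in the short Python sketch below; the arrays are invented examples, not the study data.

      import numpy as np

      actual_days = np.array([3, 5, 2, 8, 4])       # illustrative test-set stays
      predicted_days = np.array([4, 5, 1, 6, 4])    # illustrative network predictions

      abs_error = np.abs(actual_days - predicted_days)
      mean_disparity = abs_error.mean()                 # reported as ~1.4 days per record
      within_one_day = (abs_error <= 1).mean() * 100    # reported as 72% of the time

      print(f"mean absolute disparity: {mean_disparity:.2f} days")
      print(f"predicted within 1 day: {within_one_day:.0f}%")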

  2. Entanglement in a quantum neural network based on quantum dots

    NASA Astrophysics Data System (ADS)

    Altaisky, M. V.; Zolnikova, N. N.; Kaputkina, N. E.; Krylov, V. A.; Lozovik, Yu E.; Dattani, N. S.

    2017-05-01

    We studied the quantum correlations between the nodes in a quantum neural network built of an array of quantum dots with dipole-dipole interaction. By means of quasiadiabatic path integral simulation of the density matrix evolution in the presence of a common phonon bath, we have shown that the coherence in such a system can survive up to the liquid nitrogen temperature of 77 K and above. The quantum correlations between quantum dots are studied by calculating the entanglement of formation in a pair of quantum dots with a typical dot size of a few nanometers and an interdot distance of the same order. We have shown that the proposed quantum neural network can keep the mixture of entangled states of QD pairs up to the above-mentioned high temperatures.
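
    For a two-qubit state, the entanglement of formation quoted here can be evaluated from Wootters' concurrence; the generic NumPy sketch below applies that textbook formula to an arbitrary example density matrix and is not the authors' path-integral code.

      import numpy as np

      def entanglement_of_formation(rho):
          """Wootters' formula for a two-qubit density matrix rho (4x4)."""
          sy = np.array([[0, -1j], [1j, 0]])
          flip = np.kron(sy, sy)
          R = rho @ flip @ rho.conj() @ flip
          lam = np.sqrt(np.sort(np.abs(np.linalg.eigvals(R)))[::-1])  # decreasing order
          C = max(0.0, lam[0] - lam[1] - lam[2] - lam[3])             # concurrence
          if C == 0.0:
              return 0.0
          x = (1.0 + np.sqrt(1.0 - C**2)) / 2.0
          return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)       # binary entropy

      # Example: a Bell state mixed with white noise (illustrative, not a QD state).
      bell = np.zeros((4, 4)); bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
      p = 0.9
      rho = p * bell + (1.0 - p) * np.eye(4) / 4.0
      print(entanglement_of_formation(rho))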

  3. Probabilistic and Other Neural Nets in Multi-Hole Probe Calibration and Flow Angularity Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Baskaran, Subbiah; Ramachandran, Narayanan; Noever, David

    1998-01-01

    The use of probabilistic (PNN) and multilayer feed-forward (MLFNN) neural networks is investigated for the calibration of multi-hole pressure probes and the prediction of associated flow angularity patterns in test flow fields. Both types of networks are studied in detail for their calibration and prediction characteristics. The current formalism can be applied to any multi-hole probe; however, only test results for the most commonly used five-hole Cone and Prism probe types are reported in this article.

  4. ANNS An X Window Based Version of the AFIT Neural Network Simulator

    DTIC Science & Technology

    1993-06-01

    programmer or user can view the dynamic behavior of an algorithm and its changes of learning state while the neural network paradigms or algorithms...an object as "something you can do things to. An object has state, behavior, and identity; the structure and behavior of similar objects are defined in...their common class. The terms instance and object are interchangeable" [5:516]. The behavior of an object is "characterized by the actions that it

  5. Higher-order neural networks, Pólya polynomials, and Fermi cluster diagrams

    NASA Astrophysics Data System (ADS)

    Kürten, K. E.; Clark, J. W.

    2003-09-01

    The problem of controlling higher-order interactions in neural networks is addressed with techniques commonly applied in the cluster analysis of quantum many-particle systems. For multineuron synaptic weights chosen according to a straightforward extension of the standard Hebbian learning rule, we show that higher-order contributions to the stimulus felt by a given neuron can be readily evaluated via Pólya's combinatoric group-theoretical approach or, equivalently, by exploiting a precise formal analogy with fermion diagrammatics.
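
    One plausible reading of the "straightforward extension of the standard Hebbian learning rule" is that a weight coupling neuron i to a set of k other neurons is the pattern average of the product of their activities; the NumPy sketch below implements that reading, with the normalization and pattern set chosen purely for illustration.

      from itertools import combinations
      import numpy as np

      rng = np.random.default_rng(2)
      patterns = rng.choice([-1, 1], size=(10, 8))   # 10 stored patterns, 8 neurons

      def higher_order_hebb(patterns, order):
          """w[i, (j1..jk)] = (1/P) * sum_mu xi_i^mu * xi_j1^mu * ... * xi_jk^mu."""
          P, N = patterns.shape
          weights = {}
          for i in range(N):
              others = [j for j in range(N) if j != i]
              for combo in combinations(others, order):
                  prod = patterns[:, i].astype(float)
                  for j in combo:
                      prod = prod * patterns[:, j]
                  weights[(i, combo)] = prod.mean()
          return weights

      w3 = higher_order_hebb(patterns, order=3)      # third-order couplings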

  6. Comparing Apples and Oranges: Using Reward-Specific and Reward-General Subjective Value Representation in the Brain

    PubMed Central

    Glimcher, Paul W.

    2011-01-01

    The ability of human subjects to choose between disparate kinds of rewards suggests that the neural circuits for valuing different reward types must converge. Economic theory suggests that these convergence points represent the subjective values (SVs) of different reward types on a common scale for comparison. To examine these hypotheses and to map the neural circuits for reward valuation we had food and water-deprived subjects make risky choices for money, food, and water both in and out of a brain scanner. We found that risk preferences across reward types were highly correlated; the level of risk aversion an individual showed when choosing among monetary lotteries predicted their risk aversion toward food and water. We also found that partially distinct neural networks represent the SVs of monetary and food rewards and that these distinct networks showed specific convergence points. The hypothalamic region mainly represented the SV for food, and the posterior cingulate cortex mainly represented the SV for money. In both the ventromedial prefrontal cortex (vmPFC) and striatum there was a common area representing the SV of both reward types, but only the vmPFC significantly represented the SVs of money and food on a common scale appropriate for choice in our data set. A correlation analysis demonstrated interactions across money and food valuation areas and the common areas in the vmPFC and striatum. This may suggest that partially distinct valuation networks for different reward types converge on a unified valuation network, which enables a direct comparison between different reward types and hence guides valuation and choice. PMID:21994386

  7. Neural networks for dimensionality reduction of fluorescence spectra and prediction of drinking water disinfection by-products.

    PubMed

    Peleato, Nicolas M; Legge, Raymond L; Andrews, Robert C

    2018-06-01

    The use of fluorescence data coupled with neural networks for improved predictability of drinking water disinfection by-products (DBPs) was investigated. Novel application of autoencoders to process high-dimensional fluorescence data was related to common dimensionality reduction techniques of parallel factors analysis (PARAFAC) and principal component analysis (PCA). The proposed method was assessed based on component interpretability as well as for prediction of organic matter reactivity to formation of DBPs. Optimal prediction accuracies on a validation dataset were observed with an autoencoder-neural network approach or by utilizing the full spectrum without pre-processing. Latent representation by an autoencoder appeared to mitigate overfitting when compared to other methods. Although DBP prediction error was minimized by other pre-processing techniques, PARAFAC yielded interpretable components which resemble fluorescence expected from individual organic fluorophores. Through analysis of the network weights, fluorescence regions associated with DBP formation can be identified, representing a potential method to distinguish reactivity between fluorophore groupings. However, distinct results due to the applied dimensionality reduction approaches were observed, dictating a need for considering the role of data pre-processing in the interpretability of the results. In comparison to common organic measures currently used for DBP formation prediction, fluorescence was shown to improve prediction accuracies, with improvements to DBP prediction best realized when appropriate pre-processing and regression techniques were applied. The results of this study show promise for the potential application of neural networks to best utilize fluorescence EEM data for prediction of organic matter reactivity. Copyright © 2018 Elsevier Ltd. All rights reserved.
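
    As a schematic of the autoencoder pre-processing step, the NumPy sketch below compresses flattened excitation-emission spectra to a small latent vector with a tied-weight linear autoencoder trained by gradient descent, and the latent features would then be passed to a downstream regressor for DBP formation. The data, latent dimensionality, learning rate and epoch count are all illustrative, and a linear tied-weight autoencoder recovers a PCA-like subspace rather than the nonlinear encoder used in the study.

      import numpy as np

      rng = np.random.default_rng(3)
      eem = rng.random((200, 600))           # 200 samples, flattened 600-point EEM spectra
      X = eem - eem.mean(axis=0)             # centre the spectra

      k = 8                                  # latent dimensionality
      W = rng.normal(0.0, 0.01, (600, k))    # tied encoder/decoder weights
      lr = 1e-2

      for epoch in range(500):
          Z = X @ W                          # encode: latent representation
          Xhat = Z @ W.T                     # decode: reconstruction
          E = Xhat - X
          grad = 2.0 * (X.T @ E @ W + E.T @ X @ W) / len(X)
          W -= lr * grad                     # gradient step on reconstruction error

      latent_features = X @ W                # inputs to a DBP-formation regressor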

  8. Visuospatial Processing in Children with Neurofibromatosis Type 1

    ERIC Educational Resources Information Center

    Clements-Stephens, Amy M.; Rimrodt, Sheryl L.; Gaur, Pooja; Cutting, Laurie E.

    2008-01-01

    Neuroimaging studies investigating the neural network of visuospatial processing have revealed a right hemisphere network of activation including inferior parietal lobe, dorsolateral prefrontal cortex, and extrastriate regions. Impaired visuospatial processing, indicated by the Judgment of Line Orientation (JLO), is commonly seen in individuals…

  9. Artificial Neural Network for Total Laboratory Automation to Improve the Management of Sample Dilution.

    PubMed

    Ialongo, Cristiano; Pieri, Massimo; Bernardini, Sergio

    2017-02-01

    Diluting a sample to obtain a measure within the analytical range is a common task in clinical laboratories. However, for urgent samples, it can cause delays in test reporting, which can put patients' safety at risk. The aim of this work is to show a simple artificial neural network that can be used to make it unnecessary to predilute a sample using the information available through the laboratory information system. Particularly, the Multilayer Perceptron neural network built on a data set of 16,106 cardiac troponin I test records produced a correct inference rate of 100% for samples not requiring predilution and 86.2% for those requiring predilution. With respect to the inference reliability, the most relevant inputs were the presence of a cardiac event or surgery and the result of the previous assay. Therefore, such an artificial neural network can be easily implemented into a total automation framework to sensibly reduce the turnaround time of critical orders delayed by the operation required to retrieve, dilute, and retest the sample.

  10. QSRR using evolved artificial neural network for 52 common pharmaceuticals and drugs of abuse in hair from UPLC-TOF-MS.

    PubMed

    Noorizadeh, Hadi; Farmany, Abbas; Narimani, Hojat; Noorizadeh, Mehrab

    2013-05-01

    A quantitative structure-retention relationship (QSRR) study based on an artificial neural network (ANN) was carried out for the prediction of the ultra-performance liquid chromatography-time-of-flight mass spectrometry (UPLC-TOF-MS) retention time (RT) of a set of 52 pharmaceuticals and drugs of abuse in hair. A genetic algorithm was used as the variable selection tool. A partial least squares (PLS) method was used to select the best descriptors, which were used as input neurons in the neural network model. For choosing the best predictive model from among comparable models, the square correlation coefficient R(2) for the whole set, calculated from leave-group-out predicted values for the training set and model-derived predicted values for the test set compounds, is suggested as a good criterion. Finally, to improve the results, the structure-retention relationships were modeled with a non-linear approach using artificial neural networks, and consequently better results were obtained, demonstrating the advantages of ANNs. Copyright © 2011 John Wiley & Sons, Ltd.
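
    The model-selection criterion described, a whole-set squared correlation coefficient over the pooled leave-group-out training predictions and the test-set predictions, can be computed as in the short Python sketch below; the retention-time arrays are invented examples.

      import numpy as np

      rt_obs_train = np.array([1.2, 2.4, 3.1, 4.8, 5.5])    # observed RTs (min)
      rt_lgo_train = np.array([1.4, 2.2, 3.3, 4.5, 5.9])    # leave-group-out predictions
      rt_obs_test = np.array([2.0, 3.9, 6.1])
      rt_pred_test = np.array([2.3, 3.6, 5.8])               # model-derived predictions

      obs = np.concatenate([rt_obs_train, rt_obs_test])
      pred = np.concatenate([rt_lgo_train, rt_pred_test])

      r = np.corrcoef(obs, pred)[0, 1]
      r_squared = r**2            # whole-set R^2 used to rank candidate QSRR models
      print(f"whole-set R^2 = {r_squared:.3f}")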

  11. Neuronify: An Educational Simulator for Neural Circuits.

    PubMed

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Våvang Solbrå, Andreas; Tennøe, Simen; Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne; Hafting, Torkel; Einevoll, Gaute T

    2017-01-01

    Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons and various stimulators (current sources, spike generators, visual, and touch) and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as personal computers (Windows, Mac, Linux).

  12. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    PubMed

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
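
    Independent of NeuroFlow and the PyNN description language, the two modelling ingredients named here can be sketched in a few lines of Python: a current-based leaky integrate-and-fire neuron integrated with forward Euler, and a pair-based STDP weight change. All parameter values are illustrative.

      import numpy as np

      # Leaky integrate-and-fire neuron (forward-Euler integration).
      dt, tau_m = 0.1, 10.0                              # ms
      v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0    # mV
      I = 20.0                                           # constant input current (arbitrary units)
      steps = int(200.0 / dt)
      v, spike_times = v_rest, []
      for t in range(steps):
          v += dt * (-(v - v_rest) + I) / tau_m
          if v >= v_thresh:                              # threshold crossing: spike and reset
              spike_times.append(t * dt)
              v = v_reset

      # Pair-based STDP: potentiate when pre fires before post, depress otherwise.
      def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
          delta = t_post - t_pre
          if delta > 0:
              return a_plus * np.exp(-delta / tau)
          return -a_minus * np.exp(delta / tau)

      print(len(spike_times), stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0))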

  13. Neuronify: An Educational Simulator for Neural Circuits

    PubMed Central

    Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne

    2017-01-01

    Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons and various stimulators (current sources, spike generators, visual, and touch) and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as personal computers (Windows, Mac, Linux). PMID:28321440

  14. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    PubMed Central

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  15. Uncovering the neuroanatomical correlates of cognitive, affective and conative theory of mind in paediatric traumatic brain injury: a neural systems perspective

    PubMed Central

    Catroppa, Cathy; Beare, Richard; Silk, Timothy J.; Hearps, Stephen J.; Beauchamp, Miriam H.; Yeates, Keith O.; Anderson, Vicki A.

    2017-01-01

    Deficits in theory of mind (ToM) are common after neurological insult acquired in the first and second decade of life; however, the contribution of large-scale neural networks to ToM deficits in children with brain injury is unclear. Using paediatric traumatic brain injury (TBI) as a model, this study investigated the sub-acute effect of paediatric traumatic brain injury on grey-matter volume of three large-scale, domain-general brain networks (the Default Mode Network, DMN; the Central Executive Network, CEN; and the Salience Network, SN), as well as two domain-specific neural networks implicated in social-affective processes (the Cerebro-Cerebellar Mentalizing Network, CCMN and the Mirror Neuron/Empathy Network, MNEN). We also evaluated prospective structure–function relationships between these large-scale neural networks and cognitive, affective and conative ToM. 3D T1-weighted magnetic resonance imaging sequences were acquired sub-acutely in 137 children [TBI: n = 103; typically developing (TD) children: n = 34]. All children were assessed on measures of ToM at 24 months post-injury. Children with severe TBI showed sub-acute volumetric reductions in the CCMN, SN, MNEN, CEN and DMN, as well as reduced grey-matter volumes of several hub regions of these neural networks. Volumetric reductions in the CCMN and several of its hub regions, including the cerebellum, predicted poorer cognitive ToM. In contrast, poorer affective and conative ToM were predicted by volumetric reductions in the SN and MNEN, respectively. Overall, results suggest that cognitive, affective and conative ToM may be prospectively predicted by individual differences in structure of different neural systems: the CCMN, SN and MNEN, respectively. The prospective relationship between cerebellar volume and cognitive ToM outcomes is a novel finding in our paediatric brain injury sample and suggests that the cerebellum may play a role in the neural networks important for ToM. These findings are discussed in relation to neurocognitive models of ToM. We conclude that detection of sub-acute volumetric abnormalities of large-scale neural networks and their hub regions may aid in the early identification of children at risk for chronic social-cognitive impairment. PMID:28505355

  16. Uncovering the neuroanatomical correlates of cognitive, affective and conative theory of mind in paediatric traumatic brain injury: a neural systems perspective.

    PubMed

    Ryan, Nicholas P; Catroppa, Cathy; Beare, Richard; Silk, Timothy J; Hearps, Stephen J; Beauchamp, Miriam H; Yeates, Keith O; Anderson, Vicki A

    2017-09-01

    Deficits in theory of mind (ToM) are common after neurological insult acquired in the first and second decade of life; however, the contribution of large-scale neural networks to ToM deficits in children with brain injury is unclear. Using paediatric traumatic brain injury (TBI) as a model, this study investigated the sub-acute effect of paediatric traumatic brain injury on grey-matter volume of three large-scale, domain-general brain networks (the Default Mode Network, DMN; the Central Executive Network, CEN; and the Salience Network, SN), as well as two domain-specific neural networks implicated in social-affective processes (the Cerebro-Cerebellar Mentalizing Network, CCMN and the Mirror Neuron/Empathy Network, MNEN). We also evaluated prospective structure-function relationships between these large-scale neural networks and cognitive, affective and conative ToM. 3D T1-weighted magnetic resonance imaging sequences were acquired sub-acutely in 137 children [TBI: n = 103; typically developing (TD) children: n = 34]. All children were assessed on measures of ToM at 24 months post-injury. Children with severe TBI showed sub-acute volumetric reductions in the CCMN, SN, MNEN, CEN and DMN, as well as reduced grey-matter volumes of several hub regions of these neural networks. Volumetric reductions in the CCMN and several of its hub regions, including the cerebellum, predicted poorer cognitive ToM. In contrast, poorer affective and conative ToM were predicted by volumetric reductions in the SN and MNEN, respectively. Overall, results suggest that cognitive, affective and conative ToM may be prospectively predicted by individual differences in structure of different neural systems: the CCMN, SN and MNEN, respectively. The prospective relationship between cerebellar volume and cognitive ToM outcomes is a novel finding in our paediatric brain injury sample and suggests that the cerebellum may play a role in the neural networks important for ToM. These findings are discussed in relation to neurocognitive models of ToM. We conclude that detection of sub-acute volumetric abnormalities of large-scale neural networks and their hub regions may aid in the early identification of children at risk for chronic social-cognitive impairment. © The Author (2017). Published by Oxford University Press.

  17. Extrinsic and Intrinsic Brain Network Connectivity Maintains Cognition across the Lifespan Despite Accelerated Decay of Regional Brain Activation.

    PubMed

    Tsvetanov, Kamen A; Henson, Richard N A; Tyler, Lorraine K; Razi, Adeel; Geerligs, Linda; Ham, Timothy E; Rowe, James B

    2016-03-16

    The maintenance of wellbeing across the lifespan depends on the preservation of cognitive function. We propose that successful cognitive aging is determined by interactions both within and between large-scale functional brain networks. Such connectivity can be estimated from task-free functional magnetic resonance imaging (fMRI), also known as resting-state fMRI (rs-fMRI). However, common correlational methods are confounded by age-related changes in the neurovascular signaling. To estimate network interactions at the neuronal rather than vascular level, we used generative models that specified both the neural interactions and a flexible neurovascular forward model. The networks' parameters were optimized to explain the spectral dynamics of rs-fMRI data in 602 healthy human adults from population-based cohorts who were approximately uniformly distributed between 18 and 88 years (www.cam-can.com). We assessed directed connectivity within and between three key large-scale networks: the salience network, dorsal attention network, and default mode network. We found that age influences connectivity both within and between these networks, over and above the effects on neurovascular coupling. Canonical correlation analysis revealed that the relationship between network connectivity and cognitive function was age-dependent: cognitive performance relied on neural dynamics more strongly in older adults. These effects were driven partly by reduced stability of neural activity within all networks, as expressed by an accelerated decay of neural information. Our findings suggest that the balance of excitatory connectivity between networks, and the stability of intrinsic neural representations within networks, changes with age. The cognitive function of older adults becomes increasingly dependent on these factors. Maintaining cognitive function is critical to successful aging. To study the neural basis of cognitive function across the lifespan, we studied a large population-based cohort (n = 602, 18-88 years), separating neural connectivity from vascular components of fMRI signals. Cognitive ability was influenced by the strength of connection within and between functional brain networks, and this positive relationship increased with age. In older adults, there was more rapid decay of intrinsic neuronal activity in multiple regions of the brain networks, which related to cognitive performance. Our data demonstrate increased reliance on network flexibility to maintain cognitive function, in the presence of more rapid decay of neural activity. These insights will facilitate the development of new strategies to maintain cognitive ability. Copyright © 2016 Tsvetanov et al.

  18. Extrinsic and Intrinsic Brain Network Connectivity Maintains Cognition across the Lifespan Despite Accelerated Decay of Regional Brain Activation

    PubMed Central

    Henson, Richard N.A.; Tyler, Lorraine K.; Razi, Adeel; Geerligs, Linda; Ham, Timothy E.; Rowe, James B.

    2016-01-01

    The maintenance of wellbeing across the lifespan depends on the preservation of cognitive function. We propose that successful cognitive aging is determined by interactions both within and between large-scale functional brain networks. Such connectivity can be estimated from task-free functional magnetic resonance imaging (fMRI), also known as resting-state fMRI (rs-fMRI). However, common correlational methods are confounded by age-related changes in the neurovascular signaling. To estimate network interactions at the neuronal rather than vascular level, we used generative models that specified both the neural interactions and a flexible neurovascular forward model. The networks' parameters were optimized to explain the spectral dynamics of rs-fMRI data in 602 healthy human adults from population-based cohorts who were approximately uniformly distributed between 18 and 88 years (www.cam-can.com). We assessed directed connectivity within and between three key large-scale networks: the salience network, dorsal attention network, and default mode network. We found that age influences connectivity both within and between these networks, over and above the effects on neurovascular coupling. Canonical correlation analysis revealed that the relationship between network connectivity and cognitive function was age-dependent: cognitive performance relied on neural dynamics more strongly in older adults. These effects were driven partly by reduced stability of neural activity within all networks, as expressed by an accelerated decay of neural information. Our findings suggest that the balance of excitatory connectivity between networks, and the stability of intrinsic neural representations within networks, changes with age. The cognitive function of older adults becomes increasingly dependent on these factors. SIGNIFICANCE STATEMENT Maintaining cognitive function is critical to successful aging. To study the neural basis of cognitive function across the lifespan, we studied a large population-based cohort (n = 602, 18–88 years), separating neural connectivity from vascular components of fMRI signals. Cognitive ability was influenced by the strength of connection within and between functional brain networks, and this positive relationship increased with age. In older adults, there was more rapid decay of intrinsic neuronal activity in multiple regions of the brain networks, which related to cognitive performance. Our data demonstrate increased reliance on network flexibility to maintain cognitive function, in the presence of more rapid decay of neural activity. These insights will facilitate the development of new strategies to maintain cognitive ability. PMID:26985024

  19. A common neural network differentially mediates direct and social fear learning.

    PubMed

    Lindström, Björn; Haaker, Jan; Olsson, Andreas

    2018-02-15

    Across species, fears often spread between individuals through social learning. Yet, little is known about the neural and computational mechanisms underlying social learning. Addressing this question, we compared social and direct (Pavlovian) fear learning, showing that they produce indistinguishable behavioral effects and involve the same cross-modal (self/other) aversive learning network, centered on the amygdala, the anterior insula (AI), and the anterior cingulate cortex (ACC). Crucially, the information flow within this network differed between social and direct fear learning. Dynamic causal modeling combined with reinforcement learning modeling revealed that the amygdala and AI provided input to this network during direct and social learning, respectively. Furthermore, the AI gated learning signals based on surprise (associability), which were conveyed to the ACC, in both learning modalities. Our findings provide insights into the mechanisms underlying social fear learning, with implications for understanding common psychological dysfunctions, such as phobias and other anxiety disorders. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    PubMed

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.

  1. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding

    PubMed Central

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent “deep learning revolution” in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems. PMID:28377709

  2. Neural Network Burst Pressure Prediction in Composite Overwrapped Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Hill, Eric v. K.; Dion, Seth-Andrew T.; Karl, Justin O.; Spivey, Nicholas S.; Walker, James L., II

    2007-01-01

    Acoustic emission data were collected during the hydroburst testing of eleven 15 inch diameter filament wound composite overwrapped pressure vessels. A neural network burst pressure prediction was generated from the resulting AE amplitude data. The bottles shared commonality of graphite fiber, epoxy resin, and cure time. Individual bottles varied by cure mode (rotisserie versus static oven curing), types of inflicted damage, temperature of the pressurant, and pressurization scheme. Three categorical variables were selected to represent undamaged bottles, impact damaged bottles, and bottles with lacerated hoop fibers. This categorization along with the removal of the AE data from the disbonding noise between the aluminum liner and the composite overwrap allowed the prediction of burst pressures in all three sets of bottles using a single backpropagation neural network. Here the worst case error was 3.38 percent.

  3. Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network.

    PubMed

    Yoon, Jaehong; Lee, Jungnyun; Whang, Mincheol

    2018-01-01

    Features of the event-related potential (ERP) are not completely understood, and the illiteracy problem remains unsolved. To this end, the P300 peak has been used as the feature of the ERP in most brain-computer interface applications, but subjects who do not show such a peak are common. Recent developments in convolutional neural networks provide a way to analyze spatial and temporal features of the ERP. Here, we train a convolutional neural network with two convolutional layers whose feature maps represent spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital and parietal lobes, whereas illiterate subjects only show correlation between neural activities from the frontal and central regions. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We found that the P700 peak may be the key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.
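
    A hedged sketch of a two-convolutional-layer network of this kind, written in PyTorch for single-trial ERP epochs laid out as electrodes x time samples, is shown below; the channel count, kernel shapes, pooling, and class count are assumptions for illustration, not the authors' architecture.

      import torch
      import torch.nn as nn

      class ERPNet(nn.Module):
          """Two conv layers whose feature maps capture spatial (electrode) and
          temporal patterns of the ERP, followed by a small classifier head."""
          def __init__(self, n_channels=32, n_samples=256, n_classes=2):
              super().__init__()
              self.spatial = nn.Conv2d(1, 8, kernel_size=(n_channels, 1))   # across electrodes
              self.temporal = nn.Conv2d(8, 16, kernel_size=(1, 16))         # along time
              self.pool = nn.AvgPool2d((1, 4))
              self.fc = nn.Linear(16 * ((n_samples - 16 + 1) // 4), n_classes)

          def forward(self, x):                 # x: (batch, 1, channels, samples)
              x = torch.relu(self.spatial(x))
              x = torch.relu(self.temporal(x))
              x = self.pool(x)
              return self.fc(x.flatten(1))

      model = ERPNet()
      epoch_batch = torch.randn(4, 1, 32, 256)  # four fake ERP epochs
      print(model(epoch_batch).shape)           # torch.Size([4, 2])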

  4. Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network

    PubMed Central

    2018-01-01

    Features of the event-related potential (ERP) are not completely understood, and the illiteracy problem remains unsolved. To this end, the P300 peak has been used as the feature of the ERP in most brain–computer interface applications, but subjects who do not show such a peak are common. Recent developments in convolutional neural networks provide a way to analyze spatial and temporal features of the ERP. Here, we train a convolutional neural network with two convolutional layers whose feature maps represent spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital and parietal lobes, whereas illiterate subjects only show correlation between neural activities from the frontal and central regions. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We found that the P700 peak may be the key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.

  5. Deep convolutional neural network for the classification of hepatocellular carcinoma and intrahepatic cholangiocarcinoma

    NASA Astrophysics Data System (ADS)

    Midya, Abhishek; Chakraborty, Jayasree; Pak, Linda M.; Zheng, Jian; Jarnagin, William R.; Do, Richard K. G.; Simpson, Amber L.

    2018-02-01

    Liver cancer is the second leading cause of cancer-related death worldwide.1 Hepatocellular carcinoma (HCC) is the most common primary liver cancer, accounting for approximately 80% of cases. Intrahepatic cholangiocarcinoma (ICC) is a rare liver cancer, arising in patients with the same risk factors as HCC, but treatment options and prognosis differ. The diagnosis of HCC is based primarily on imaging, but distinguishing between HCC and ICC is challenging due to common radiographic features.2-4 The aim of the present study is to classify HCC and ICC in portal venous phase CT. 107 patients with resected ICC and 116 patients with resected HCC were included in our analysis. We developed a deep neural network by modifying a pre-trained Inception network and retraining its final layers. The proposed method achieved the best accuracy and area under the receiver operating characteristic curve of 69.70% and 0.72, respectively, on the test data.
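
    The transfer-learning step described, replacing and retraining the final classification layer of a pre-trained Inception network for the two-class HCC/ICC problem, might look like the PyTorch/torchvision sketch below; the choice to freeze all earlier layers, the optimizer settings, and the exact weights argument (which varies with the torchvision version) are assumptions rather than details from the paper.

      import torch
      import torch.nn as nn
      from torchvision import models

      # ImageNet-pretrained Inception v3 (the weights argument depends on the
      # torchvision version; older releases use pretrained=True instead).
      model = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)

      for p in model.parameters():          # freeze the pretrained feature extractor
          p.requires_grad = False

      # Replace the final fully connected layers for the 2-class HCC vs. ICC task.
      model.fc = nn.Linear(model.fc.in_features, 2)
      model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

      criterion = nn.CrossEntropyLoss()
      optimizer = torch.optim.Adam(
          [p for p in model.parameters() if p.requires_grad], lr=1e-4)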

  6. Acral melanoma detection using a convolutional neural network for dermoscopy images.

    PubMed

    Yu, Chanki; Yang, Sejung; Kim, Wonoh; Jung, Jinwoong; Chung, Kee-Yang; Lee, Sang Wook; Oh, Byungho

    2018-01-01

    Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), and confirmed by histopathological examination, were analyzed in this study. To perform 2-fold cross validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis, comparing it with the dermatologist's and a non-expert's evaluation. The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, which was higher than the non-expert's evaluation (67.84%, 62.71%) and close to that of the expert (81.08%, 81.64%). Moreover, the convolutional neural network showed area-under-the-curve values of 0.80 and 0.84 and Youden's index values of 0.6795 and 0.6073, which were similar to the expert's scores. Although further data analysis is necessary to improve their accuracy, convolutional neural networks would be helpful for detecting acral melanoma from dermoscopy images of the hands and feet.

  7. Identifying the origin of waterbird carcasses in Lake Michigan using a neural network source tracking model

    USGS Publications Warehouse

    Kenow, Kevin P.; Ge, Zhongfu; Fara, Luke J.; Houdek, Steven C.; Lubinski, Brian R.

    2016-01-01

    Avian botulism type E is responsible for extensive waterbird mortality on the Great Lakes, yet the actual site of toxin exposure remains unclear. Beached carcasses are often used to describe the spatial aspects of botulism mortality outbreaks, but lack specificity of offshore toxin source locations. We detail methodology for developing a neural network model used for predicting waterbird carcass motions in response to wind, wave, and current forcing, in lieu of a complex analytical relationship. This empirically trained model uses current velocity, wind velocity, significant wave height, and wave peak period in Lake Michigan simulated by the Great Lakes Coastal Forecasting System. A detailed procedure is further developed to use the model for back-tracing waterbird carcasses found on beaches in various parts of Lake Michigan, which was validated using drift data for radiomarked common loon (Gavia immer) carcasses deployed at a variety of locations in northern Lake Michigan during September and October of 2013. The back-tracing model was further used on 22 non-radiomarked common loon carcasses found along the shoreline of northern Lake Michigan in October and November of 2012. The model-estimated origins of those cases pointed to some common source locations offshore that coincide with concentrations of common loons observed during aerial surveys. The neural network source tracking model provides a promising approach for identifying locations of botulinum neurotoxin type E intoxication and, in turn, contributes to developing an understanding of the dynamics of toxin production and possible trophic transfer pathways.

  8. Power plant fault detection using artificial neural network

    NASA Astrophysics Data System (ADS)

    Thanakodi, Suresh; Nazar, Nazatul Shiema Moh; Joini, Nur Fazriana; Hidzir, Hidzrin Dayana Mohd; Awira, Mohammad Zulfikar Khairul

    2018-02-01

    Faults commonly occur in power plants due to various factors that affect system outages. There are many types of faults in power plants, such as single line to ground faults, double line to ground faults, and line to line faults. The primary aim of this paper is to diagnose faults in a 14-bus power plant by using an Artificial Neural Network (ANN). A Multilayer Perceptron (MLP) network was trained for fault detection using offline training methods such as Gradient Descent Backpropagation (GDBP), Levenberg-Marquardt (LM), and Bayesian Regularization (BR). The best method was used to build the Graphical User Interface (GUI). The modelling of the 14-bus power plant, the network training, and the GUI used the MATLAB software.

  9. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  10. Mdm2 mediates FMRP- and Gp1 mGluR-dependent protein translation and neural network activity.

    PubMed

    Liu, Dai-Chi; Seimetz, Joseph; Lee, Kwan Young; Kalsotra, Auinash; Chung, Hee Jung; Lu, Hua; Tsai, Nien-Pei

    2017-10-15

    Activating Group 1 (Gp1) metabotropic glutamate receptors (mGluRs), including mGluR1 and mGluR5, elicits translation-dependent neural plasticity mechanisms that are crucial to animal behavior and circuit development. Dysregulated Gp1 mGluR signaling has been observed in numerous neurological and psychiatric disorders. However, the molecular pathways underlying Gp1 mGluR-dependent plasticity mechanisms are complex and have been elusive. In this study, we identified a novel mechanism through which Gp1 mGluR mediates protein translation and neural plasticity. Using a multi-electrode array (MEA) recording system, we showed that activating Gp1 mGluR elevates neural network activity, as demonstrated by increased spontaneous spike frequency and burst activity. Importantly, we validated that elevating neural network activity requires protein translation and is dependent on fragile X mental retardation protein (FMRP), the protein that is deficient in the most common inherited form of mental retardation and autism, fragile X syndrome (FXS). In an effort to determine the mechanism by which FMRP mediates protein translation and neural network activity, we demonstrated that a ubiquitin E3 ligase, murine double minute-2 (Mdm2), is required for Gp1 mGluR-induced translation and neural network activity. Our data showed that Mdm2 acts as a translation suppressor, and FMRP is required for its ubiquitination and down-regulation upon Gp1 mGluR activation. These data revealed a novel mechanism by which Gp1 mGluR and FMRP mediate protein translation and neural network activity, potentially through de-repressing Mdm2. Our results also introduce an alternative way for understanding altered protein translation and brain circuit excitability associated with Gp1 mGluR in neurological diseases such as FXS. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Functional connectivity of hippocampal and prefrontal networks during episodic and spatial memory based on real-world environments.

    PubMed

    Robin, Jessica; Hirshhorn, Marnie; Rosenbaum, R Shayna; Winocur, Gordon; Moscovitch, Morris; Grady, Cheryl L

    2015-01-01

    Several recent studies have compared episodic and spatial memory in neuroimaging paradigms in order to understand better the contribution of the hippocampus to each of these tasks. In the present study, we build on previous findings showing common neural activation in default network areas during episodic and spatial memory tasks based on familiar, real-world environments (Hirshhorn et al. (2012) Neuropsychologia 50:3094-3106). Following previous demonstrations of the presence of functionally connected sub-networks within the default network, we performed seed-based functional connectivity analyses to determine how, depending on the task, the hippocampus and prefrontal cortex differentially couple with one another and with distinct whole-brain networks. We found evidence for a medial prefrontal-parietal network and a medial temporal lobe network, which were functionally connected to the prefrontal and hippocampal seeds, respectively, regardless of the nature of the memory task. However, these two networks were functionally connected with one another during the episodic memory task, but not during spatial memory tasks. Replicating previous reports of fractionation of the default network into stable sub-networks, this study also shows how these sub-networks may flexibly couple and uncouple with one another based on task demands. These findings support the hypothesis that episodic memory and spatial memory share a common medial temporal lobe-based neural substrate, with episodic memory recruiting additional prefrontal sub-networks. © 2014 Wiley Periodicals, Inc.

  12. Correlational Neural Networks.

    PubMed

    Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman

    2016-02-01

    Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches.
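
    The objective described, reconstruction of both views plus explicit maximization of correlation in the common subspace, can be sketched as below in PyTorch; the encoder/decoder sizes, the single joint reconstruction path, and the simplified correlation term are illustrative simplifications rather than the authors' CorrNet implementation.

      import torch
      import torch.nn as nn

      class CorrNetSketch(nn.Module):
          """Two-view autoencoder projecting both views into a common subspace."""
          def __init__(self, d1=100, d2=80, k=20):
              super().__init__()
              self.enc1, self.enc2 = nn.Linear(d1, k), nn.Linear(d2, k)
              self.dec1, self.dec2 = nn.Linear(k, d1), nn.Linear(k, d2)

          def forward(self, x1, x2):
              h1 = torch.sigmoid(self.enc1(x1))
              h2 = torch.sigmoid(self.enc2(x2))
              h = h1 + h2                                   # joint representation
              return h1, h2, self.dec1(h), self.dec2(h)

      def corrnet_loss(x1, x2, h1, h2, r1, r2, lam=0.1):
          recon = ((r1 - x1) ** 2).mean() + ((r2 - x2) ** 2).mean()
          z1, z2 = h1 - h1.mean(0), h2 - h2.mean(0)
          corr = (z1 * z2).sum() / (z1.norm() * z2.norm() + 1e-8)
          return recon - lam * corr                         # maximize correlation of views

      model = CorrNetSketch()
      x1, x2 = torch.randn(64, 100), torch.randn(64, 80)    # two views of 64 items
      h1, h2, r1, r2 = model(x1, x2)
      corrnet_loss(x1, x2, h1, h2, r1, r2).backward()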

  13. Similar patterns of neural activity predict memory function during encoding and retrieval.

    PubMed

    Kragel, James E; Ezzyat, Youssef; Sperling, Michael R; Gorniak, Richard; Worrell, Gregory A; Berry, Brent M; Inman, Cory; Lin, Jui-Jui; Davis, Kathryn A; Das, Sandhitsu R; Stein, Joel M; Jobst, Barbara C; Zaghloul, Kareem A; Sheth, Sameer A; Rizzuto, Daniel S; Kahana, Michael J

    2017-07-15

    Neural networks that span the medial temporal lobe (MTL), prefrontal cortex, and posterior cortical regions are essential to episodic memory function in humans. Encoding and retrieval are supported by the engagement of both distinct neural pathways across the cortex and common structures within the medial temporal lobes. However, the degree to which memory performance can be determined by neural processing that is common to encoding and retrieval remains to be determined. To identify neural signatures of successful memory function, we administered a delayed free-recall task to 187 neurosurgical patients implanted with subdural or intraparenchymal depth electrodes. We developed multivariate classifiers to identify patterns of spectral power across the brain that independently predicted successful episodic encoding and retrieval. During encoding and retrieval, patterns of increased high frequency activity in prefrontal, MTL, and inferior parietal cortices, accompanied by widespread decreases in low frequency power across the brain predicted successful memory function. Using a cross-decoding approach, we demonstrate the ability to predict memory function across distinct phases of the free-recall task. Furthermore, we demonstrate that classifiers that combine information from both encoding and retrieval states can outperform task-independent models. These findings suggest that the engagement of a core memory network during either encoding or retrieval shapes the ability to remember the past, despite distinct neural interactions that facilitate encoding and retrieval. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Approach to design neural cryptography: a generalized architecture and a heuristic rule.

    PubMed

    Mu, Nankun; Liao, Xiaofeng; Huang, Tingwen

    2013-06-01

    Neural cryptography, a type of public key exchange protocol, is widely considered an effective method for sharing a common secret key between two neural networks over public channels. How to design neural cryptography remains a great challenge. In this paper, in order to provide an approach to this challenge, a generalized network architecture and a significant heuristic rule are designed. The proposed generic framework is named the tree state classification machine (TSCM), which extends and unifies the existing structures, i.e., the tree parity machine (TPM) and the tree committee machine (TCM). Furthermore, we carefully study and find that the heuristic rule can improve the security of TSCM-based neural cryptography. Therefore, TSCM and the heuristic rule can guide us in designing a great number of effective neural cryptography candidates, among which it is possible to achieve more secure instances. Significantly, in light of TSCM and the heuristic rule, we further show that our neural cryptography design outperforms TPM (the most secure model at present) in terms of security. Finally, a series of numerical simulation experiments is provided to verify the validity and applicability of our results.
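
    For reference, the baseline that TSCM generalizes, the tree parity machine (TPM), can be sketched in a few lines: two machines with bounded integer weights exchange only their outputs over the public channel and apply a Hebbian-type update whenever those outputs agree, until their weights synchronize into a shared key. The parameters K, N, L, the update-rule variant, and the stopping test below are illustrative choices, not the TSCM construction.

      import numpy as np

      K, N, L = 3, 10, 3                 # hidden units, inputs per unit, weight bound
      rng = np.random.default_rng(5)

      def tpm_output(w, x):
          sigma = np.sign(np.sum(w * x, axis=1))
          sigma[sigma == 0] = -1
          return sigma, int(np.prod(sigma))

      def hebbian_update(w, x, sigma, tau):
          for k in range(K):
              if sigma[k] == tau:        # only hidden units that agree with tau learn
                  w[k] = np.clip(w[k] + tau * x[k], -L, L)

      wA = rng.integers(-L, L + 1, (K, N))   # party A's secret weights
      wB = rng.integers(-L, L + 1, (K, N))   # party B's secret weights

      rounds = 0
      while not np.array_equal(wA, wB):
          x = rng.choice([-1, 1], size=(K, N))   # public random input
          sA, tauA = tpm_output(wA, x)
          sB, tauB = tpm_output(wB, x)
          if tauA == tauB:                       # update only when outputs agree
              hebbian_update(wA, x, sA, tauA)
              hebbian_update(wB, x, sB, tauB)
          rounds += 1

      print(f"synchronized after {rounds} exchanged inputs; the weights are the key")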

  15. Story-Telling and Narrative: A Neurophilosophical Perspective.

    ERIC Educational Resources Information Center

    Liston, Delores D.

    Theories of neuroscience are presented to demonstrate the significance of storytelling and narrative to education by relating brain function to learning. A few key concepts are reviewed to establish a common working vocabulary with regard to neural networks. The tensor network theory and the neurognosis theory are described to provide…

  16. The influence of lifestyle on cardiovascular risk factors. Analysis using a neural network.

    PubMed

    Gueli, Nicoló; Piccirillo, Gianfanco; Troisi, Giovanni; Cicconetti, Paolo; Meloni, Fortunato; Ettorre, Evaristo; Verico, Paola; D'Arcangelo, Enzo; Cacciafesta, Mauro

    2005-01-01

    Cardiovascular pathologies are the most common causes of death in elderly patients. To single out the main risk factors in order to effectively prevent the onset of disease, the authors experimented with a special computerized tool, the neural network, which works out a mathematical relation that can obtain certain data (defined as output) as a function of other data (defined as input). Data were processed from a sample of 276 subjects of both sexes aged 26-69 years. The output data were: high/low cholesterolemia, HDL cholesterol, and triglyceridemia with respect to an established cut-off; the input data were: sex, age, build, weight, married/single, number of children, number of cigarettes smoked per day, amount of wine, and number of cups of coffee. We conclude that: (i) a relationship exists, deduced from a neural network, between a set of input variables and a dichotomous output variable; (ii) this relationship can be expressed as a mathematical function; (iii) a neural network, having learned the data on a sufficiently large population, can provide valid predictive data for a single individual with a high probability (up to 93.33%) that the response it gives is correct. In this study, such a result is found for two of the three cardiovascular risk indicators considered (cholesterol and triglycerides); (iv) the repetition of the neural network analysis of the cases in question after a "pruning" operation provided somewhat poorer performance; (v) a statistical analysis conducted on those same cases confirmed the existence of a strong relationship between the input and output variables. Therefore the neural network is a valid instrument for providing predictive information on cardiovascular risk in a single subject.

  17. Adaptive Output-Feedback Neural Control of Switched Uncertain Nonlinear Systems With Average Dwell Time.

    PubMed

    Long, Lijun; Zhao, Jun

    2015-07-01

    This paper investigates the problem of adaptive neural tracking control via output-feedback for a class of switched uncertain nonlinear systems without the measurements of the system states. The unknown control signals are approximated directly by neural networks. A novel adaptive neural control technique for the problem studied is set up by exploiting the average dwell time method and backstepping. A switched filter and different update laws are designed to reduce the conservativeness caused by adoption of a common observer and a common update law for all subsystems. The proposed controllers of subsystems guarantee that all closed-loop signals remain bounded under a class of switching signals with average dwell time, while the output tracking error converges to a small neighborhood of the origin. As an application of the proposed design method, adaptive output feedback neural tracking controllers for a mass-spring-damper system are constructed.

  18. Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals.

    PubMed

    Acharya, U Rajendra; Oh, Shu Lih; Hagiwara, Yuki; Tan, Jen Hong; Adeli, Hojjat

    2017-09-27

    An electroencephalogram (EEG) is a commonly used ancillary test to aid in the diagnosis of epilepsy. The EEG signal contains information about the electrical activity of the brain. Traditionally, neurologists employ direct visual inspection to identify epileptiform abnormalities. This technique can be time-consuming, is limited by technical artifacts, yields results that vary with reader expertise, and may miss abnormalities. It is therefore desirable to develop a computer-aided diagnosis (CAD) system that automatically distinguishes the classes of these EEG signals using machine learning techniques. This is the first study to employ a convolutional neural network (CNN) for analysis of EEG signals. In this work, a 13-layer deep convolutional neural network (CNN) algorithm is implemented to detect normal, preictal, and seizure classes. The proposed technique achieved an accuracy, specificity, and sensitivity of 88.67%, 90.00% and 95.00%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
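
    The individual layers are not described in the abstract, so the sketch below is merely an illustrative PyTorch 1-D CNN that maps a single-channel EEG segment to the three classes named above; the layer counts, kernel sizes and segment length are placeholders, not the paper's 13-layer design.

```python
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    """Illustrative 1-D CNN for 3-class EEG classification (normal / preictal / seizure).
    Layer sizes are placeholders, not the 13-layer architecture of the paper."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, 1, n_samples)
        h = self.features(x).squeeze(-1)      # (batch, 32)
        return self.classifier(h)             # (batch, 3) class scores

model = EEGConvNet()
segments = torch.randn(4, 1, 4096)            # four single-channel EEG segments (random stand-ins)
logits = model(segments)
predicted_class = logits.argmax(dim=1)
```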

  19. Dynamic gesture recognition using neural networks: a fundament for advanced interaction construction

    NASA Astrophysics Data System (ADS)

    Boehm, Klaus; Broll, Wolfgang; Sokolewicz, Michael A.

    1994-04-01

    Interaction in virtual reality environments is still a challenging task. Static hand posture recognition is currently the most common and widely used method for interaction using glove input devices. In order to improve the naturalness of interaction, and thereby decrease the user-interface learning time, there is a need to be able to recognize dynamic gestures. In this paper we describe our approach to overcoming the difficulties of dynamic gesture recognition (DGR) using neural networks. Backpropagation neural networks have already proven themselves to be appropriate and efficient for posture recognition. However, the extensive amount of data involved in DGR requires a different approach. Because of features such as topology preservation and automatic-learning, Kohonen Feature Maps are particularly suitable for the reduction of the high dimensional data space that is the result of a dynamic gesture, and are thus implemented for this task.

  20. Fuzzy logic and neural networks in artificial intelligence and pattern recognition

    NASA Astrophysics Data System (ADS)

    Sanchez, Elie

    1991-10-01

    With the use of fuzzy logic techniques, neural computing can be integrated in symbolic reasoning to solve complex real world problems. In fact, artificial neural networks, expert systems, and fuzzy logic systems, in the context of approximate reasoning, share common features and techniques. A model of Fuzzy Connectionist Expert System is introduced, in which an artificial neural network is designed to construct the knowledge base of an expert system from training examples (this model can also be used for the specification of rules in fuzzy logic control). Two types of weights are associated with the synaptic connections in an AND-OR structure: primary linguistic weights, interpreted as labels of fuzzy sets, and secondary numerical weights. Cell activation is computed through min-max fuzzy equations of the weights. Learning consists of finding the (numerical) weights and the network topology. This feedforward network is described and first illustrated in a biomedical application (medical diagnosis assistance from inflammatory-syndromes/proteins profiles). Then, it is shown how this methodology can be utilized for handwritten pattern recognition (characters play the role of diagnoses): in a fuzzy neuron describing a number, for example, the linguistic weights represent fuzzy sets on cross-detecting lines and the numerical weights reflect the importance (or weakness) of connections between cross-detecting lines and characters.
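
    As a concrete illustration of the min-max activation rule mentioned above, the short NumPy sketch below combines hypothetical membership degrees (the linguistic part of the weights, here reduced to precomputed numbers) with hypothetical numerical weights; all values are invented for the example.

```python
import numpy as np

# Hypothetical membership degrees of an input pattern in the fuzzy sets labelling
# the incoming connections, and hypothetical secondary numerical weights.
memberships = np.array([0.2, 0.9, 0.6, 0.4])
weights     = np.array([0.8, 0.7, 1.0, 0.3])

# Min-max composition: AND (min) along each connection, OR (max) across connections.
activation = np.max(np.minimum(memberships, weights))
print(activation)   # 0.7
```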

  1. Correcting wave predictions with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Makarynskyy, O.; Makarynska, D.

    2003-04-01

    Predictions of wind waves with different lead times are necessary in a large scope of coastal and open ocean activities. Numerical wave models, which usually provide this information, are based on deterministic equations that do not entirely account for the complexity and uncertainty of the wave generation and dissipation processes. An attempt to improve short-term forecasts of wave parameters using artificial neural networks is reported. In recent years, artificial neural networks have been used in a number of coastal engineering applications due to their ability to approximate nonlinear mathematical behavior without a priori knowledge of the interrelations among the elements within a system. Common multilayer feed-forward networks, with nonlinear transfer functions in the hidden layers, were developed and employed to forecast the wave characteristics at hourly intervals from one up to 24 hours ahead, and to correct these predictions. Three non-overlapping data sets of wave characteristics, all from a buoy moored roughly 60 miles west of the Aran Islands, off the west coast of Ireland, were used to train and validate the neural nets involved. The networks were trained with the error back-propagation algorithm. Time series plots and scatterplots of the wave characteristics, as well as tables of statistics, show an improvement of the results achieved due to the correction procedure employed.

  2. Calculation of Crystallographic Texture of BCC Steels During Cold Rolling

    NASA Astrophysics Data System (ADS)

    Das, Arpan

    2017-05-01

    BCC alloys commonly tend to develop strong fibre textures, which are often represented as isointensity diagrams in φ1 sections or as fibre diagrams. The alpha fibre in bcc steels is generally characterised by a <110> crystallographic axis parallel to the rolling direction. The objective of the present research is to correlate carbon content, carbide dispersion, rolling reduction and Euler angle ϕ (with φ1 = 0° and φ2 = 45° along the alpha fibre) with the resulting alpha fibre texture orientation intensity. In the present research, Bayesian neural computation has been employed to model these correlations and is compared comprehensively with an existing feed-forward neural network model. Other researchers have already reported an excellent match to the measured texture data within the bounding box of the texture training data set using a feed-forward neural network model; feed-forward predictions outside the bounds of the training texture data, however, showed deviations from the expected values. Here, Bayesian computation has been applied in the same way to confirm that the predictions are reasonable in the context of basic metallurgical principles, and it matched the data outside the bounds of the training texture data set better than the reported feed-forward neural network. Bayesian computation puts error bars on predicted values and allows the significance of each individual parameter to be estimated. Additionally, Bayesian computation makes it possible to estimate the isolated influence of a particular variable, such as carbon concentration, which in practice cannot be varied independently. This shows the ability of the Bayesian neural network to examine new phenomena in situations where the data cannot be accessed through experiments.

  3. Adaptive enhanced sampling by force-biasing using neural networks

    NASA Astrophysics Data System (ADS)

    Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.

    2018-04-01

    A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.

  4. The TensorMol-0.1 model chemistry: a neural network augmented with long-range physics.

    PubMed

    Yao, Kun; Herr, John E; Toth, David W; Mckintyre, Ryker; Parkhill, John

    2018-02-28

    Traditional force fields cannot model chemical reactivity, and suffer from low generality without re-fitting. Neural network potentials promise to address these problems, offering energies and forces with near ab initio accuracy at low cost. However, a data-driven approach is naturally inefficient for long-range interatomic forces that have simple physical formulas. In this manuscript we construct a hybrid model chemistry consisting of a nearsighted neural network potential with screened long-range electrostatic and van der Waals physics. This trained potential, simply dubbed "TensorMol-0.1", is offered in an open-source Python package capable of many of the simulation types commonly used to study chemistry: geometry optimizations, harmonic spectra, open or periodic molecular dynamics, Monte Carlo, and nudged elastic band calculations. We describe the robustness and speed of the package, demonstrating its millihartree accuracy and scalability to tens of thousands of atoms on ordinary laptops. We demonstrate the performance of the model by reproducing vibrational spectra, and simulating the molecular dynamics of a protein. Our comparisons with electronic structure theory and experimental data demonstrate that neural network molecular dynamics is poised to become an important tool for molecular simulation, lowering the resource barrier to simulating chemistry.

  5. Discriminant analysis of fused positive and negative ion mobility spectra using multivariate self-modeling mixture analysis and neural networks.

    PubMed

    Chen, Ping; Harrington, Peter B

    2008-02-01

    A new method coupling multivariate self-modeling mixture analysis and pattern recognition has been developed to identify toxic industrial chemicals using fused positive and negative ion mobility spectra (dual scan spectra). A Smiths lightweight chemical detector (LCD), which can measure positive and negative ion mobility spectra simultaneously, was used to acquire the data. Simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) was used to separate the analytical peaks in the ion mobility spectra from the background reactant ion peaks (RIP). The SIMPLISMA analytical components of the positive and negative ion peaks were combined together in a butterfly representation (i.e., negative spectra are reported with negative drift times and reflected with respect to the ordinate and juxtaposed with the positive ion mobility spectra). Temperature constrained cascade-correlation neural network (TCCCN) models were built to classify the toxic industrial chemicals. Seven common toxic industrial chemicals were used in this project to evaluate the performance of the algorithm. Ten bootstrapped Latin partitions demonstrated that classification by neural networks using the SIMPLISMA components was statistically better than by neural network models trained with the fused ion mobility spectra (IMS).

  6. A fresh look at functional link neural network for motor imagery-based brain-computer interface.

    PubMed

    Hettiarachchi, Imali T; Babaei, Toktam; Nguyen, Thanh; Lim, Chee P; Nahavandi, Saeid

    2018-05-04

    Artificial neural networks (ANNs) are among the most widely used classifiers in brain-computer interface (BCI) systems based on noninvasive electroencephalography (EEG) signals. Among the different ANN architectures, the one most commonly applied in BCI classifiers is the multilayer perceptron (MLP). When appropriately designed with an optimal number of neuron layers and neurons per layer, an ANN can act as a universal approximator. However, due to the low signal-to-noise ratio of EEG signal data, overtraining may become an inherent issue, causing these universal approximators to fail in real-time applications. In this study we introduce a higher order neural network, namely the functional link neural network (FLNN), as a classifier for motor imagery (MI)-based BCI systems, to remedy the drawbacks of the MLP. We compare the proposed method with competing classifiers such as linear discriminant analysis, naïve Bayes, k-nearest neighbours, support vector machine and three MLP architectures. Two multi-class benchmark datasets from the BCI competitions are used. The common spatial pattern algorithm is utilized for feature extraction to build the classification models. FLNN reports the highest average Kappa value over multiple subjects for both BCI competition datasets, under similarly preprocessed data and extracted features. Further, statistical comparison results over multiple subjects show that the proposed FLNN classification method yields the best performance among the competing classifiers. Findings from this study imply that the proposed method, which has lower computational complexity than the MLP, can be implemented effectively in practical MI-based BCI systems. Copyright © 2018 Elsevier B.V. All rights reserved.
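
    The abstract does not spell out the functional expansion or the training rule, so the NumPy sketch below shows one common FLNN construction under those assumptions: a trigonometric expansion of the input features followed by a single linear output layer, here fitted by least squares on random placeholder data standing in for CSP features and motor-imagery labels.

```python
import numpy as np

def functional_expansion(X, order=2):
    """Trigonometric functional expansion often used in FLNNs: each feature x is
    mapped to [x, sin(pi x), cos(pi x), sin(2 pi x), cos(2 pi x), ...]."""
    parts = [X]
    for k in range(1, order + 1):
        parts.append(np.sin(k * np.pi * X))
        parts.append(np.cos(k * np.pi * X))
    return np.hstack(parts)

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 6))            # placeholder for CSP features (trials x features)
y = rng.integers(0, 4, size=120)             # placeholder motor-imagery class labels

Phi = functional_expansion(X)                # expanded features; no hidden layer is needed
T = np.eye(4)[y]                             # one-hot targets
W = np.linalg.pinv(Phi) @ T                  # single linear output layer fitted by least squares
predictions = np.argmax(Phi @ W, axis=1)
```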

  7. Role of Network Science in the Study of Anesthetic State Transitions.

    PubMed

    Lee, UnCheol; Mashour, George A

    2018-04-23

    The heterogeneity of molecular mechanisms, target neural circuits, and neurophysiologic effects of general anesthetics makes it difficult to develop a reliable and drug-invariant index of general anesthesia. No single brain region or mechanism has been identified as the neural correlate of consciousness, suggesting that consciousness might emerge through complex interactions of spatially and temporally distributed brain functions. The goal of this review article is to introduce the basic concepts of networks and explain why the application of network science to general anesthesia could be a pathway to discover a fundamental mechanism of anesthetic-induced unconsciousness. This article reviews data suggesting that reduced network efficiency, constrained network repertoires, and changes in cortical dynamics create inhospitable conditions for information processing and transfer, which lead to unconsciousness. This review proposes that network science is not just a useful tool but a necessary theoretical framework and method to uncover common principles of anesthetic-induced unconsciousness.

  8. Neural network-based distributed attitude coordination control for spacecraft formation flying with input saturation.

    PubMed

    Zou, An-Min; Kumar, Krishna Dev

    2012-07-01

    This brief considers the attitude coordination control problem for spacecraft formation flying when only a subset of the group members has access to the common reference attitude. A quaternion-based distributed attitude coordination control scheme is proposed with consideration of the input saturation and with the aid of the sliding-mode observer, separation principle theorem, Chebyshev neural networks, smooth projection algorithm, and robust control technique. Using graph theory and a Lyapunov-based approach, it is shown that the distributed controller can guarantee the attitude of all spacecraft to converge to a common time-varying reference attitude when the reference attitude is available only to a portion of the group of spacecraft. Numerical simulations are presented to demonstrate the performance of the proposed distributed controller.
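
    As the brief only names Chebyshev neural networks as the function approximators, the snippet below sketches, under that assumption, how the Chebyshev polynomial basis used by such approximators is typically generated; the expansion order and the sample inputs are arbitrary, and the online weight adaptation prescribed by the control law is not shown.

```python
import numpy as np

def chebyshev_basis(x, order=5):
    """First-kind Chebyshev polynomials T_0..T_order evaluated at x (|x| <= 1), the basis
    functions typically used by Chebyshev neural network approximators."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2.0 * x * T[-1] - T[-2])    # recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)
    return np.stack(T, axis=-1)

# An uncertain scalar function f(x) is then approximated as f(x) ~ w . chebyshev_basis(x),
# with the weight vector w adapted online by the controller's update law (not shown here).
basis = chebyshev_basis(np.array([-0.5, 0.0, 0.3]))   # shape (3, 6)
```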

  9. Using imagination to understand the neural basis of episodic memory

    PubMed Central

    Hassabis, Demis; Kumaran, Dharshan; Maguire, Eleanor A.

    2008-01-01

    Functional MRI (fMRI) studies investigating the neural basis of episodic memory recall, and the related task of thinking about plausible personal future events, have revealed a consistent network of associated brain regions. Surprisingly little, however, is understood about the contributions individual brain areas make to the overall recollective experience. In order to examine this, we employed a novel fMRI paradigm where subjects had to imagine fictitious experiences. In contrast to future thinking, this results in experiences that are not explicitly temporal in nature or as reliant on self-processing. By using previously imagined fictitious experiences as a comparison for episodic memories, we identified the neural basis of a key process engaged in common, namely scene construction, involving the generation, maintenance and visualisation of complex spatial contexts. This was associated with activations in a distributed network, including hippocampus, parahippocampal gyrus, and retrosplenial cortex. Importantly, we disambiguated these common effects from episodic memory-specific responses in anterior medial prefrontal cortex, posterior cingulate cortex and precuneus. These latter regions may support self-schema and familiarity processes, and contribute to the brain's ability to distinguish real from imaginary memories. We conclude that scene construction constitutes a common process underlying episodic memory and imagination of fictitious experiences, and suggest it may partially account for the similar brain networks implicated in navigation, episodic future thinking, and the default mode. We suggest that further brain regions are co-opted into this core network in a task-specific manner to support functions such as episodic memory that may have additional requirements. PMID:18160644

  10. Using imagination to understand the neural basis of episodic memory.

    PubMed

    Hassabis, Demis; Kumaran, Dharshan; Maguire, Eleanor A

    2007-12-26

    Functional MRI (fMRI) studies investigating the neural basis of episodic memory recall, and the related task of thinking about plausible personal future events, have revealed a consistent network of associated brain regions. Surprisingly little, however, is understood about the contributions individual brain areas make to the overall recollective experience. To examine this, we used a novel fMRI paradigm in which subjects had to imagine fictitious experiences. In contrast to future thinking, this results in experiences that are not explicitly temporal in nature or as reliant on self-processing. By using previously imagined fictitious experiences as a comparison for episodic memories, we identified the neural basis of a key process engaged in common, namely scene construction, involving the generation, maintenance and visualization of complex spatial contexts. This was associated with activations in a distributed network, including hippocampus, parahippocampal gyrus, and retrosplenial cortex. Importantly, we disambiguated these common effects from episodic memory-specific responses in anterior medial prefrontal cortex, posterior cingulate cortex and precuneus. These latter regions may support self-schema and familiarity processes, and contribute to the brain's ability to distinguish real from imaginary memories. We conclude that scene construction constitutes a common process underlying episodic memory and imagination of fictitious experiences, and suggest it may partially account for the similar brain networks implicated in navigation, episodic future thinking, and the default mode. We suggest that additional brain regions are co-opted into this core network in a task-specific manner to support functions such as episodic memory that may have additional requirements.

  11. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

    The recurrent artificial neural network and the feedforward neural network are studied; the feedforward network is a single-layer perceptron artificial neural network. The recurrent artificial neural network input features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input...

  12. A Systematic Review for Functional Neuroimaging Studies of Cognitive Reserve Across the Cognitive Aging Spectrum.

    PubMed

    Anthony, Mia; Lin, Feng

    2017-12-13

    Cognitive reserve has been proposed to explain the discrepancy between clinical symptoms and the effects of aging or Alzheimer's pathology. Functional magnetic resonance imaging (fMRI) may help elucidate how neural reserve and compensation delay cognitive decline and identify brain regions associated with cognitive reserve. This systematic review evaluated neural correlates of cognitive reserve via fMRI (resting-state and task-related) studies across the cognitive aging spectrum (i.e., normal cognition, mild cognitive impairment, and Alzheimer's disease). This review examined published articles up to March 2017. There were 13 cross-sectional observational studies that met the inclusion criteria, including relevance to cognitive reserve, subjects 60 years or older with normal cognition, mild cognitive impairment, and/or Alzheimer's disease, at least one quantitative measure of cognitive reserve, and fMRI as the imaging modality. Quality assessment of included studies was conducted using the Newcastle-Ottawa Scale adapted for cross-sectional studies. Across the cognitive aging spectrum, medial temporal regions and an anterior or posterior cingulate cortex-seeded default mode network were associated with neural reserve. Frontal regions and the dorsal attentional network were related to neural compensation. Compared to neural reserve, neural compensation was more common in mild cognitive impairment and Alzheimer's disease. Neural reserve and compensation both support cognitive reserve, with compensation more common in later stages of the cognitive aging spectrum. Longitudinal and intervention studies are needed to investigate changes between neural reserve and compensation during the transition between clinical stages, and to explore the causal relationship between cognitive reserve and potential neural substrates. © The Author(s) 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. Classification of 2-dimensional array patterns: assembling many small neural networks is better than using a large one.

    PubMed

    Chen, Liang; Xue, Wei; Tokuda, Naoyuki

    2010-08-01

    In many pattern classification/recognition applications of artificial neural networks, an object to be classified is represented by a fixed sized 2-dimensional array of uniform type, which corresponds to the cells of a 2-dimensional grid of the same size. A general neural network structure, called an undistricted neural network, which takes all the elements in the array as inputs could be used for problems such as these. However, a districted neural network can be used to reduce the training complexity. A districted neural network usually consists of two levels of sub-neural networks. Each of the lower level neural networks, called a regional sub-neural network, takes the elements in a region of the array as its inputs and is expected to output a temporary class label, called an individual opinion, based on the partial information of the entire array. The higher level neural network, called an assembling sub-neural network, uses the outputs (opinions) of regional sub-neural networks as inputs, and by consensus derives the label decision for the object. Each of the sub-neural networks can be trained separately and thus the training is less expensive. The regional sub-neural networks can be trained and performed in parallel and independently, therefore a high speed can be achieved. We prove theoretically in this paper, using a simple model, that a districted neural network is actually more stable than an undistricted neural network in noisy environments. We conjecture that the result is valid for all neural networks. This theory is verified by experiments involving gender classification and human face recognition. We conclude that a districted neural network is highly recommended for neural network applications in recognition or classification of 2-dimensional array patterns in highly noisy environments. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
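
    The scikit-learn sketch below illustrates the districted arrangement described above on purely synthetic data: four small regional networks each see one quadrant of a 16x16 array, and an assembling network combines their class probabilities ("opinions"). The grid size, region layout, network sizes and random data are placeholders, not the configurations used in the paper's experiments.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
images = rng.random((200, 16, 16))            # placeholder 16x16 array patterns
labels = rng.integers(0, 2, size=200)         # placeholder binary class labels

def regions(imgs):
    """Split each 16x16 array into four 8x8 regions and flatten them."""
    return [imgs[:, r:r + 8, c:c + 8].reshape(len(imgs), -1)
            for r in (0, 8) for c in (0, 8)]

# One small "regional" sub-network per region, each trained on partial information only.
regional_nets = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, labels)
                 for X in regions(images)]

# The "assembling" sub-network receives the regional opinions (class probabilities).
opinions = np.hstack([net.predict_proba(X)
                      for net, X in zip(regional_nets, regions(images))])
assembler = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500).fit(opinions, labels)
final_labels = assembler.predict(opinions)
```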

  14. Applying cybernetic technology to diagnose human pulmonary sounds.

    PubMed

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) are greater than 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and a data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vector of a neural network. We propose a two-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy compared with a single (haploid) neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To extend traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waves, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
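
    The exact 17 features are not listed in the abstract, so the PyWavelets sketch below only shows one plausible way to build a subband feature vector of this kind; the wavelet family, decomposition level and per-subband statistics are assumptions.

```python
import numpy as np
import pywt

def subband_features(signal, wavelet="db4", level=4):
    """Decompose a pulmonary-sound signal into wavelet subbands and summarize each
    subband with simple statistics; the wavelet, level and statistics here are
    assumptions, not the paper's exact 17-feature definition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:                      # approximation + detail coefficients per subband
        feats.extend([np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)])
    return np.array(feats)

x = np.random.randn(8000)                 # stand-in for a recorded pulmonary sound segment
features = subband_features(x)            # would feed the BP / LVQ classifier stages
```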

  15. Prediction of daily sea surface temperature using efficient neural networks

    NASA Astrophysics Data System (ADS)

    Patil, Kalpesh; Deo, Makaranad Chintamani

    2017-04-01

    Short-term prediction of sea surface temperature (SST) is commonly achieved through numerical models. Numerical approaches are more suitable for use over a large spatial domain than at a specific site because of the difficulties involved in resolving various physical sub-processes at local levels. Therefore, for a given location, a data-driven approach such as neural networks may provide a better alternative. The application of neural networks, however, requires extensive experimentation with their architecture, training methods, and the formation of appropriate input-output pairs. A network trained in this manner can provide more attractive results if advances in network architecture are additionally considered. With this in mind, we propose the use of wavelet neural networks (WNNs) for the prediction of daily SST values. Daily SST values were predicted up to 5 days into the future at six different locations in the Indian Ocean. First, the accuracy of site-specific SST values predicted by a numerical model, ROMS, was assessed against in situ records; the result pointed out the necessity for alternative approaches. Traditional networks were then tried and, after noticing their poor performance, WNN was used. This approach produced attractive forecasts when judged through various error statistics. When all locations were viewed together, the mean absolute error was within 0.18 to 0.32 °C for a 5-day-ahead forecast. The WNN approach was thus found to add value to the numerical method of SST prediction when location-specific information is desired.

  16. Predictive Coding of Dynamical Variables in Balanced Spiking Networks

    PubMed Central

    Boerlin, Martin; Machens, Christian K.; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitudes more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated. PMID:24244113

  17. Advanced Aeroservoelastic Testing and Data Analysis (Les Essais Aeroservoelastiques et l’Analyse des Donnees).

    DTIC Science & Technology

    1995-11-01

    The recoverable fragments of this report excerpt mention neural network-based AFS concepts and, under Section 8.6 (Neural Network Based Methods), the estimation of unknown parameters of the postulated state space model using (i) feed-forward neural networks and (ii) recurrent neural networks [117-119].

  18. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practise for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
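
    The core ELM step is simple enough to sketch: random, fixed input weights and a closed-form least-squares solution for the output weights. The NumPy code below shows only this basic algorithm on random stand-in data; the paper's receptive-field sparsification and single-batch backpropagation refinements are omitted.

```python
import numpy as np

def train_elm(X, T, n_hidden=1000, seed=0):
    """Basic single-hidden-layer ELM: random, fixed input weights; output weights
    solved in closed form by least squares (no backpropagation through the hidden layer)."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((X.shape[1], n_hidden)) * 0.1
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                       # hidden-layer activations
    W_out = np.linalg.lstsq(H, T, rcond=None)[0]    # least-squares output weights
    return W_in, b, W_out

def elm_predict(X, W_in, b, W_out):
    return np.argmax(np.tanh(X @ W_in + b) @ W_out, axis=1)

# Random stand-ins with MNIST-like shapes (784-dimensional inputs, 10 classes).
X = np.random.rand(1000, 784)
y = np.random.randint(0, 10, 1000)
W_in, b, W_out = train_elm(X, np.eye(10)[y])
predictions = elm_predict(X, W_in, b, W_out)
```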

  19. A feed-forward Hopfield neural network algorithm (FHNNA) with a colour satellite image for water quality mapping

    NASA Astrophysics Data System (ADS)

    Asal Kzar, Ahmed; Mat Jafri, M. Z.; Hwee San, Lim; Al-Zuky, Ali A.; Mutter, Kussay N.; Hassan Al-Saleh, Anwar

    2016-06-01

    Many techniques have been proposed for the water quality problem, but remote sensing techniques have proven successful, especially when artificial neural networks are used as mathematical models with these techniques. The Hopfield neural network (HNN) is a common, fast, simple, and efficient type of artificial neural network, but it struggles when dealing with images that have more than two colours, such as remote sensing images. This work attempts to solve that problem by modifying the network to handle colour remote sensing images for water quality mapping. A Feed-forward Hopfield Neural Network Algorithm (FHNNA) was developed and used with a colour satellite image from the Thailand Earth Observation System (THEOS) for TSS mapping in the Penang Strait, Malaysia, through the classification of TSS concentrations. The new algorithm is based essentially on three modifications: using the HNN as a feed-forward network, considering the weights of bitplanes, and adopting a non-self architecture (zero diagonal of the weight matrix); in addition, it depends on validation data. The resulting map was colour-coded for visual interpretation. The efficiency of the new algorithm is shown by the high correlation coefficient (R = 0.979) and the low root mean square error (RMSE = 4.301) obtained on the validation data, which were divided into two groups, one used by the algorithm and the other used for validating the results. The comparison was made with the minimum distance classifier. Therefore, TSS mapping of polluted water in the Penang Strait, Malaysia, can be performed using FHNNA with the remote sensing technique (THEOS). This is a new and useful application of the HNN, providing a new model for water quality mapping with remote sensing techniques, which addresses an important environmental problem.

  20. Modeling Behavioral Experiment Interaction and Environmental Stimuli for a Synthetic C. elegans.

    PubMed

    Mujika, Andoni; Leškovský, Peter; Álvarez, Roberto; Otaduy, Miguel A; Epelde, Gorka

    2017-01-01

    This paper focusses on the simulation of the neural network of the Caenorhabditis elegans living organism, and more specifically on the modeling of the stimuli applied within behavioral experiments and of the stimuli generated by the interaction of the C. elegans with its environment. To the best of our knowledge, all previous efforts regarding stimulus modeling for the C. elegans have focused on a single type of stimulus, usually tested with a limited subnetwork of the C. elegans neural system. In this paper, we follow a different approach in which we model a wide range of different stimuli, with more flexible neural network configurations and simulations in mind. Moreover, we focus on how stimuli are sensed by different types of sensory organs or by the various sensory principles of the neurons. As part of this work, the most common stimuli involved in behavioral assays have been modeled, including models for mechanical, thermal, chemical, electrical and light stimuli, and for proprioception-related self-sensed information exchange with the neural network. The developed models have been implemented and tested with the hardware-based Si elegans simulation platform.

  1. A subset polynomial neural networks approach for breast cancer diagnosis.

    PubMed

    O'Neill, T J; Penm, Jack; Penm, Jonathan

    2007-01-01

    Breast cancer is a very common and serious cancer for women that is diagnosed in one of every eight Australian women before the age of 85. The conventional method of breast cancer diagnosis is mammography. However, mammography has been reported to have poor diagnostic capability. In this paper we have used subset polynomial neural network techniques in conjunction with fine needle aspiration cytology to undertake this difficult task of predicting breast cancer. The successful findings indicate that adoption of NNs is likely to lead to increased survival of women with breast cancer, improved electronic healthcare, and enhanced quality of life.

  2. Improving Feature Representation Based on a Neural Network for Author Profiling in Social Media Texts

    PubMed Central

    2016-01-01

    We introduce a lexical resource for preprocessing social media data. We show that a neural network-based feature representation is enhanced by using this resource. We conducted experiments on the PAN 2015 and PAN 2016 author profiling corpora and obtained better results when performing the data preprocessing using the developed lexical resource. The resource includes dictionaries of slang words, contractions, abbreviations, and emoticons commonly used in social media. Each of the dictionaries was built for the English, Spanish, Dutch, and Italian languages. The resource is freely available. PMID:27795703

  3. Segmentation of retinal blood vessels using artificial neural networks for early detection of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mann, Kulwinder S.; Kaur, Sukhpreet

    2017-06-01

    Patients suffering from diabetes are prone to various eye diseases, including diabetic retinopathy, glaucoma, hypertension-related retinopathy, etc. These are among the most common sight-threatening eye diseases, arising from changes in the blood vessel structure. The proposed supervised method shows that segmentation of the retinal blood vessels can be performed accurately by training neural networks. The feature vector is computed from grey-level features, moment-invariant-based features, Gabor filter responses, intensity features and vesselness features, and is then reduced to only the most prominent features.

  4. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  5. Motor Patterns in Walking.

    PubMed

    Lacquaniti, F.; Grasso, R.; Zago, M.

    1999-08-01

    Despite the fact that locomotion may differ widely in mammals, common principles of kinematic control are at work. These reflect common mechanical and neural constraints. The former are related to the need to maintain balance and to limit energy expenditure. The latter are related to the organization of the central pattern-generating networks.

  6. The probabilistic neural network architecture for high speed classification of remotely sensed imagery

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Cromp, Robert F.

    1993-01-01

    In this paper we discuss a neural network architecture (the Probabilistic Neural Net or the PNN) that, to the best of our knowledge, has not previously been applied to remotely sensed data. The PNN is a supervised non-parametric classification algorithm as opposed to the Gaussian maximum likelihood classifier (GMLC). The PNN works by fitting a Gaussian kernel to each training point. The width of the Gaussian is controlled by a tuning parameter called the window width. If very small widths are used, the method is equivalent to the nearest neighbor method. For large windows, the PNN behaves like the GMLC. The basic implementation of the PNN requires no training time at all. In this respect it is far better than the commonly used backpropagation neural network which can be shown to take O(N^6) time for training where N is the dimensionality of the input vector. In addition the PNN can be implemented in a feed forward mode in hardware. The disadvantage of the PNN is that it requires all the training data to be stored. Some solutions to this problem are discussed in the paper. Finally, we discuss the accuracy of the PNN with respect to the GMLC and the backpropagation neural network (BPNN). The PNN is shown to be better than GMLC and not as good as the BPNN with regards to classification accuracy.
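
    A minimal NumPy sketch of the PNN decision rule described above is given below; the spectral-band dimensionality, class count and window width are placeholder values, and the random arrays merely stand in for labeled training pixels.

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.1):
    """Probabilistic neural network decision rule: place a Gaussian kernel on every
    training point and pick the class whose averaged kernel density at x is largest.
    `sigma` is the window width: small values approach nearest-neighbour behaviour,
    large values a smooth, GMLC-like rule."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

# Random stand-ins for labeled pixels with 7 spectral bands and 4 land-cover classes.
X_train = np.random.rand(300, 7)
y_train = np.random.randint(0, 4, 300)
label = pnn_classify(X_train, y_train, np.random.rand(7))
```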

  7. Readout from iconic memory and selective spatial attention involve similar neural processes.

    PubMed

    Ruff, Christian C; Kristjánsson, Arni; Driver, Jon

    2007-10-01

    Iconic memory and spatial attention are often considered separately, but they may have functional similarities. Here we provide functional magnetic resonance imaging evidence for some common underlying neural effects. Subjects judged three visual stimuli in one hemifield of a bilateral array comprising six stimuli. The relevant hemifield for partial report was indicated by an auditory cue, administered either before the visual array (precue, spatial attention) or shortly after the array (postcue, iconic memory). Pre- and postcues led to similar activity modulations in lateral occipital cortex contralateral to the cued side. This finding indicates that readout from iconic memory can have some neural effects similar to those of spatial attention. We also found common bilateral activation of a fronto-parietal network for postcue and precue trials. These neuroimaging data suggest that some common neural mechanisms underlie selective spatial attention and readout from iconic memory. Some differences were also found; compared with precues, postcues led to higher activity in the right middle frontal gyrus.

  8. Readout From Iconic Memory and Selective Spatial Attention Involve Similar Neural Processes

    PubMed Central

    Ruff, Christian C; Kristjánsson, Árni; Driver, Jon

    2007-01-01

    Iconic memory and spatial attention are often considered separately, but they may have functional similarities. Here we provide functional magnetic resonance imaging evidence for some common underlying neural effects. Subjects judged three visual stimuli in one hemifield of a bilateral array comprising six stimuli. The relevant hemifield for partial report was indicated by an auditory cue, administered either before the visual array (precue, spatial attention) or shortly after the array (postcue, iconic memory). Pre- and postcues led to similar activity modulations in lateral occipital cortex contralateral to the cued side. This finding indicates that readout from iconic memory can have some neural effects similar to those of spatial attention. We also found common bilateral activation of a fronto-parietal network for postcue and precue trials. These neuroimaging data suggest that some common neural mechanisms underlie selective spatial attention and readout from iconic memory. Some differences were also found; compared with precues, postcues led to higher activity in the right middle frontal gyrus. PMID:17894608

  9. Computational Account of Spontaneous Activity as a Signature of Predictive Coding

    PubMed Central

    Koren, Veronika

    2017-01-01

    Spontaneous activity is commonly observed in a variety of cortical states. Experimental evidence has suggested that neural assemblies undergo slow oscillations with Up and Down states even when the network is isolated from the rest of the brain. Here we show that these spontaneous events can be generated by the recurrent connections within the network and understood as signatures of neural circuits that are correcting their internal representation. A noiseless spiking neural network can represent its input signals most accurately when excitatory and inhibitory currents are as strong and as tightly balanced as possible. However, in the presence of realistic neural noise and synaptic delays, this may result in prohibitively large spike counts. An optimal working regime can be found by considering terms that control firing rates in the objective function from which the network is derived and then minimizing simultaneously the coding error and the cost of neural activity. In biological terms, this is equivalent to tuning neural thresholds and after-spike hyperpolarization. In suboptimal working regimes, we observe spontaneous activity even in the absence of feed-forward inputs. In an all-to-all randomly connected network, the entire population is involved in Up states. In spatially organized networks with local connectivity, Up states spread through local connections between neurons of similar selectivity and take the form of a traveling wave. Up states are observed for a wide range of parameters and have similar statistical properties in both the active and quiescent states. In the optimal working regime, Up states vanish, giving way to asynchronous activity and suggesting that this working regime is a signature of maximally efficient coding. Although they result in a massive increase in the firing activity, the read-out of spontaneous Up states is in fact orthogonal to the stimulus representation, therefore interfering minimally with the network function. PMID:28114353

  10. Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language

    NASA Astrophysics Data System (ADS)

    Tanadi, Theo

    2018-03-01

    Part-of-speech (POS) tagging is an important part of natural language processing. Many methods have been used for this task, including neural networks. This paper models a neural network that performs POS tagging for Indonesian. A time series neural network is modelled to solve the problems that a basic neural network faces when attempting POS tagging. To enable the neural network to take text data as input, the text data are first clustered using Brown clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tags, suffixes, and affixes of previous words are also fed to the neural network.

  11. Gamma oscillations in a nonlinear regime: a minimal model approach using heterogeneous integrate-and-fire networks.

    PubMed

    Bathellier, Brice; Carleton, Alan; Gerstner, Wulfram

    2008-12-01

    Fast oscillations and in particular gamma-band oscillation (20-80 Hz) are commonly observed during brain function and are at the center of several neural processing theories. In many cases, mathematical analysis of fast oscillations in neural networks has been focused on the transition between irregular and oscillatory firing viewed as an instability of the asynchronous activity. But in fact, brain slice experiments as well as detailed simulations of biological neural networks have produced a large corpus of results concerning the properties of fully developed oscillations that are far from this transition point. We propose here a mathematical approach to deal with nonlinear oscillations in a network of heterogeneous or noisy integrate-and-fire neurons connected by strong inhibition. This approach involves limited mathematical complexity and gives a good sense of the oscillation mechanism, making it an interesting tool to understand fast rhythmic activity in simulated or biological neural networks. A surprising result of our approach is that under some conditions, a change of the strength of inhibition only weakly influences the period of the oscillation. This is in contrast to standard theoretical and experimental models of interneuron network gamma oscillations (ING), where frequency tightly depends on inhibition strength, but it is similar to observations made in some in vitro preparations in the hippocampus and the olfactory bulb and in some detailed network models. This result is explained by the phenomenon of suppression that is known to occur in strongly coupled oscillating inhibitory networks but had not yet been related to the behavior of oscillation frequency.

  12. Permutation parity machines for neural cryptography.

    PubMed

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-06-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  13. Permutation parity machines for neural cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyes, Oscar Mauricio; Escuela de Ingenieria Electrica, Electronica y Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga; Zimmermann, Karl-Heinz

    2010-06-15

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  14. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.

    PubMed

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
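
    The conversion details (weight normalization and the spiking equivalents of max-pooling, softmax and batch normalization) are beyond a short example, but the rate-coding correspondence the approach relies on can be sketched: the snippet below shows a non-leaky integrate-and-fire unit with reset-by-subtraction whose firing rate approximates the ReLU activation it replaces. The time-step count and threshold are arbitrary choices for the illustration.

```python
def if_firing_rate(input_current, t_steps=1000, v_thresh=1.0):
    """Non-leaky integrate-and-fire unit driven by a constant input; with
    reset-by-subtraction its firing rate approximates max(input, 0), i.e. the
    ReLU activation it replaces after conversion (for inputs below one spike per step)."""
    v, spikes = 0.0, 0
    for _ in range(t_steps):
        v += input_current
        if v >= v_thresh:
            v -= v_thresh        # reset by subtraction keeps the rate proportional to the input
            spikes += 1
    return spikes / t_steps      # spikes per time step

for a in (-0.2, 0.1, 0.35, 0.8):
    print(a, if_firing_rate(a))  # rate tracks ReLU(a)
```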

  15. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

    PubMed Central

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications. PMID:29375284

  16. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches

    PubMed Central

    Lowet, Eric; Roberts, Mark J.; Bonizzi, Pietro; Karel, Joël; De Weerd, Peter

    2016-01-01

    Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization-mediated information flow among networks. PMID:26745498
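
    The PLV computation itself is compact enough to sketch. The SciPy snippet below estimates phase locking between two signals from the phases of their analytic (Hilbert-transformed) signals; the Singular Spectrum Decomposition step used in the paper before the Hilbert transform is omitted, and the 40 Hz test signals are invented.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV from the instantaneous phases of the analytic (Hilbert-transformed) signals;
    the SSD preprocessing used in the paper is omitted in this sketch."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(t.size)    # noisy 40 Hz oscillation
y = np.sin(2 * np.pi * 40 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(phase_locking_value(x, y))   # close to 1 for a stable 40 Hz phase relation
```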

  17. A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy.

    PubMed

    Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H

    2018-05-02

    A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  19. Optical track width measurements below 100 nm using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Smith, R. J.; See, C. W.; Somekh, M. G.; Yacoot, A.; Choi, E.

    2005-12-01

    This paper discusses the feasibility of using artificial neural networks (ANNs), together with a high precision scanning optical profiler, to measure very fine track widths that are considerably below the diffraction limit of a conventional optical microscope. The ANN is trained using optical profiles obtained from tracks of known widths; the network is then assessed by applying it to test profiles. The optical profiler is an ultra-stable common path scanning interferometer, which provides extremely precise surface measurements. Preliminary results, obtained with a 0.3 NA objective lens and a laser wavelength of 633 nm, show that the system is capable of measuring a 50 nm track width, with a standard deviation less than 4 nm.

  20. A convolutional neural network neutrino event classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurisano, A.; Radovic, A.; Rocco, D.

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  1. Classification of mineral deposits into types using mineralogy with a probabilistic neural network

    USGS Publications Warehouse

    Singer, Donald A.; Kouda, Ryoichi

    1997-01-01

    In order to determine whether it is desirable to quantify mineral-deposit models further, a test of the ability of a probabilistic neural network to classify deposits into types based on mineralogy was conducted. Presence or absence of ore and alteration mineralogy in well-typed deposits was used to train the network. To reduce the number of minerals considered, the analyzed data were restricted to minerals present in at least 20% of at least one deposit type. An advantage of this restriction is that single or rare occurrences of minerals did not dominate the results. Probabilistic neural networks can provide mathematically sound confidence measures based on Bayes' theorem and are relatively insensitive to outliers. Founded on Parzen density estimation, they require no assumptions about distributions of random variables used for classification, even handling multimodal distributions. They train quickly and work as well as, or better than, multiple-layer feedforward networks. Tests were performed with a probabilistic neural network employing a Gaussian kernel and separate sigma weights for each class and each variable. The training set was reduced to the presence or absence of 58 reported minerals in eight deposit types. The training set included: 49 Cyprus massive sulfide deposits; 200 kuroko massive sulfide deposits; 59 Comstock epithermal vein gold districts; 17 quartz-alunite epithermal gold deposits; 25 Creede epithermal gold deposits; 28 sedimentary-exhalative zinc-lead deposits; 28 Sado epithermal vein gold deposits; and 100 porphyry copper deposits. The most common training problem was the error of classifying about 27% of Cyprus-type deposits in the training set as kuroko. In independent tests with deposits not used in the training set, 88% of 224 kuroko massive sulfide deposits were classed correctly, 92% of 25 porphyry copper deposits, 78% of 9 Comstock epithermal gold-silver districts, and 83% of six quartz-alunite epithermal gold deposits were classed correctly. Across all deposit types, 88% of deposits in the validation dataset were correctly classed. Misclassifications were most common if a deposit was characterized by only a few minerals, e.g., pyrite, chalcopyrite, and sphalerite. The success rate jumped to 98% correctly classed deposits when just two rock types were added. Such a high success rate of the probabilistic neural network suggests that not only should this preliminary test be expanded to include other deposit types, but that other deposit features should be added.
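
    For readers unfamiliar with probabilistic neural networks, the sketch below implements the basic Parzen-window idea on made-up presence/absence vectors; the actual study used 58 minerals, eight deposit types, and separate sigma weights per class and per variable, whereas this simplified version uses a single sigma per class and toy data.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigmas):
    """Minimal probabilistic neural network (Parzen-window classifier).
    train_X: binary mineral presence/absence vectors, train_y: deposit-type
    labels, sigmas: one Gaussian kernel width per class (simplification)."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        # Average Gaussian kernel value over the class's training deposits
        scores.append(np.mean(np.exp(-d2 / (2 * sigmas[c] ** 2))))
    return classes[int(np.argmax(scores))]

# Toy example: 5 "minerals", two deposit types, one unknown deposit.
X = np.array([[1, 1, 0, 0, 0], [1, 1, 1, 0, 0], [0, 0, 1, 1, 1], [0, 1, 1, 1, 1]])
y = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([1, 1, 0, 1, 0]), X, y, sigmas={0: 0.8, 1: 0.8}))
```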

  2. Two distinct neural networks support the mapping of meaning to a novel word.

    PubMed

    Ye, Zheng; Mestres-Missé, Anna; Rodriguez-Fornells, Antoni; Münte, Thomas F

    2011-07-01

    Children can learn the meaning of a new word from context during normal reading or listening, without any explicit instruction. It is unclear how such meaning acquisition is supported and achieved in the human brain. In this functional magnetic resonance imaging (fMRI) study we investigated neural networks supporting word learning with a functional connectivity approach. Participants were exposed to a new word presented in two successive sentences and needed to derive the meaning of the new word. We observed two neural networks involved in mapping the meaning to the new word. One network connected the left inferior frontal gyrus (LIFG) with the middle frontal gyrus (MFG), medial superior frontal gyrus, caudate nucleus, thalamus, and inferior parietal lobule. The other network connected the left middle temporal gyrus (LMTG) with the MFG, anterior and posterior cingulate cortex. The LIFG network showed stronger interregional interactions for new than real words, whereas the LMTG network showed similar connectivity patterns for new and real words. We proposed that these two networks support different functions during word learning. The LIFG network appears to select the most appropriate meaning from competing candidates and to map the selected meaning onto the new word. The LMTG network may be recruited to integrate the word into sentential context, regardless of whether the word is real or new. The LIFG and the LMTG networks share a common node, the MFG, suggesting that these two networks communicate in working memory. Copyright © 2010 Wiley-Liss, Inc.

  3. Is "efficiency" a useful concept in cognitive neuroscience?

    PubMed

    Poldrack, Russell A

    2015-02-01

    It is common in the cognitive neuroscience literature to explain differences in activation in terms of differences in the "efficiency" of neural function. I argue here that this usage of the concept of efficiency is empty and simply redescribes activation differences rather than providing a useful explanation of them. I examine a number of possible explanations for differential activation in terms of task performance, neuronal computation, neuronal energetics, and network organization. While the concept of "efficiency" is vacuous as it is commonly employed in the neuroimaging literature, an examination of brain development in the context of neural coding, neuroenergetics, and network structure provides a roadmap for future investigation, which is fundamental to an improved understanding of developmental effects and group differences in neuroimaging signals. Copyright © 2014 The Author. Published by Elsevier Ltd.. All rights reserved.

  4. Neuromorphic device architectures with global connectivity through electrolyte gating

    NASA Astrophysics Data System (ADS)

    Gkoupidenis, Paschalis; Koutsouras, Dimitrios A.; Malliaras, George G.

    2017-05-01

    Information processing in the brain takes place in a network of neurons that are connected with each other by an immense number of synapses. At the same time, neurons are immersed in a common electrochemical environment, and global parameters such as concentrations of various hormones regulate the overall network function. This computational paradigm of global regulation, also known as homeoplasticity, has important implications in the overall behaviour of large neural ensembles and is barely addressed in neuromorphic device architectures. Here, we demonstrate the global control of an array of organic devices based on poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) that are immersed in an electrolyte, a behaviour that resembles homeoplasticity phenomena of the neural environment. We use this effect to produce behaviour that is reminiscent of the coupling between local activity and global oscillations in biological neural networks. We further show that the electrolyte establishes complex connections between individual devices, and leverage these connections to implement coincidence detection. These results demonstrate that electrolyte gating offers significant advantages for the realization of networks of neuromorphic devices of higher complexity and with minimal hardwired connectivity.

  5. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least-squares and spherical harmonics function system error correction methods. The accuracy of the neural network method depends mainly on the structure of the neural network. Analysis and simulation show that both the BP and the RBF neural network system error correction methods achieve high correction accuracy; considering training speed and network scale, the RBF network method is preferable to the BP network method when only a small training sample set is available.
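
    As background on the RBF alternative favored above, here is a minimal Gaussian RBF network fitted by linear least squares on made-up pointing data; the two-input/one-output layout, the center selection, and the kernel width are illustrative assumptions, not the tracker's actual error model.

```python
import numpy as np

def fit_rbf(X, y, centers, width):
    """Fit a Gaussian RBF network by solving the linear output layer
    with least squares (centers and width are fixed here)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2 * width ** 2))
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return W

def rbf_predict(X, centers, width, W):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2)) @ W

# Toy pointing-error correction: azimuth/elevation in, correction out.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))           # stand-in mount angles
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])  # stand-in systematic error
centers = X[rng.choice(len(X), 20, replace=False)]
W = fit_rbf(X, y, centers, width=0.3)
print("fit RMSE:", np.sqrt(np.mean((rbf_predict(X, centers, 0.3, W) - y) ** 2)))
```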

  6. A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning.

    PubMed

    Kappel, David; Legenstein, Robert; Habenschuss, Stefan; Hsieh, Michael; Maass, Wolfgang

    2018-01-01

    Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After reaching good computational performance it causes primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.

  7. A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning

    PubMed Central

    Habenschuss, Stefan; Hsieh, Michael

    2018-01-01

    Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After reaching good computational performance it causes primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations. PMID:29696150

  8. A novel recurrent neural network with finite-time convergence for linear programming.

    PubMed

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
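
    The sketch below conveys the general neuro-dynamic idea of solving a linear program by integrating an ODE whose equilibria approximate the optimum; it uses a simple penalty-based gradient flow with Euler integration and clipping for nonnegativity, which is not the authors' finite-time-convergent network, and the toy problem, penalty weight, and step size are illustrative.

```python
import numpy as np

# Toy LP: minimize c @ x subject to A @ x = b, x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Continuous-time "neuro-dynamic" solver: integrate a penalty-based
# gradient flow with forward Euler and clip to enforce x >= 0.
K, dt = 200.0, 5e-4                     # penalty weight, Euler step size
x = np.zeros(2)                         # initial network state
for _ in range(40000):
    grad = c + K * A.T @ (A @ x - b)    # gradient of the penalized objective
    x = np.clip(x - dt * grad, 0.0, None)

print("approximate LP solution:", x)    # close to the optimum [1, 0]
```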

  9. A recurrent neural-network-based sensor and actuator fault detection and isolation for nonlinear systems with application to the satellite's attitude control subsystem.

    PubMed

    Talebi, H A; Khorasani, K; Tafazoli, S

    2009-01-01

    This paper presents a robust fault detection and isolation (FDI) scheme for a general class of nonlinear systems using a neural-network-based observer strategy. Both actuator and sensor faults are considered. The nonlinear system considered is subject to both state and sensor uncertainties and disturbances. Two recurrent neural networks are employed to identify general unknown actuator and sensor faults, respectively. The neural network weights are updated according to a modified backpropagation scheme. Unlike many previous methods developed in the literature, our proposed FDI scheme does not rely on availability of full state measurements. The stability of the overall FDI scheme in presence of unknown sensor and actuator faults as well as plant and sensor noise and uncertainties is shown by using the Lyapunov's direct method. The stability analysis developed requires no restrictive assumptions on the system and/or the FDI algorithm. Magnetorquer-type actuators and magnetometer-type sensors that are commonly employed in the attitude control subsystem (ACS) of low-Earth orbit (LEO) satellites for attitude determination and control are considered in our case studies. The effectiveness and capabilities of our proposed fault diagnosis strategy are demonstrated and validated through extensive simulation studies.

  10. Scaling of counter-current imbibition recovery curves using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Jafari, Iman; Masihi, Mohsen; Nasiri Zarandi, Masoud

    2018-06-01

    The scaling of imbibition curves is of great importance in the characterization and simulation of oil production from naturally fractured reservoirs. Different parameters such as matrix porosity and permeability, oil and water viscosities, matrix dimensions, and oil/water interfacial tensions have an effect on the imbibition process. Studies on the scaling of imbibition curves, under different assumptions, have resulted in various scaling equations. In this work, using an artificial neural network (ANN) method, a novel technique is presented for scaling imbibition recovery curves, which can be used for scaling experimental and field-scale imbibition cases. The imbibition recovery curves for training and testing the neural network were gathered through the simulation of different scenarios using a commercial reservoir simulator. In this ANN-based method, six parameters were assumed to have an effect on the imbibition process and were considered as the inputs for training the network. Using the ‘Bayesian regularization’ training algorithm, the network was trained and tested. Training and testing phases showed superior results in comparison with the other scaling methods. It is concluded that the new technique is useful for scaling imbibition recovery curves, especially for complex cases for which the common scaling methods are not designed.

  11. From Molecular Circuit Dysfunction to Disease: Case Studies in Epilepsy, Traumatic Brain Injury, and Alzheimer’s Disease

    PubMed Central

    Dulla, Chris G.; Coulter, Douglas A.; Ziburkus, Jokubas

    2015-01-01

    Complex circuitry with feed-forward and feed-back systems regulate neuronal activity throughout the brain. Cell biological, electrical, and neurotransmitter systems enable neural networks to process and drive the entire spectrum of cognitive, behavioral, and motor functions. Simultaneous orchestration of distinct cells and interconnected neural circuits relies on hundreds, if not thousands, of unique molecular interactions. Even single molecule dysfunctions can be disrupting to neural circuit activity, leading to neurological pathology. Here, we sample our current understanding of how molecular aberrations lead to disruptions in networks using three neurological pathologies as exemplars: epilepsy, traumatic brain injury (TBI), and Alzheimer’s disease (AD). Epilepsy provides a window into how total destabilization of network balance can occur. TBI is an abrupt physical disruption that manifests in both acute and chronic neurological deficits. Last, in AD progressive cell loss leads to devastating cognitive consequences. Interestingly, all three of these neurological diseases are interrelated. The goal of this review, therefore, is to identify molecular changes that may lead to network dysfunction, elaborate on how altered network activity and circuit structure can contribute to neurological disease, and suggest common threads that may lie at the heart of molecular circuit dysfunction. PMID:25948650

  12. From Molecular Circuit Dysfunction to Disease: Case Studies in Epilepsy, Traumatic Brain Injury, and Alzheimer's Disease.

    PubMed

    Dulla, Chris G; Coulter, Douglas A; Ziburkus, Jokubas

    2016-06-01

    Complex circuitry with feed-forward and feed-back systems regulate neuronal activity throughout the brain. Cell biological, electrical, and neurotransmitter systems enable neural networks to process and drive the entire spectrum of cognitive, behavioral, and motor functions. Simultaneous orchestration of distinct cells and interconnected neural circuits relies on hundreds, if not thousands, of unique molecular interactions. Even single molecule dysfunctions can be disrupting to neural circuit activity, leading to neurological pathology. Here, we sample our current understanding of how molecular aberrations lead to disruptions in networks using three neurological pathologies as exemplars: epilepsy, traumatic brain injury (TBI), and Alzheimer's disease (AD). Epilepsy provides a window into how total destabilization of network balance can occur. TBI is an abrupt physical disruption that manifests in both acute and chronic neurological deficits. Last, in AD progressive cell loss leads to devastating cognitive consequences. Interestingly, all three of these neurological diseases are interrelated. The goal of this review, therefore, is to identify molecular changes that may lead to network dysfunction, elaborate on how altered network activity and circuit structure can contribute to neurological disease, and suggest common threads that may lie at the heart of molecular circuit dysfunction. © The Author(s) 2015.

  13. A classifier neural network for rotordynamic systems

    NASA Astrophysics Data System (ADS)

    Ganesan, R.; Jionghua, Jin; Sankar, T. S.

    1995-07-01

    A feedforward backpropagation neural network is formed to identify the stability characteristic of a high speed rotordynamic system. The principal focus resides in accounting for the instability due to bearing clearance effects. The abnormal operating condition of 'normal-loose' Coulomb rub, which arises in units supported by hydrodynamic bearings or rolling element bearings, is analysed in detail. The multiple-parameter stability problem is formulated and converted to a set of three-parameter algebraic inequality equations. These three parameters map the wider range of physical parameters of commonly-used rotordynamic systems into a narrow closed region, which is used in the supervised learning of the neural network. A binary-type state of the system is expressed through these inequalities, which are deduced from the analytical simulation of the rotor system. Both hidden-layer and functional-link networks are formed and the superiority of the functional-link network is established. Considering the real-time interpretation and control of the rotordynamic system, the network reliability and the learning time are used as the evaluation criteria to assess the superiority of the functional-link network. This functional-link network is further trained using the parameter values of selected rotor systems, and the classifier network is formed. The success rate of stability-status identification is obtained to assess the potential of this classifier network. It is shown that the classifier network can also be used, for control purposes, as an 'advisory' system that suggests the optimum way of adjusting parameters.

  14. An adaptive neural swarm approach for intrusion defense in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Cannady, James

    2011-06-01

    Wireless sensor networks (WSN) and mobile ad hoc networks (MANET) are being increasingly deployed in critical applications due to the flexibility and extensibility of the technology. While these networks possess numerous advantages over traditional wireless systems in dynamic environments they are still vulnerable to many of the same types of host-based and distributed attacks common to those systems. Unfortunately, the limited power and bandwidth available in WSNs and MANETs, combined with the dynamic connectivity that is a defining characteristic of the technology, makes it extremely difficult to utilize traditional intrusion detection techniques. This paper describes an approach to accurately and efficiently detect potentially damaging activity in WSNs and MANETs. It enables the network as a whole to recognize attacks, anomalies, and potential vulnerabilities in a distributive manner that reflects the autonomic processes of biological systems. Each component of the network recognizes activity in its local environment and then contributes to the overall situational awareness of the entire system. The approach utilizes agent-based swarm intelligence to adaptively identify potential data sources on each node and on adjacent nodes throughout the network. The swarm agents then self-organize into modular neural networks that utilize a reinforcement learning algorithm to identify relevant behavior patterns in the data without supervision. Once the modular neural networks have established interconnectivity both locally and with neighboring nodes the analysis of events within the network can be conducted collectively in real-time. The approach has been shown to be extremely effective in identifying distributed network attacks.

  15. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.

  16. Resolution of Spatial and Temporal Visual Attention in Infants with Fragile X Syndrome

    ERIC Educational Resources Information Center

    Farzin, Faraz; Rivera, Susan M.; Whitney, David

    2011-01-01

    Fragile X syndrome is the most common cause of inherited intellectual impairment and the most common single-gene cause of autism. Individuals with fragile X syndrome present with a neurobehavioural phenotype that includes selective deficits in spatiotemporal visual perception associated with neural processing in frontal-parietal networks of the…

  17. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
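
    To make the stochastic-computing idea concrete, the toy sketch below encodes two values as pseudorandom Bernoulli pulse sequences and multiplies them with a bitwise AND, recovering the product as an average pulse occurrence rate; the stream length and the two values are arbitrary, and the hardware DMNN of the abstract builds full neural operations from such simple gates.

```python
import numpy as np

rng = np.random.default_rng(1)

def pulse_stream(p, n):
    """Encode a value p in [0, 1] as a Bernoulli pulse sequence of length n."""
    return rng.random(n) < p

# In pulse-mode stochastic computing, multiplication of two independently
# encoded probabilities reduces to a bitwise AND of their pulse streams.
w, a, n = 0.6, 0.8, 4096
product_stream = pulse_stream(w, n) & pulse_stream(a, n)
estimate = product_stream.mean()            # average pulse occurrence rate
print(f"exact {w * a:.3f}  stochastic estimate {estimate:.3f}")
```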

  18. A common functional neural network for overt production of speech and gesture.

    PubMed

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots

    PubMed Central

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-01-01

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing registers are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases. PMID:26528986
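
    As a hedged illustration of the classifier side of this approach (the NARX variant and the real exoskeleton signals are not reproduced here), the sketch below trains a small multilayer perceptron on synthetic stand-ins for segment-orientation and joint-velocity features with four hypothetical gait-phase labels.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical 4-dimensional feature vectors (stand-ins for segment
# orientations and joint angular velocities) with four gait-phase labels.
n = 800
y = rng.integers(0, 4, size=n)
X = y[:, None] + 0.5 * rng.standard_normal((n, 4))

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X[:600], y[:600])
print("held-out accuracy:", round(clf.score(X[600:], y[600:]), 3))
```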

  20. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots.

    PubMed

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-10-30

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing registers are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases.

  1. Statistical process control using optimized neural networks: a case study.

    PubMed

    Addeh, Jalil; Ebrahimzadeh, Ata; Azarbad, Milad; Ranaee, Vahid

    2014-09-01

    The most common statistical process control (SPC) tools employed for monitoring process changes are control charts. A control chart demonstrates that the process has altered by generating an out-of-control signal. This study investigates the design of an accurate system for recognition of control chart patterns (CCPs) in two aspects. First, an efficient system is introduced that includes two main modules: a feature extraction module and a classifier module. In the feature extraction module, a proper set of shape and statistical features is proposed as efficient characteristics of the patterns. In the classifier module, several neural networks, such as the multilayer perceptron, probabilistic neural network and radial basis function network, are investigated. Based on an experimental study, the best classifier is chosen in order to recognize the CCPs. Second, a hybrid heuristic recognition system is introduced based on the cuckoo optimization algorithm (COA) to improve the generalization performance of the classifier. The simulation results show that the proposed algorithm has high recognition accuracy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Evaluation of Raman spectra of human brain tumor tissue using the learning vector quantization neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong

    2016-05-01

    The Raman spectra of tissue from 20 brain tumor patients were recorded in vitro using a confocal microlaser Raman spectroscope with 785 nm excitation. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. However, in this study, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to extracting Raman spectral information. Moreover, it is fast and convenient, does not require the spectral peak counterpart, and achieves a relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
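
    Because LVQ is less familiar than the PCA/LDA/SVM pipelines mentioned above, here is a minimal LVQ1 training loop on made-up two-feature "spectra"; the prototype counts, learning rate, and toy data are illustrative and not derived from the Raman measurements in the study.

```python
import numpy as np

def train_lvq1(X, y, n_proto_per_class=2, lr=0.05, epochs=50, seed=0):
    """Minimal LVQ1: move the winning prototype toward same-class samples
    and away from different-class samples."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_proto_per_class, replace=False)
        protos.append(X[idx])
        labels.append(np.full(n_proto_per_class, c))
    P, L = np.vstack(protos), np.concatenate(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = np.argmin(((P - X[i]) ** 2).sum(axis=1))   # winning prototype
            sign = 1.0 if L[w] == y[i] else -1.0
            P[w] += sign * lr * (X[i] - P[w])
    return P, L

def lvq_predict(P, L, X):
    return L[((P[None, :, :] - X[:, None, :]) ** 2).sum(axis=2).argmin(axis=1)]

# Toy 2-feature "spectra": two classes with different peak-intensity ratios.
rng = np.random.default_rng(1)
X0 = rng.normal([1.0, 0.2], 0.1, size=(30, 2))
X1 = rng.normal([0.3, 0.9], 0.1, size=(30, 2))
X, y = np.vstack([X0, X1]), np.array([0] * 30 + [1] * 30)
P, L = train_lvq1(X, y)
print("training accuracy:", (lvq_predict(P, L, X) == y).mean())
```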

  3. Training feed-forward neural networks with gain constraints

    PubMed

    Hartman

    2000-04-01

    Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
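
    A minimal sketch of the penalty idea described above, assuming a one-hidden-layer tanh network so the input-output gains can be written analytically; only the penalized objective is shown (training would apply gradient descent to it), and the hinge form, bound, and weighting are illustrative choices rather than the paper's exact adaptive scheme.

```python
import numpy as np

def forward(W1, b1, W2, b2, x):
    """One-hidden-layer tanh network; returns output and hidden activations."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def gain(W1, W2, h):
    """Analytical input-output gain dy/dx for the tanh network above."""
    return W2 @ np.diag(1.0 - h ** 2) @ W1

def penalized_loss(params, X, Y, gain_upper, lam=10.0):
    """Mean-squared error plus a hinge penalty whenever any gain exceeds
    its upper bound (equality or lower bounds would be handled the same way)."""
    W1, b1, W2, b2 = params
    mse, pen = 0.0, 0.0
    for x, y in zip(X, Y):
        out, h = forward(W1, b1, W2, b2, x)
        mse += np.sum((out - y) ** 2)
        pen += np.sum(np.maximum(gain(W1, W2, h) - gain_upper, 0.0) ** 2)
    return mse / len(X) + lam * pen / len(X)

# Example shapes: 3 inputs, 5 hidden units, 1 output, gains bounded above by 2.
rng = np.random.default_rng(0)
params = (rng.normal(size=(5, 3)), np.zeros(5), rng.normal(size=(1, 5)), np.zeros(1))
X, Y = rng.normal(size=(10, 3)), rng.normal(size=(10, 1))
print(penalized_loss(params, X, Y, gain_upper=2.0))
```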

  4. Linear matrix inequality approach to exponential synchronization of a class of chaotic neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Cui, Bao-Tong

    2007-07-01

    In this paper, a synchronization scheme for a class of chaotic neural networks with time-varying delays is presented. This class of chaotic neural networks covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory networks. The obtained criteria are expressed in terms of linear matrix inequalities, thus they can be efficiently verified. A comparison between our results and the previous results shows that our results are less restrictive.

  5. Improving subjective pattern recognition in chemical senses through reduction of nonlinear effects in evaluation of sparse data

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Rasouli, Firooz; Wrenn, Susan E.; Subbiah, M.

    2002-11-01

    Artificial neural network models are typically useful in pattern recognition and extraction of important features in large data sets. These models are implemented in a wide variety of contexts and with diverse types of input-output data. The underlying mathematics of supervised training of neural networks is ultimately tied to the ability to approximate the nonlinearities that are inherent in the network's generalization ability. The quality and availability of sufficient data points for training and validation play a key role in the generalization ability of the network. A potential domain of applications of neural networks is in analysis of subjective data, such as in consumer science, affective neuroscience and perception of chemical senses. In applications of ANN to subjective data, it is common to rely on knowledge of the science and context for data acquisition, for instance as a priori probabilities in the Bayesian framework. In this paper, we discuss the circumstances that create challenges for success of neural network models for subjective data analysis, such as sparseness of data and cost of acquisition of additional samples. In particular, in the case of affect and perception of chemical senses, we suggest that the inherent ambiguity of subjective responses could be offset by a combination of human and machine expertise. We propose a method of pre- and post-processing for blind analysis of data that relies on heuristics from human performance in interpretation of data. In particular, we offer an information-theoretic smoothing (ITS) algorithm that optimizes the geometric visualization of multi-dimensional data and improves human interpretation of the input-output view of neural network implementations. The pre- and post-processing algorithms and ITS are unsupervised. Finally, we discuss the details of an example of blind data analysis from actual taste-smell subjective data, and demonstrate the usefulness of PCA in reduction of dimensionality, as well as ITS.

  6. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  7. The neural network to determine the mechanical properties of the steels

    NASA Astrophysics Data System (ADS)

    Yemelyanov, Vitaliy; Yemelyanova, Nataliya; Safonova, Marina; Nedelkin, Aleksey

    2018-04-01

    The authors describe the neural network structure and software designed and developed to determine the mechanical properties of steels. The neural network is developed to refine the estimated values of the steel properties. The results of simulations of the developed neural network are shown, and the authors note its low standard error. Specialized software has been developed to realize the proposed neural network.

  8. Shared atypical default mode and salience network functional connectivity between autism and schizophrenia.

    PubMed

    Chen, Heng; Uddin, Lucina Q; Duan, Xujun; Zheng, Junjie; Long, Zhiliang; Zhang, Youxue; Guo, Xiaonan; Zhang, Yan; Zhao, Jingping; Chen, Huafu

    2017-11-01

    Schizophrenia and autism spectrum disorder (ASD) are two prevalent neurodevelopmental disorders sharing some similar genetic basis and clinical features. The extent to which they share common neural substrates remains unclear. Resting-state fMRI data were collected from 35 drug-naïve adolescent participants with first-episode schizophrenia (15.6 ± 1.8 years old) and 31 healthy controls (15.4 ± 1.6 years old). Data from 22 participants with ASD (13.1 ± 3.1 years old) and 21 healthy controls (12.9 ± 2.9 years old) were downloaded from the Autism Brain Imaging Data Exchange. Resting-state functional networks were constructed using predefined regions of interest. Multivariate pattern analysis combined with multi-task regression feature selection methods were conducted in two datasets separately. Classification between individuals with disorders and controls was achieved with high accuracy (schizophrenia dataset: accuracy = 83%; ASD dataset: accuracy = 80%). Shared atypical brain connections contributing to classification were mostly present in the default mode network (DMN) and salience network (SN). These functional connections were further related to severity of social deficits in ASD (p = 0.002). Distinct atypical connections were also more related to the DMN and SN, but showed different atypical connectivity patterns between the two disorders. These results suggest some common neural mechanisms contributing to schizophrenia and ASD, and may aid in understanding the pathology of these two neurodevelopmental disorders. Autism Res 2017, 10: 1776-1786. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. Autism spectrum disorder (ASD) and schizophrenia are two common neurodevelopmental disorders which share several genetic and behavioral features. The present study identified common neural mechanisms contributing to ASD and schizophrenia using resting-state functional MRI data. The results may help to understand the pathology of these two neurodevelopmental disorders. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  9. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory

    PubMed Central

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374

  10. Decreasing-Rate Pruning Optimizes the Construction of Efficient and Robust Distributed Networks.

    PubMed

    Navlakha, Saket; Barth, Alison L; Bar-Joseph, Ziv

    2015-07-01

    Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. During neural network development in the brain, synapses are massively over-produced and then pruned-back over time. This strategy is not commonly used when designing engineered networks, since adding connections that will soon be removed is considered wasteful. Here, we show that for large distributed routing networks, network function is markedly enhanced by hyper-connectivity followed by aggressive pruning and that the global rate of pruning, a developmental parameter not previously studied by experimentalists, plays a critical role in optimizing network structure. We first used high-throughput image analysis techniques to quantify the rate of pruning in the mammalian neocortex across a broad developmental time window and found that the rate is decreasing over time. Based on these results, we analyzed a model of computational routing networks and show using both theoretical analysis and simulations that decreasing rates lead to more robust and efficient networks compared to other rates. We also present an application of this strategy to improve the distributed design of airline networks. Thus, inspiration from neural network formation suggests effective ways to design distributed networks across several domains.

  11. Decreasing-Rate Pruning Optimizes the Construction of Efficient and Robust Distributed Networks

    PubMed Central

    Navlakha, Saket; Barth, Alison L.; Bar-Joseph, Ziv

    2015-01-01

    Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. During neural network development in the brain, synapses are massively over-produced and then pruned-back over time. This strategy is not commonly used when designing engineered networks, since adding connections that will soon be removed is considered wasteful. Here, we show that for large distributed routing networks, network function is markedly enhanced by hyper-connectivity followed by aggressive pruning and that the global rate of pruning, a developmental parameter not previously studied by experimentalists, plays a critical role in optimizing network structure. We first used high-throughput image analysis techniques to quantify the rate of pruning in the mammalian neocortex across a broad developmental time window and found that the rate is decreasing over time. Based on these results, we analyzed a model of computational routing networks and show using both theoretical analysis and simulations that decreasing rates lead to more robust and efficient networks compared to other rates. We also present an application of this strategy to improve the distributed design of airline networks. Thus, inspiration from neural network formation suggests effective ways to design distributed networks across several domains. PMID:26217933

  12. On the origin of reproducible sequential activity in neural circuits

    NASA Astrophysics Data System (ADS)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
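
    To make the construction concrete, the sketch below integrates a three-unit generalized Lotka-Volterra rate model with asymmetric inhibition (May-Leonard-type parameters chosen here purely for illustration); with a little noise, activity visits the three saddle states in a fixed, reproducible order, which is the kind of stable heteroclinic sequence the paper analyzes.

```python
import numpy as np

def simulate_lv(steps=10000, dt=0.01, noise=1e-3, seed=0):
    """Three-unit generalized Lotka-Volterra firing-rate network whose
    asymmetric inhibition yields sequential switching between units."""
    rng = np.random.default_rng(seed)
    a, b = 0.5, 2.0                         # competition asymmetry (a < 1 < b)
    rho = np.array([[1.0, a, b],
                    [b, 1.0, a],
                    [a, b, 1.0]])
    sigma = np.ones(3)
    x = np.array([0.9, 0.05, 0.05])
    traj = np.empty((steps, 3))
    for t in range(steps):
        # Euler step of dx/dt = x * (sigma - rho @ x), plus tiny positive noise
        x = x + dt * (x * (sigma - rho @ x) + noise * rng.random(3))
        traj[t] = x
    return traj

traj = simulate_lv()
print("dominant unit over time:", traj[::1000].argmax(axis=1))
```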

  13. On the origin of reproducible sequential activity in neural circuits.

    PubMed

    Afraimovich, V S; Zhigulin, V P; Rabinovich, M I

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.

  14. A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints.

    PubMed

    Liang, X B; Wang, J

    2000-01-01

    This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial states even at outside of the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
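
    A discretized sketch of a projection-type recurrent network of this general family, assuming a strictly convex quadratic objective and box bounds; the Euler step size, the gain alpha, and the toy problem are illustrative, and the continuous-time analysis in the paper is what guarantees the convergence properties summarized above.

```python
import numpy as np

def project(x, lo, hi):
    """Projection onto the box (bound) constraints."""
    return np.clip(x, lo, hi)

def solve_bound_constrained(grad_f, x0, lo, hi, alpha=0.5, dt=0.05, steps=2000):
    """Integrate the projection-type recurrent network
        dx/dt = -x + P_box(x - alpha * grad f(x))
    with forward Euler; its equilibria satisfy the optimality conditions
    of min f(x) subject to lo <= x <= hi."""
    x = x0.astype(float)
    for _ in range(steps):
        x = x + dt * (project(x - alpha * grad_f(x), lo, hi) - x)
    return x

# Example: strictly convex quadratic f(x) = 0.5 x'Qx - b'x on [0, 1]^2.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([4.0, -1.0])
grad_f = lambda x: Q @ x - b
print(solve_bound_constrained(grad_f, np.zeros(2), 0.0, 1.0))  # approx [1, 0]
```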

  15. [Laser induced fluorescence spectrum characteristics of common edible oil and fried cooking oil].

    PubMed

    Mu, Tao-tao; Chen, Si-ying; Zhang, Yin-chao; Chen, He; Guo, Pan; Ge, Xian-ying; Gao, Li-lei

    2013-09-01

    In order to detect trench oil, the authors built a rapid trench oil detection system based on laser-induced fluorescence detection technology. This system used a 355 nm laser as the excitation light source. The authors collected the fluorescence spectra of a variety of edible oils and of fried cooking oil (a kind of trench oil) and then set up a fluorescence spectrum database using the detection system. It was found that the fluorescence characteristics of fried cooking oil and common edible oil were obviously different. Oil recognition and rapid trench oil detection could then easily be realized using principal component analysis and a BP neural network, with an overall recognition rate as high as 97.5%. Experiments showed that laser-induced fluorescence spectroscopy is fast, non-contact, and highly sensitive. Combined with a BP neural network, it could become a new technique for detecting trench oil.

  16. Random Bits Forest: a Strong Classifier/Regressor for Big Data

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Yi; Pu, Weilin; Wen, Kathryn; Shugart, Yin Yao; Xiong, Momiao; Jin, Li

    2016-07-01

    Efficiency, memory consumption, and robustness are common problems with many popular methods for data analysis. As a solution, we present Random Bits Forest (RBF), a classification and regression algorithm that integrates neural networks (for depth), boosting (for width), and random forests (for prediction accuracy). Through a gradient boosting scheme, it first generates and selects ~10,000 small, 3-layer random neural networks. These networks are then fed into a modified random forest algorithm to obtain predictions. Testing with datasets from the UCI (University of California, Irvine) Machine Learning Repository shows that RBF outperforms other popular methods in both accuracy and robustness, especially with large datasets (N > 1000). The algorithm also performed highly in testing with an independent data set, a real psoriasis genome-wide association study (GWAS).
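
    A loose sketch of the pipeline described above: outputs of many small random 3-layer networks are used as features for a random forest. The gradient-boosting-based generation and selection of networks from the paper is omitted, and the dataset, network count, and layer sizes here are arbitrary stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def random_net_features(X, n_nets=200, hidden=3, seed=0):
    """Outputs of many small random 3-layer tanh networks, used as features.
    (The published method additionally generates and selects networks with a
    gradient-boosting scheme, which is omitted in this sketch.)"""
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_nets):
        W1 = rng.normal(size=(X.shape[1], hidden))
        W2 = rng.normal(size=(hidden, hidden))
        w3 = rng.normal(size=hidden)
        feats.append(np.tanh(np.tanh(X @ W1) @ W2) @ w3)
    return np.column_stack(feats)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(random_net_features(X_tr), y_tr)
print("test accuracy:", round(rf.score(random_net_features(X_te), y_te), 3))
```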

  17. Persistent neural activity in head direction cells

    NASA Technical Reports Server (NTRS)

    Taube, Jeffrey S.; Bassett, Joshua P.; Oman, C. M. (Principal Investigator)

    2003-01-01

    Many neurons throughout the rat limbic system discharge in relation to the animal's directional heading with respect to its environment. These so-called head direction (HD) cells exhibit characteristics of persistent neural activity. This article summarizes where HD cells are found, their major properties, and some of the important experiments that have been conducted to elucidate how this signal is generated. The number of HD and angular head velocity cells was estimated for several brain areas involved in the generation of the HD signal, including the postsubiculum, anterior dorsal thalamus, lateral mammillary nuclei and dorsal tegmental nucleus. The HD cell signal has many features in common with what is known about how neural integration is accomplished in the oculomotor system. The nature of the HD cell signal makes it an attractive candidate for using neural network models to elucidate the signal's underlying mechanisms. The conditions that any network model must satisfy in order to accurately represent how the nervous system generates this signal are highlighted and areas where key information is missing are discussed.

  18. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    USGS Publications Warehouse

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can offer better disturbance rejection and set-point tracking, shorter rise and settling times, and lower set-point overshoot, and that they can be more reliable and easier to implement in complex, multivariable plants.

  19. Self-organizing linear output map (SOLO): An artificial neural network suitable for hydrologic modeling and analysis

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Lin; Gupta, Hoshin V.; Gao, Xiaogang; Sorooshian, Soroosh; Imam, Bisher

    2002-12-01

    Artificial neural networks (ANNs) can be useful in the prediction of hydrologic variables, such as streamflow, particularly when the underlying processes have complex nonlinear interrelationships. However, conventional ANN structures suffer from network training issues that significantly limit their widespread application. This paper presents a multivariate ANN procedure entitled self-organizing linear output map (SOLO), whose structure has been designed for rapid, precise, and inexpensive estimation of network structure/parameters and system outputs. More important, SOLO provides features that facilitate insight into the underlying processes, thereby extending its usefulness beyond forecast applications as a tool for scientific investigations. These characteristics are demonstrated using a classic rainfall-runoff forecasting problem. Various aspects of model performance are evaluated in comparison with other commonly used modeling approaches, including multilayer feedforward ANNs, linear time series modeling, and conceptual rainfall-runoff modeling.

  20. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  1. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    PubMed

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization process, and communication time delays is investigated. The information communication channel between the master chaotic neural network and slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of output information of the master neural network. At each sampling instant, each sensor updates its own measurement and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Thus, such a communication process and control strategy are much more energy saving compared with the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, allowable length of sampling intervals, and upper bound of network-induced delays are derived to ensure the quantized synchronization of master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  2. Modified neural networks for rapid recovery of tokamak plasma parameters for real time control

    NASA Astrophysics Data System (ADS)

    Sengupta, A.; Ranjan, P.

    2002-07-01

    Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real time plasma control. Unlike the conventional network structure, in which a single network with the optimum number of processing elements calculates the outputs, in one of the methods a multinetwork system connected in parallel does the calculations. This network is called the double neural network. The accuracy of the recovered parameters is clearly higher than that of the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. The principal component transformation removes linear dependences from the measurements and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the recovered parameters in the latter type of modified network is found to be a further improvement over the accuracy of the double neural network. This result differs from that obtained in an earlier work where the double neural network showed better performance. The conventional network and the function parametrization methods have also been used for comparison. The conventional network has been used for an optimization of the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with the principal component based network. Fault tolerance of the neural networks has been tested. The double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally the processing times of the methods have been compared. The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.

  3. Control of magnetic bearing systems via the Chebyshev polynomial-based unified model (CPBUM) neural network.

    PubMed

    Jeng, J T; Lee, T T

    2000-01-01

    A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to control a magnetic bearing system. First, we show that the CPBUM neural network not only has the same universal approximation capability as a conventional feedforward/recurrent neural network but also learns faster. It turns out that the CPBUM neural network is more suitable for controller design than the conventional feedforward/recurrent neural network. Second, we propose the inverse system method, based on CPBUM neural networks, to control a magnetic bearing system. The proposed controller has two structures, namely off-line and on-line learning structures. We derive a new learning algorithm for each proposed structure. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
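
    The CPBUM itself is not specified here; a closely related and much simpler construction is the Chebyshev functional-expansion network, in which fixed Chebyshev polynomial features feed a linear output layer that can be trained very quickly. The sketch below shows that construction on an invented one-dimensional nonlinearity; the target function and polynomial order are assumptions.

```python
import numpy as np

def chebyshev_features(x, order=5):
    # Chebyshev polynomials T_0..T_order via the recurrence T_{k+1} = 2x*T_k - T_{k-1}.
    # Inputs are assumed to be scaled to [-1, 1].
    feats = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        feats.append(2 * x * feats[-1] - feats[-2])
    return np.column_stack(feats)

# Toy single-input plant nonlinearity to approximate (illustrative choice).
x = np.linspace(-1, 1, 200)
y = np.sin(np.pi * x) + 0.3 * x ** 2

# With the fixed Chebyshev expansion, only a linear output layer is trained,
# here in closed form by least squares (one reason such models train quickly).
Phi = chebyshev_features(x, order=7)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("max approximation error:", np.max(np.abs(Phi @ w - y)))
```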

  4. ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.

    PubMed

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-07-20

    Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining neural dynamics of cellular neural network with ChainMail mechanism. The proposed method formulates the problem of elastic deformation into cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of cellular neural network. Experiments demonstrate that the proposed neural network approach is capable of modeling the soft tissues' nonlinear deformation and typical mechanical behaviors. The proposed method not only improves ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principle of continuum mechanics to simulate soft tissue deformation.

  5. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452

  6. Nested Neural Networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1992-01-01

    Report presents analysis of nested neural networks, consisting of interconnected subnetworks. Analysis based on simplified mathematical models more appropriate for artificial electronic neural networks, partly applicable to biological neural networks. Nested structure allows for retrieval of individual subpatterns. Requires fewer wires and connection devices than fully connected networks, and allows for local reconstruction of damaged subnetworks without rewiring entire network.

  7. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.

    PubMed

    Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H; Gibescu, Madeleine; Liotta, Antonio

    2018-06-19

    Owing to the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
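
    A minimal sketch of the evolutionary step described above, assuming magnitude-based pruning of a fraction zeta of connections followed by random regrowth at empty positions; the layer size, density and zeta are illustrative, and the interleaved gradient training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi_mask(n_in, n_out, density=0.1):
    # Initial sparse topology: each possible connection exists with a fixed probability.
    return rng.random((n_in, n_out)) < density

def prune_and_regrow(W, mask, zeta=0.3):
    # Remove the fraction zeta of existing connections with the smallest magnitude,
    # then grow the same number of new connections at random empty positions.
    threshold = np.quantile(np.abs(W[mask]), zeta)
    keep = mask & (np.abs(W) > threshold)
    n_regrow = mask.sum() - keep.sum()
    empty = np.argwhere(~keep)
    chosen = empty[rng.choice(len(empty), size=n_regrow, replace=False)]
    new_mask = keep.copy()
    new_mask[chosen[:, 0], chosen[:, 1]] = True
    W_new = np.where(new_mask, W, 0.0)
    # Fresh connections start from small random weights.
    W_new[chosen[:, 0], chosen[:, 1]] = 0.01 * rng.standard_normal(n_regrow)
    return W_new, new_mask

# One evolution step on a toy 64x32 layer; in the full algorithm this step is
# interleaved with ordinary gradient training of the surviving weights.
mask = erdos_renyi_mask(64, 32, density=0.1)
W = np.where(mask, rng.standard_normal((64, 32)), 0.0)
W, mask = prune_and_regrow(W, mask, zeta=0.3)
print("connections after one evolution step:", mask.sum())
```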

  8. Quantum neural networks: Current status and prospects for development

    NASA Astrophysics Data System (ADS)

    Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.

    2014-11-01

    The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis by T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks were developed, and are now used in various training programs and computer games [29, 30]. The first practically realizable scaled hardware-implemented model of the quantum artificial neural network is obtained by D-Wave Systems, Inc. [33]. It is a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze possibilities and underlying principles of an alternative way to implement quantum neural networks on the basis of quantum dots. A possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is examined.

  9. Adaptive Neural Network Algorithm for Power Control in Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Masri Husam Fayiz, Al

    2017-01-01

    The aim of this paper is to design, test and evaluate a prototype of an adaptive neural network algorithm for the power controlling system of a nuclear power plant. The task of power control in nuclear reactors is one of the fundamental tasks in this field. Therefore, research is constantly conducted to improve the power reactor control process. Currently, in the Department of Automation in the National Research Nuclear University (NRNU) MEPhI, numerous studies are utilizing various methodologies of artificial intelligence (expert systems, neural networks, fuzzy systems and genetic algorithms) to enhance the performance, safety, efficiency and reliability of nuclear power plants. In particular, a study of an adaptive artificial intelligent power regulator in the control systems of nuclear power reactors is being undertaken to enhance performance and to minimize the output error of the Automatic Power Controller (APC) on the grounds of a multifunctional computer analyzer (simulator) of the Water-Water Energetic Reactor known as Vodo-Vodyanoi Energetichesky Reaktor (VVER) in Russian. In this paper, a block diagram of an adaptive reactor power controller was built on the basis of an intelligent control algorithm. When implementing intelligent neural network principles, it is possible to improve the quality and dynamics of any control system in accordance with the principles of adaptive control. It is common knowledge that an adaptive control system permits adjusting the controller’s parameters according to changes in the characteristics of the control object or external disturbances. In this project, it is demonstrated that a propitious option for an automatic power controller in nuclear power plants is a control system constructed on intelligent neural network algorithms.

  10. Optical alignment procedure utilizing neural networks combined with Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

    Adil, Fatime Zehra; Konukseven, Erhan İlhan; Balkan, Tuna; Adil, Ömer Faruk

    2017-05-01

    In the design of pilot helmets with night vision capability, a transparent visor is used so as not to limit or block the pilot's sight. The reflected image from the coated part of the visor must coincide with the physical human sight image seen through the nonreflecting regions of the visor. This makes the alignment of the visor halves critical. In essence, this is an alignment problem of two optical parts that are assembled together during the manufacturing process. A Shack-Hartmann wavefront sensor is commonly used for determining the misalignments through wavefront measurements, which are quantified in terms of the Zernike polynomials. Although the Zernike polynomials provide very useful feedback about the misalignments, the corrective actions are basically ad hoc. This stems from the fact that there exists no easy inverse relation between the misalignment measurements and the physical causes of the misalignments. This study aims to construct this inverse relation by making use of the expressive power of neural networks in capturing such complex relations. For this purpose, a neural network is designed and trained in MATLAB® to learn which types of misalignments result in which wavefront measurements, quantitatively given by Zernike polynomials. This way, manual and iterative alignment processes relying on trial and error are replaced by the trained guesses of a neural network, so the alignment process is reduced to applying the counteractions indicated by the identified misalignment causes. Such training requires data containing misalignment and measurement sets in fine detail, which is hard to obtain manually on a physical setup. For that reason, the optical setup is completely modeled in Zemax® software, and Zernike polynomials are generated for misalignments applied in small steps. The performance of the neural network was tested on the actual physical setup and found to be promising.
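
    A minimal sketch of the inverse-mapping idea, assuming a hypothetical linear-plus-noise forward model in place of the Zemax simulation: misalignment vectors are mapped to Zernike coefficients, and a small regression network is trained to recover the misalignments from the coefficients. All sizes and the sensitivity matrix are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical forward model standing in for the ray-tracing simulation: each
# misalignment vector (two tilts and a decenter, arbitrary units) maps to 10
# Zernike coefficients through an assumed sensitivity matrix plus sensor noise.
n_zernike, n_misalign = 10, 3
A = rng.standard_normal((n_misalign, n_zernike))

misalignments = rng.uniform(-1.0, 1.0, size=(3000, n_misalign))
zernikes = misalignments @ A + 0.01 * rng.standard_normal((3000, n_zernike))

Z_tr, Z_te, M_tr, M_te = train_test_split(zernikes, misalignments, test_size=0.2,
                                          random_state=0)

# The network learns the inverse relation: measured Zernike coefficients -> causes.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(Z_tr, M_tr)
pred = net.predict(Z_te)
print("RMS misalignment error:", np.sqrt(np.mean((pred - M_te) ** 2)))
```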

  11. Investigation on trophic state index by artificial neural networks (case study: Dez Dam of Iran)

    NASA Astrophysics Data System (ADS)

    Saghi, H.; Karimi, L.; Javid, A. H.

    2015-06-01

    Dam construction and surface runoff control are among the most common approaches to supplying the water needs of human societies. However, increasing social activity, and the resulting increase in environmental pollutants, deteriorates water quality in dam reservoirs and can intensify the eutrophication process. Reservoir water quality is therefore now one of the key factors in reservoir operation and water quality management, and maintaining the quality of the stored water and tracking its changes over time are constant concerns for water authorities. Traditionally, empirical trophic state indices of dam reservoirs, usually defined from changes in the concentration of driving factors (nutrients) and their consequences (an increase in chlorophyll a), have been used as an efficient tool for characterizing reservoir water quality. In recent years, modeling techniques such as artificial neural networks have enhanced the prediction capability and accuracy of these studies. In this study, artificial neural networks were applied to analyze the eutrophication process in the Dez Dam reservoir in Iran. A feed-forward neural network with one input layer, one hidden layer and one output layer was applied using the MATLAB neural network toolbox for trophic state index (TSI) analysis in the Dez Dam reservoir. The network inputs are parameters that drive eutrophication (nitrogen-cycle and phosphorus-cycle parameters) and parameters changed by eutrophication (Chl a, SD, DO); the output is the TSI. Based on the estimated modified Carlson trophic state index, the Dez Dam reservoir is considered to be eutrophic from early July to mid-November and becomes mesotrophic as the temperature decreases, so a decrease in reservoir water quality during the warm seasons is to be expected. The results indicate that an artificial neural network (ANN) is a suitable tool for modeling reservoir water quality and the increase and decrease of nutrients during eutrophication.
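
    A minimal sketch of such a feed-forward TSI model, written in Python rather than the MATLAB toolbox used in the study; the water-quality values are synthetic stand-ins, and the training target is generated from the standard Carlson chlorophyll-a formula TSI = 9.81 ln(Chl a) + 30.6 purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-ins for the measured inputs (total N, total P, Chl a, Secchi depth,
# dissolved oxygen); the real study used monitoring data from the Dez Dam reservoir.
n = 400
TN   = rng.uniform(0.2, 3.0, n)      # mg/L
TP   = rng.uniform(0.01, 0.3, n)     # mg/L
chla = rng.uniform(1, 60, n)         # ug/L
SD   = rng.uniform(0.3, 5.0, n)      # m
DO   = rng.uniform(2, 12, n)         # mg/L

# Carlson-style TSI from chlorophyll a, used here only to create a training target.
TSI = 9.81 * np.log(chla) + 30.6

X = np.column_stack([TN, TP, chla, SD, DO])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, TSI)
print("example TSI prediction:", model.predict(X[:1])[0])
```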

  12. Neural network approaches to capture temporal information

    NASA Astrophysics Data System (ADS)

    van Veelen, Martijn; Nijhuis, Jos; Spaanenburg, Ben

    2000-05-01

    The automated design and construction of neural networks is receiving growing attention from the neural networks community. Both the growing availability of computing power and the development of mathematical and probabilistic theory have had a significant impact on the design and modelling approaches of neural networks. This impact is most apparent in the application of neural networks to time series prediction. In this paper, we give our views on past, contemporary and future design and modelling approaches to neural forecasting.

  13. The role of symmetry in neural networks and their Laplacian spectra.

    PubMed

    de Lange, Siemon C; van den Heuvel, Martijn P; de Reus, Marcel A

    2016-11-01

    Human and animal nervous systems constitute complexly wired networks that form the infrastructure for neural processing and integration of information. The organization of these neural networks can be analyzed using the so-called Laplacian spectrum, providing a mathematical tool to produce systems-level network fingerprints. In this article, we examine a characteristic central peak in the spectrum of neural networks, including anatomical brain network maps of the mouse, cat and macaque, as well as anatomical and functional network maps of human brain connectivity. We link the occurrence of this central peak to the level of symmetry in neural networks, an intriguing aspect of network organization resulting from network elements that exhibit similar wiring patterns. Specifically, we propose a measure to capture the global level of symmetry of a network and show that, for both empirical networks and network models, the height of the main peak in the Laplacian spectrum is strongly related to node symmetry in the underlying network. Moreover, examination of spectra of duplication-based model networks shows that neural spectra are best approximated using a trade-off between duplication and diversification. Taken together, our results facilitate a better understanding of neural network spectra and the importance of symmetry in neural networks. Copyright © 2016 Elsevier Inc. All rights reserved.
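
    The spectral quantity discussed above is easy to reproduce on a toy graph. The sketch below builds a random graph, duplicates the wiring of a few nodes (one simple source of symmetry), and counts normalized-Laplacian eigenvalues at 1, which is where the central peak sits; the graph sizes and probabilities are arbitrary choices.

```python
import numpy as np
import networkx as nx

# Duplicating nodes (giving them identical wiring) is one source of network symmetry;
# each non-adjacent duplicate pair contributes an eigenvalue of exactly 1 to the
# normalized Laplacian spectrum.
G = nx.erdos_renyi_graph(60, 0.1, seed=1)
for node in list(G.nodes())[:10]:
    twin = G.number_of_nodes()
    G.add_node(twin)
    G.add_edges_from((twin, nbr) for nbr in list(G.neighbors(node)))   # copy the wiring

L = nx.normalized_laplacian_matrix(G).toarray()
eigenvalues = np.linalg.eigvalsh(L)

central_peak = np.sum(np.isclose(eigenvalues, 1.0, atol=1e-8))
print("eigenvalues close to 1 (height of the central peak):", central_peak)
```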

  14. The Prediction of the Risk Level of Pulmonary Embolism and Deep Vein Thrombosis through Artificial Neural Network

    PubMed Central

    Agharezaei, Laleh; Agharezaei, Zhila; Nemati, Ali; Bahaadinbeigy, Kambiz; Keynia, Farshid; Baneshi, Mohammad Reza; Iranpour, Abedin; Agharezaei, Moslem

    2016-01-01

    Background: Venous thromboembolism is a common cause of mortality among hospitalized patients and yet it is preventable through detecting the precipitating factors and a prompt diagnosis by specialists. The present study has been carried out in order to assist specialists in the diagnosis and prediction of the risk level of pulmonary embolism in patients, by means of an artificial neural network. Method: Thirty-one risk factors were used in this study to evaluate the conditions of 294 patients hospitalized in 3 educational hospitals affiliated with Kerman University of Medical Sciences. Two types of artificial neural networks, namely Feed-Forward Back Propagation and Elman Back Propagation, were compared in this study. Results: Through an optimized artificial neural network model, an accuracy and risk level index of 93.23 percent was achieved and, subsequently, the results have been compared with those obtained from the perfusion scan of the patients. 86.61 percent of the high risk patients diagnosed through the perfusion scan diagnostic method were also diagnosed correctly through the method proposed in the present study. Conclusions: The results of this study can be a good resource for physicians, medical assistants, and healthcare staff to diagnose high risk patients more precisely and prevent mortalities. Additionally, expenses and other unnecessary diagnostic methods such as perfusion scans can be efficiently reduced. PMID:28077893

  15. The Prediction of the Risk Level of Pulmonary Embolism and Deep Vein Thrombosis through Artificial Neural Network.

    PubMed

    Agharezaei, Laleh; Agharezaei, Zhila; Nemati, Ali; Bahaadinbeigy, Kambiz; Keynia, Farshid; Baneshi, Mohammad Reza; Iranpour, Abedin; Agharezaei, Moslem

    2016-10-01

    Venous thromboembolism is a common cause of mortality among hospitalized patients and yet it is preventable through detecting the precipitating factors and a prompt diagnosis by specialists. The present study has been carried out in order to assist specialists in the diagnosis and prediction of the risk level of pulmonary embolism in patients, by means of an artificial neural network. Thirty-one risk factors were used in this study to evaluate the conditions of 294 patients hospitalized in 3 educational hospitals affiliated with Kerman University of Medical Sciences. Two types of artificial neural networks, namely Feed-Forward Back Propagation and Elman Back Propagation, were compared in this study. Through an optimized artificial neural network model, an accuracy and risk level index of 93.23 percent was achieved and, subsequently, the results have been compared with those obtained from the perfusion scan of the patients. 86.61 percent of the high risk patients diagnosed through the perfusion scan diagnostic method were also diagnosed correctly through the method proposed in the present study. The results of this study can be a good resource for physicians, medical assistants, and healthcare staff to diagnose high risk patients more precisely and prevent mortalities. Additionally, expenses and other unnecessary diagnostic methods such as perfusion scans can be efficiently reduced.

  16. A heuristic neural network initialization scheme for modeling nonlinear functions in engineering mechanics: continuous development

    NASA Astrophysics Data System (ADS)

    Pei, Jin-Song; Mai, Eric C.

    2007-04-01

    This paper introduces a continuing effort towards the development of a heuristic initialization methodology for constructing multilayer feedforward neural networks to model nonlinear functions. In this and previous studies that this work is built upon, including the one presented at SPIE 2006, the authors do not presume to provide a universal method to approximate arbitrary functions, rather the focus is given to the development of a rational and unambiguous initialization procedure that applies to the approximation of nonlinear functions in the specific domain of engineering mechanics. The applications of this exploratory work can be numerous including those associated with potential correlation and interpretation of the inner workings of neural networks, such as damage detection. The goal of this study is fulfilled by utilizing the governing physics and mathematics of nonlinear functions and the strength of the sigmoidal basis function. A step-by-step graphical procedure utilizing a few neural network prototypes as "templates" to approximate commonly seen memoryless nonlinear functions of one or two variables is further developed in this study. Decomposition of complex nonlinear functions into a summation of some simpler nonlinear functions is utilized to exploit this prototype-based initialization methodology. Training examples are presented to demonstrate the rationality and efficiency of the proposed methodology when compared with the popular Nguyen-Widrow initialization algorithm. Future work is also identified.

  17. Artificial neural network associated to UV/Vis spectroscopy for monitoring bioreactions in biopharmaceutical processes.

    PubMed

    Takahashi, Maria Beatriz; Leme, Jaci; Caricati, Celso Pereira; Tonso, Aldo; Fernández Núñez, Eutimio Gustavo; Rocha, José Celso

    2015-06-01

    Currently, mammalian cells are the most utilized hosts for biopharmaceutical production. The culture media for these cell lines commonly include a pH indicator in their composition. Spectroscopic techniques are used for biopharmaceutical process monitoring; among them, UV-Vis spectroscopy has found few applications. This work aimed to define an artificial neural network architecture and fit its parameters to predict some nutrients and metabolites, as well as viable cell concentration, based on UV-Vis spectral data of a mammalian cell bioprocess using phenol red in the culture medium. The BHK-21 cell line was used as a mammalian cell model. Off-line spectra of supernatant samples taken from batches performed at different dissolved oxygen concentrations in two bioreactor configurations and with two pH control strategies were used to define two artificial neural networks. According to absolute errors, glutamine (0.13 ± 0.14 mM), glutamate (0.02 ± 0.02 mM), glucose (1.11 ± 1.70 mM), lactate (0.84 ± 0.68 mM) and viable cell concentrations (1.89 × 10(5) ± 1.90 × 10(5) cell/mL) were suitably predicted. The prediction error averages for the monitored variables were lower than those previously reported using different spectroscopic techniques in combination with partial least squares or artificial neural networks. The present work enables UV-Vis sensor development and decreases costs related to nutrient and metabolite quantification.

  18. Synchronization Control of Neural Networks With State-Dependent Coefficient Matrices.

    PubMed

    Zhang, Junfeng; Zhao, Xudong; Huang, Jun

    2016-11-01

    This brief is concerned with synchronization control of a class of neural networks with state-dependent coefficient matrices. Being different from the existing drive-response neural networks in the literature, a novel model of drive-response neural networks is established. The concepts of uniformly ultimately bounded (UUB) synchronization and convex hull Lyapunov function are introduced. Then, by using the convex hull Lyapunov function approach, the UUB synchronization design of the drive-response neural networks is proposed, and a delay-independent control law guaranteeing the bounded synchronization of the neural networks is constructed. All present conditions are formulated in terms of bilinear matrix inequalities. By comparison, it is shown that the neural networks obtained in this brief are less conservative than those in the literature, and the bounded synchronization is suitable for the novel drive-response neural networks. Finally, an illustrative example is given to verify the validity of the obtained results.

  19. The Laplacian spectrum of neural networks

    PubMed Central

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286

  20. Studying emotion theories through connectivity analysis: Evidence from generalized psychophysiological interactions and graph theory.

    PubMed

    Huang, Yun-An; Jastorff, Jan; Van den Stock, Jan; Van de Vliet, Laura; Dupont, Patrick; Vandenbulcke, Mathieu

    2018-05-15

    Psychological construction models of emotion state that emotions are variable concepts constructed by fundamental psychological processes, whereas according to basic emotion theory, emotions cannot be divided into more fundamental units and each basic emotion is represented by a unique and innate neural circuitry. In a previous study, we found evidence for the psychological construction account by showing that several brain regions were commonly activated when perceiving different emotions (i.e. a general emotion network). Moreover, this set of brain regions included areas associated with core affect, conceptualization and executive control, as predicted by psychological construction models. Here we investigate directed functional brain connectivity in the same dataset to address two questions: 1) is there a common pathway within the general emotion network for the perception of different emotions and 2) if so, does this common pathway contain information to distinguish between different emotions? We used generalized psychophysiological interactions and information flow indices to examine the connectivity within the general emotion network. The results revealed a general emotion pathway that connects neural nodes involved in core affect, conceptualization, language and executive control. Perception of different emotions could not be accurately classified based on the connectivity patterns from the nodes of the general emotion pathway. Successful classification was achieved when connections outside the general emotion pathway were included. We propose that the general emotion pathway functions as a common pathway within the general emotion network and is involved in shared basic psychological processes across emotions. However, additional connections within the general emotion network are required to classify different emotions, consistent with a constructionist account. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Introduction to Neural Networks.

    DTIC Science & Technology

    1992-03-01

    parallel processing of information that can greatly reduce the time required to perform operations which are needed in pattern recognition. Keywords: Neural network, Artificial neural network, Neural net, ANN.

  2. Reinforced Adversarial Neural Computer for de Novo Molecular Design.

    PubMed

    Putin, Evgeny; Asadulaev, Arip; Ivanenkov, Yan; Aladinskiy, Vladimir; Sanchez-Lengeling, Benjamin; Aspuru-Guzik, Alán; Zhavoronkov, Alex

    2018-06-12

    In silico modeling is a crucial milestone in modern drug design and development. Although computer-aided approaches in this field are well-studied, the application of deep learning methods in this research area is still in its early stages. In this work, we present an original deep neural network (DNN) architecture named RANC (Reinforced Adversarial Neural Computer) for the de novo design of novel small-molecule organic structures based on the generative adversarial network (GAN) paradigm and reinforcement learning (RL). As a generator, RANC uses a differentiable neural computer (DNC), a category of neural networks with increased generation capabilities due to the addition of an explicit memory bank, which can mitigate common problems found in adversarial settings. The comparative results have shown that RANC trained on the SMILES string representation of the molecules outperforms its first DNN-based counterpart ORGANIC by several metrics relevant to drug discovery: the number of unique structures, passing medicinal chemistry filters (MCFs), Muegge criteria, and high QED scores. RANC is able to generate structures that match the distributions of the key chemical features/descriptors (e.g., MW, logP, TPSA) and lengths of the SMILES strings in the training data set. Therefore, RANC can be reasonably regarded as a promising starting point to develop novel molecules with activity against different biological targets or pathways. In addition, this approach allows scientists to save time and covers a broad chemical space populated with novel and diverse compounds.

  3. Learning control of inverted pendulum system by neural network driven fuzzy reasoning: The learning function of NN-driven fuzzy reasoning under changes of reasoning environment

    NASA Technical Reports Server (NTRS)

    Hayashi, Isao; Nomura, Hiroyoshi; Wakami, Noboru

    1991-01-01

    Whereas conventional fuzzy reasoning is associated with tuning problems, namely the lack of systematic membership function and inference rule design, a neural network driven fuzzy reasoning (NDF) capable of determining membership functions by neural network is formulated. In the antecedent parts of the neural network driven fuzzy reasoning, the optimum membership function is determined by a neural network, while in the consequent parts, the amount of control for each rule is determined by several other neural networks. By introducing an algorithm of neural network driven fuzzy reasoning, inference rules for making a pendulum stand up from its lowest suspended point are determined to verify the usefulness of the algorithm.

  4. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Equalization of Synaptic Efficacy by Synchronous Neural Activity

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won; Choi, M. Y.

    2007-11-01

    It is commonly believed that spike timings of a postsynaptic neuron tend to follow those of the presynaptic neuron. Such orthodromic firing may, however, cause a conflict with the functional integrity of complex neuronal networks due to asymmetric temporal Hebbian plasticity. We argue that reversed spike timing in a synapse is a typical phenomenon in the cortex, which has a stabilizing effect on the neuronal network structure. We further demonstrate how the firing causality in a synapse is perturbed by synchronous neural activity and how the equilibrium property of spike-timing dependent plasticity is determined principally by the degree of synchronization. Remarkably, even noise-induced activity and synchrony of neurons can result in equalization of synaptic efficacy.

  6. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to determine a complicated non-linear model function. However, it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  7. Augmented Lagrange Programming Neural Network for Localization Using Time-Difference-of-Arrival Measurements.

    PubMed

    Han, Zifa; Leung, Chi Sing; So, Hing Cheung; Constantinides, Anthony George

    2017-08-15

    A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). As each TDOA measurement defines a hyperbola, it is not straightforward to compute the mobile source position due to the nonlinear relationship in the measurements. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework to solve nonlinear constrained optimization problems, for the TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results are included to evaluate the localization accuracy of the LPNN scheme by comparing with the state-of-the-art methods and the optimality benchmark of Cramér-Rao lower bound.
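
    The LPNN formulation itself is not reproduced here; to make the measurement model concrete, the sketch below solves the same TDOA model with a conventional nonlinear least-squares routine. The sensor geometry, noise level and propagation-speed scaling are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Sensor positions (meters) and true source location; values are illustrative.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])

def range_differences(p):
    # TDOA (scaled by the propagation speed) relative to the first sensor:
    # each measurement constrains the source to a hyperbola.
    d = np.linalg.norm(sensors - p, axis=1)
    return d[1:] - d[0]

rng = np.random.default_rng(0)
measured = range_differences(source) + 0.05 * rng.standard_normal(3)   # noisy TDOAs

# Conventional nonlinear least-squares fit of the same model the LPNN would solve.
fit = least_squares(lambda p: range_differences(p) - measured, x0=np.array([5.0, 5.0]))
print("estimated source position:", fit.x)
```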

  8. Development of human locomotion.

    PubMed

    Lacquaniti, Francesco; Ivanenko, Yuri P; Zago, Myrka

    2012-10-01

    Neural control of locomotion in human adults involves the generation of a small set of basic patterned commands directed to the leg muscles. The commands are generated sequentially in time during each step by neural networks located in the spinal cord, called Central Pattern Generators. This review outlines recent advances in understanding how motor commands are expressed at different stages of human development. Similar commands are found in several other vertebrates, indicating that locomotion development follows common principles of organization of the control networks. Movements show a high degree of flexibility at all stages of development, which is instrumental for learning and exploration of variable interactions with the environment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. 3-D components of a biological neural network visualized in computer generated imagery. I - Macular receptive field organization

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cutler, Lynn; Meyer, Glenn; Lam, Tony; Vaziri, Parshaw

    1990-01-01

    Computer-assisted, 3-dimensional reconstructions of macular receptive fields and of their linkages into a neural network have revealed new information about macular functional organization. Both type I and type II hair cells are included in the receptive fields. The fields are rounded, oblong, or elongated, but gradations between categories are common. Cell polarizations are divergent. Morphologically, each calyx of oblong and elongated fields appears to be an information processing site. Intrinsic modulation of information processing is extensive and varies with the kind of field. Each reconstructed field differs in detail from every other, suggesting that an element of randomness is introduced developmentally and contributes to endorgan adaptability.

  10. Artificial intelligence models for predicting iron deficiency anemia and iron serum level based on accessible laboratory data.

    PubMed

    Azarkhish, Iman; Raoufy, Mohammad Reza; Gharibzadeh, Shahriar

    2012-06-01

    Iron deficiency anemia (IDA) is the most common nutritional deficiency worldwide. Measuring serum iron is time consuming, expensive and not available in most hospitals. In this study, based on four accessible laboratory parameters (MCV, MCH, MCHC, Hb/RBC), we developed an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) to diagnose IDA and to predict serum iron level. Our results show that the neural network analysis is superior to ANFIS and logistic regression models in diagnosing IDA. Moreover, the results show that the ANN is likely to provide a test that predicts serum iron levels with high accuracy and acceptable precision.
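
    A minimal sketch of a comparable classifier, assuming synthetic blood-count indices in place of the patient records; the distributions, labels and network size are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic complete-blood-count indices standing in for patient records:
# columns are MCV (fL), MCH (pg), MCHC (g/dL), Hb/RBC ratio.
n = 500
ida = rng.integers(0, 2, n)                       # 1 = iron deficiency anemia (label)
MCV   = np.where(ida, rng.normal(70, 6, n), rng.normal(90, 5, n))
MCH   = np.where(ida, rng.normal(22, 3, n), rng.normal(30, 2, n))
MCHC  = np.where(ida, rng.normal(30, 2, n), rng.normal(34, 1, n))
HbRBC = np.where(ida, rng.normal(2.3, 0.3, n), rng.normal(3.0, 0.2, n))
X = np.column_stack([MCV, MCH, MCHC, HbRBC])

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
print("cross-validated accuracy:", cross_val_score(clf, X, ida, cv=5).mean())
```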

  11. The mechanics of state dependent neural correlations

    PubMed Central

    Doiron, Brent; Litwin-Kumar, Ashok; Rosenbaum, Robert; Ocker, Gabriel K.; Josić, Krešimir

    2016-01-01

    Simultaneous recordings from large neural populations are becoming increasingly common. An important feature of the population activity are the trial-to-trial correlated fluctuations of the spike train outputs of recorded neuron pairs. Like the firing rate of single neurons, correlated activity can be modulated by a number of factors, from changes in arousal and attentional state to learning and task engagement. However, the network mechanisms that underlie these changes are not fully understood. We review recent theoretical results that identify three separate biophysical mechanisms that modulate spike train correlations: changes in input correlations, internal fluctuations, and the transfer function of single neurons. We first examine these mechanisms in feedforward pathways, and then show how the same approach can explain the modulation of correlations in recurrent networks. Such mechanistic constraints on the modulation of population activity will be important in statistical analyses of high dimensional neural data. PMID:26906505

  12. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    PubMed

    Nitta, Tohru

    2017-10-01

    We present a theoretical analysis of singular points of artificial deep neural networks and, as a result, provide deep neural network models that have no critical points introduced by a hierarchical structure. Such deep neural network models are considered well suited to gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks to have no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called the avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called the avoidant neural network).

  13. The effect of the neural activity on topological properties of growing neural networks.

    PubMed

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure in cortical networks defines how information is transmitted and processed, and it is a source of the complex spatiotemporal patterns observed during network development; the creation and deletion of connections continues throughout the life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrated the neural network growth process from disconnected neurons to fully connected networks. To quantify the influence of network activity on topological properties, we compared the model with a random growth network that does not depend on activity. Using methods from random graph theory to analyze the connection structure, it is shown that growth in neural networks results in the formation of a well-known "small-world" network.
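
    The "small-world" claim can be checked with standard graph metrics, as sketched below; since the activity-dependent growth model is not reproduced here, a Watts-Strogatz graph stands in for the grown network, and the sizes are arbitrary.

```python
import networkx as nx

# Compare a grown/structured network with a size- and density-matched random graph:
# a "small-world" network combines high clustering with short path lengths.
grown = nx.watts_strogatz_graph(200, k=6, p=0.1, seed=0)   # stand-in for the grown network
random_ref = nx.gnm_random_graph(200, grown.number_of_edges(), seed=0)

for name, g in [("grown", grown), ("random", random_ref)]:
    # Path length is only defined on a connected component, so use the largest one.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(name,
          "clustering:", round(nx.average_clustering(g), 3),
          "avg path length:", round(nx.average_shortest_path_length(giant), 3))
```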

  14. LavaNet—Neural network development environment in a general mine planning package

    NASA Astrophysics Data System (ADS)

    Kapageridis, Ioannis Konstantinou; Triantafyllou, A. G.

    2011-04-01

    LavaNet is a series of scripts written in Perl that gives access to a neural network simulation environment inside a general mine planning package. A well known and very popular neural network development environment, the Stuttgart Neural Network Simulator, is used as the base for the development of neural networks. LavaNet runs inside VULCAN™—a complete mine planning package with advanced database, modelling and visualisation capabilities. LavaNet takes advantage of VULCAN's Perl based scripting environment, Lava, to bring all the benefits of neural network development and application to geologists, mining engineers and other users of the specific mine planning package. LavaNet enables easy development of neural network training data sets using information from any of the data and model structures available, such as block models and drillhole databases. Neural networks can be trained inside VULCAN™ and the results used to generate new models that can be visualised in 3D. Direct comparison of developed neural network models with conventional and geostatistical techniques is now possible within the same mine planning software package. LavaNet supports Radial Basis Function networks, Multi-Layer Perceptrons and Self-Organised Maps.

  15. On the improvement of neural cryptography using erroneous transmitted information with error prediction.

    PubMed

    Allam, Ahmed M; Abbas, Hazem M

    2010-12-01

    Neural cryptography deals with the problem of "key exchange" between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits) and the key between the two communicating parties is eventually represented in the final learned weights, when the two networks are said to be synchronized. Security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process. Therefore, diminishing the probability of such a threat improves the reliability of exchanging the output bits through a public channel. The synchronization with feedback algorithm is one of the existing algorithms that enhances the security of neural cryptography. This paper proposes three new algorithms to enhance the mutual learning process. They mainly depend on disrupting the attacker confidence in the exchanged outputs and input patterns during training. The first algorithm is called "Do not Trust My Partner" (DTMP), which relies on one party sending erroneous output bits, with the other party being capable of predicting and correcting this error. The second algorithm is called "Synchronization with Common Secret Feedback" (SCSFB), where inputs are kept partially secret and the attacker has to train its network on input patterns that are different from the training sets used by the communicating parties. The third algorithm is a hybrid technique combining the features of the DTMP and SCSFB. The proposed approaches are shown to outperform the synchronization with feedback algorithm in the time needed for the parties to synchronize.
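
    The abstract does not restate the network model; the standard construction in neural cryptography is the tree parity machine with hebbian mutual learning, sketched below without the proposed DTMP/SCSFB protections (K, N, L and the stopping rule are conventional illustrative choices).

```python
import numpy as np

# Minimal tree parity machine (TPM) mutual-learning sketch.
K, N, L = 3, 10, 3              # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(0)

def output(W, X):
    sigma = np.sign(np.sum(W * X, axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

def update(W, X, sigma, tau):
    # Hebbian rule: only hidden units that agree with the network output move.
    for k in range(K):
        if sigma[k] == tau:
            W[k] = np.clip(W[k] + tau * X[k], -L, L)

A = rng.integers(-L, L + 1, size=(K, N))
B = rng.integers(-L, L + 1, size=(K, N))

steps = 0
while not np.array_equal(A, B):
    X = rng.choice([-1, 1], size=(K, N))        # public random input
    sA, tA = output(A, X)
    sB, tB = output(B, X)
    if tA == tB:                                # outputs exchanged over the public channel
        update(A, X, sA, tA)
        update(B, X, sB, tB)
    steps += 1

print("synchronized after", steps, "exchanged outputs; shared key length:", A.size)
```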

  16. Fully convolutional neural networks for polyp segmentation in colonoscopy

    NASA Astrophysics Data System (ADS)

    Brandao, Patrick; Mazomenos, Evangelos; Ciuti, Gastone; Caliò, Renato; Bianchi, Federico; Menciassi, Arianna; Dario, Paolo; Koulaouzidis, Anastasios; Arezzo, Alberto; Stoyanov, Danail

    2017-03-01

    Colorectal cancer (CRC) is one of the most common and deadliest forms of cancer, accounting for nearly 10% of all forms of cancer in the world. Even though colonoscopy is considered the most effective method for screening and diagnosis, the success of the procedure is highly dependent on the operator skills and level of hand-eye coordination. In this work, we propose to adapt fully convolutional neural networks (FCNs) to identify and segment polyps in colonoscopy images. We converted three established networks into fully convolutional architectures and fine-tuned their learned representations to the polyp segmentation task. We validate our framework on the 2015 MICCAI polyp detection challenge dataset, surpassing the state-of-the-art in automated polyp detection. Our method obtained high segmentation accuracy and a detection precision and recall of 73.61% and 86.31%, respectively.

  17. Creative-Dynamics Approach To Neural Intelligence

    NASA Technical Reports Server (NTRS)

    Zak, Michail A.

    1992-01-01

    Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.

  18. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory.

    PubMed

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high-low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    PubMed Central

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  20. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  1. Modeling Behavioral Experiment Interaction and Environmental Stimuli for a Synthetic C. elegans

    PubMed Central

    Mujika, Andoni; Leškovský, Peter; Álvarez, Roberto; Otaduy, Miguel A.; Epelde, Gorka

    2017-01-01

    This paper focuses on the simulation of the neural network of the Caenorhabditis elegans living organism, and more specifically on the modeling of the stimuli applied within behavioral experiments and the stimuli that are generated in the interaction of the C. elegans with the environment. To the best of our knowledge, all efforts regarding stimuli modeling for the C. elegans are focused on a single type of stimulus, which is usually tested with a limited subnetwork of the C. elegans neural system. In this paper, we follow a different approach where we model a wide range of different stimuli, with more flexible neural network configurations and simulations in mind. Moreover, we focus on the stimuli sensation by different types of sensory organs or various sensory principles of the neurons. As part of this work, the most common stimuli involved in behavioral assays have been modeled. This includes models for mechanical, thermal, chemical, electrical and light stimuli, and for proprioception-related self-sensed information exchange with the neural network. The developed models have been implemented and tested with the hardware-based Si elegans simulation platform. PMID:29276485

  2. Physiological and neural correlates of worry and rumination: Support for the contrast avoidance model of worry.

    PubMed

    Steinfurth, Elisa C K; Alius, Manuela G; Wendt, Julia; Hamm, Alfons O

    2017-02-01

    The current experiments tested neural and physiological correlates of worry and rumination in comparison to thinking about neutral events. According to the avoidance model, which states that worry is a strategy to reduce intense emotions, physiological and neurobiological activity during worried thinking should not differ from activation during neutral thinking. According to the contrast avoidance model, which states that worry is a strategy to reduce abrupt shifts of emotions, activity should be increased. To test these competing models, we induced worry and neutral thinking in healthy participants using personal topics. A rumination condition was added to investigate the specificity of changes induced by the mental process. Two experiments were conducted assessing the effects on different response levels: (1) neural activation using fMRI, and (2) physiological response mobilization using startle and autonomic measures. During worry, participants showed a potentiated startle response and BOLD activity indicative of emotional network activation. These data partly support the contrast avoidance model of worry. Both mental processes showed elevated activity in a common network referred to as the default network, indicating self-referential activity. © 2016 Society for Psychophysiological Research.

  3. Neural network to diagnose lining condition

    NASA Astrophysics Data System (ADS)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at iron and steel works. The authors describe the neural network structure and the specialized software that were designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented, and the authors note their low learning and classification errors.

  4. Neural correlates of focused attention during a brief mindfulness induction.

    PubMed

    Dickenson, Janna; Berkman, Elliot T; Arch, Joanna; Lieberman, Matthew D

    2013-01-01

    Mindfulness meditation, the practice of attending to present moment experience and allowing emotions and thoughts to pass without judgment, has been shown to be beneficial in clinical populations across diverse outcomes. However, the basic neural mechanisms by which mindfulness operates and relates to everyday outcomes in novices remain unexplored. Focused attention is a common mindfulness induction in which practitioners focus on specific physical sensations, typically the breath. The present study explores the neural mechanisms of this common mindfulness induction among novice practitioners. Healthy novice participants completed a brief task with both mindful attention [focused breathing (FB)] and control (unfocused attention) conditions during functional magnetic resonance imaging (fMRI). Relative to the control condition, FB recruited an attention network including parietal and prefrontal structures, and trait-level mindfulness during this comparison also correlated with parietal activation. Results suggest that the neural mechanisms of a brief mindfulness induction are related to attention processes in novices and that trait mindfulness positively moderates this activation.

  5. [Measurement and performance analysis of functional neural network].

    PubMed

    Li, Shan; Liu, Xinyu; Chen, Yan; Wan, Hong

    2018-04-01

    The measurement of networks is one of the important research problems in resolving the information processing mechanisms of neuronal populations using complex network theory. For the problem of quantitatively measuring a functional neural network, the relation between the measurement indexes, i.e. the clustering coefficient, the global efficiency, the characteristic path length and the transitivity, and the network topology was analyzed. Then, a spike-based functional neural network was established, and the simulation results showed that the measured network could represent the original neural connections among neurons. On the basis of this work, the coding of the functional neural network in the nidopallium caudolaterale (NCL) for the pigeon's motion behaviors was studied. We found that the NCL functional neural network effectively encoded the motion behaviors of the pigeon, and that there were significant differences in the four indexes among left-turning, forward and right-turning behaviors. Overall, the proposed method for establishing a spike-based functional neural network is feasible, and it is an effective tool for parsing the brain's information processing mechanisms.

  6. Neural network error correction for solving coupled ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
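
    The underlying idea, a network that learns the local error of a numerical integrator so that a corrected step lands closer to the true solution, can be sketched on a toy problem. The example below generates one-step RK4 errors for a harmonic oscillator with a known exact solution and fits them with scikit-learn's MLPRegressor; it only illustrates the data-generation and correction pattern, not the NASA neural network programs, the MD model, or the faster training method mentioned in the entry.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def f(state):
            """Harmonic oscillator y'' = -y written as a first-order system."""
            y, v = state
            return np.array([v, -y])

        def rk4_step(state, h):
            k1 = f(state)
            k2 = f(state + 0.5 * h * k1)
            k3 = f(state + 0.5 * h * k2)
            k4 = f(state + h * k3)
            return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        def exact_step(state, h):
            """Exact propagation of the oscillator over one step of size h."""
            y, v = state
            c, s = np.cos(h), np.sin(h)
            return np.array([y * c + v * s, -y * s + v * c])

        # Training data: (state, step size) -> local error of one RK4 step.
        rng = np.random.default_rng(1)
        X, T = [], []
        for _ in range(2000):
            state = rng.uniform(-1, 1, size=2)
            h = rng.uniform(0.2, 1.0)
            X.append([state[0], state[1], h])
            T.append(exact_step(state, h) - rk4_step(state, h))

        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
        net.fit(np.array(X), np.array(T))

        # Corrected step = RK4 step + predicted error.
        state, h = np.array([1.0, 0.0]), 0.8
        corrected = rk4_step(state, h) + net.predict([[state[0], state[1], h]])[0]
        print("exact:", exact_step(state, h), "corrected:", corrected)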

  7. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  8. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable to the noise-constrained estimate. Because the noise-constrained estimate has a robust performance against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, having a low-dimensional model feature, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm produces good performance with fast computation and noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
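
    The enhancement pipeline couples AR parameter estimation (done in the paper by the proposed recurrent network) with classical Kalman filtering. The sketch below covers only the second, standard step and assumes the AR coefficients and noise variances are already known; the noise-constrained recurrent network estimate itself is not reproduced, and the AR(2) "speech-like" signal is synthetic.

        import numpy as np

        def kalman_ar_denoise(y, a, q, r):
            """Kalman-filter a noisy signal y, assuming the clean signal follows an
            AR(p) model s_t = a1*s_{t-1} + ... + ap*s_{t-p} + w_t (var(w) = q) and
            the observation is y_t = s_t + v_t (var(v) = r)."""
            p = len(a)
            F = np.zeros((p, p)); F[0, :] = a; F[1:, :-1] = np.eye(p - 1)
            H = np.zeros((1, p)); H[0, 0] = 1.0
            Q = np.zeros((p, p)); Q[0, 0] = q
            x, P = np.zeros((p, 1)), np.eye(p)
            out = np.zeros_like(y)
            for t, obs in enumerate(y):
                x = F @ x                             # predict
                P = F @ P @ F.T + Q
                S = H @ P @ H.T + r                   # update with the noisy sample
                K = P @ H.T / S
                x = x + K * (obs - (H @ x)[0, 0])
                P = (np.eye(p) - K @ H) @ P
                out[t] = x[0, 0]
            return out

        # Toy example: an AR(2) signal buried in white noise.
        rng = np.random.default_rng(0)
        a, n = np.array([1.6, -0.8]), 500
        s = np.zeros(n)
        for t in range(2, n):
            s[t] = a[0] * s[t - 1] + a[1] * s[t - 2] + 0.1 * rng.standard_normal()
        y = s + 0.3 * rng.standard_normal(n)
        print("noisy MSE:   ", np.mean((y - s) ** 2))
        print("filtered MSE:", np.mean((kalman_ar_denoise(y, a, 0.01, 0.09) - s) ** 2))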

  9. Stable architectures for deep neural networks

    NASA Astrophysics Data System (ADS)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
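
    A minimal NumPy illustration of the ODE view described above: each residual layer is treated as one forward Euler step of dY/dt = tanh(KY + b), and K is made antisymmetric (minus a small damping term), one of the stabilization strategies discussed in this line of work, so that nearby inputs stay nearby even after many layers. The depth, width, step size and random weights are arbitrary, and no training is shown.

        import numpy as np

        def antisymmetric(W, gamma=0.01):
            """Antisymmetric layer matrix minus a small diagonal damping term, keeping
            the eigenvalues of the forward dynamics close to the imaginary axis."""
            return 0.5 * (W - W.T) - gamma * np.eye(W.shape[0])

        def forward(Y0, weights, biases, h=0.1):
            """Forward Euler discretisation of dY/dt = tanh(K Y + b)."""
            Y = Y0
            for W, b in zip(weights, biases):
                Y = Y + h * np.tanh(antisymmetric(W) @ Y + b)   # one layer = one Euler step
            return Y

        rng = np.random.default_rng(0)
        d, depth = 8, 100                              # feature dimension, number of layers
        weights = [rng.standard_normal((d, d)) for _ in range(depth)]
        biases = [rng.standard_normal((d, 1)) for _ in range(depth)]

        Y0 = rng.standard_normal((d, 5))               # five example feature vectors
        Y0_perturbed = Y0 + 1e-3 * rng.standard_normal(Y0.shape)
        gap_in = np.linalg.norm(Y0 - Y0_perturbed)
        gap_out = np.linalg.norm(forward(Y0, weights, biases)
                                 - forward(Y0_perturbed, weights, biases))
        print("input gap:", gap_in, "output gap after 100 layers:", gap_out)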

  10. Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.

    2017-05-01

    Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Nowadays, methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While the polishing of network architectures has received a lot of scholarly attention, from the practical point of view the preparation of a large image dataset for successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example, no infrared or radar image datasets large enough for successful training of a deep neural network are available to date in the public domain. Recent advances in deep neural networks prove that they are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation and imitation of the style of a given artist. Thus a natural question arises: how could deep neural networks be used to augment existing large image datasets? This paper is focused on the development of the Thermalnet deep convolutional neural network for augmentation of existing large visible image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.

  11. Determining geophysical properties from well log data using artificial neural networks and fuzzy inference systems

    NASA Astrophysics Data System (ADS)

    Chang, Hsien-Cheng

    Two novel synergistic systems consisting of artificial neural networks and fuzzy inference systems are developed to determine geophysical properties by using well log data. These systems are employed to improve the determination accuracy in carbonate rocks, which are generally more complex than siliciclastic rocks. One system, consisting of a single adaptive resonance theory (ART) neural network and three fuzzy inference systems (FISs), is used to determine the permeability category. The other system, which is composed of three ART neural networks and a single FIS, is employed to determine the lithofacies. The geophysical properties studied in this research, permeability category and lithofacies, are treated as categorical data. The permeability values are transformed into a "permeability category" to account for the effects of scale differences between core analyses and well logs, and heterogeneity in the carbonate rocks. The ART neural networks dynamically cluster the input data sets into different groups. The FIS is used to incorporate geologic experts' knowledge, which is usually in linguistic forms, into systems. These synergistic systems thus provide viable alternative solutions to overcome the effects of heterogeneity, the uncertainties of carbonate rock depositional environments, and the scarcity of well log data. The results obtained in this research show promising improvements over backpropagation neural networks. For the permeability category, the prediction accuracies are 68.4% and 62.8% for the multiple-single ART neural network-FIS and a single backpropagation neural network, respectively. For lithofacies, the prediction accuracies are 87.6%, 79%, and 62.8% for the single-multiple ART neural network-FIS, a single ART neural network, and a single backpropagation neural network, respectively. The sensitivity analysis results show that the multiple-single ART neural networks-FIS and a single ART neural network possess the same matching trends in determining lithofacies. This research shows that the adaptive resonance theory neural networks enable decision-makers to clearly distinguish the importance of different pieces of data which are useful in three-dimensional subsurface modeling. Geologic experts' knowledge can be easily applied and maintained by using the fuzzy inference systems.

  12. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  13. Application of the ANNA neural network chip to high-speed character recognition.

    PubMed

    Sackinger, E; Boser, B E; Bromley, J; Lecun, Y; Jackel, L D

    1992-01-01

    A neural network with 136,000 connections for recognition of handwritten digits has been implemented using a mixed analog/digital neural network chip. The neural network chip is capable of processing 1,000 characters/s. The recognition system has essentially the same error rate (5%) as a simulation of the network with 32-b floating-point precision.

  14. Identification of Common Neural Circuit Disruptions in Cognitive Control Across Psychiatric Disorders.

    PubMed

    McTeague, Lisa M; Huemer, Julia; Carreon, David M; Jiang, Ying; Eickhoff, Simon B; Etkin, Amit

    2017-07-01

    Cognitive deficits are a common feature of psychiatric disorders. The authors investigated the nature of disruptions in neural circuitry underlying cognitive control capacities across psychiatric disorders through a transdiagnostic neuroimaging meta-analysis. A PubMed search was conducted for whole-brain functional neuroimaging articles published through June 2015 that compared activation in patients with axis I disorders and matched healthy control participants during cognitive control tasks. Tasks that probed performance or conflict monitoring, response inhibition or selection, set shifting, verbal fluency, and recognition or working memory were included. Activation likelihood estimation meta-analyses were conducted on peak voxel coordinates. The 283 experiments submitted to meta-analysis included 5,728 control participants and 5,493 patients with various disorders (schizophrenia, bipolar or unipolar depression, anxiety disorders, and substance use disorders). Transdiagnostically abnormal activation was evident in the left prefrontal cortex as well as the anterior insula, the right ventrolateral prefrontal cortex, the right intraparietal sulcus, and the midcingulate/presupplementary motor area. Disruption was also observed in a more anterior cluster in the dorsal cingulate cortex, which overlapped with a network of structural perturbation that the authors previously reported in a transdiagnostic meta-analysis of gray matter volume. These findings demonstrate a common pattern of disruption across major psychiatric disorders that parallels the "multiple-demand network" observed in intact cognition. This network interfaces with the anterior-cingulo-insular or "salience network" demonstrated to be transdiagnostically vulnerable to gray matter reduction. Thus, networks intrinsic to adaptive, flexible cognition are vulnerable to broad-spectrum psychopathology. Dysfunction in these networks may reflect an intermediate transdiagnostic phenotype, which could be leveraged to advance therapeutics.

  15. Lesion Mapping the Four-Factor Structure of Emotional Intelligence

    PubMed Central

    Operskalski, Joachim T.; Paul, Erick J.; Colom, Roberto; Barbey, Aron K.; Grafman, Jordan

    2015-01-01

    Emotional intelligence (EI) refers to an individual’s ability to process and respond to emotions, including recognizing the expression of emotions in others, using emotions to enhance thought and decision making, and regulating emotions to drive effective behaviors. Despite their importance for goal-directed social behavior, little is known about the neural mechanisms underlying specific facets of EI. Here, we report findings from a study investigating the neural bases of these specific components for EI in a sample of 130 combat veterans with penetrating traumatic brain injury. We examined the neural mechanisms underlying experiential (perceiving and using emotional information) and strategic (understanding and managing emotions) facets of EI. Factor scores were submitted to voxel-based lesion symptom mapping to elucidate their neural substrates. The results indicate that two facets of EI (perceiving and managing emotions) engage common and distinctive neural systems, with shared dependence on the social knowledge network, and selective engagement of the orbitofrontal and parietal cortex for strategic aspects of emotional information processing. The observed pattern of findings suggests that sub-facets of experiential and strategic EI can be characterized as separable but related processes that depend upon a core network of brain structures within frontal, temporal and parietal cortex. PMID:26858627

  16. Neural-adaptive control of single-master-multiple-slaves teleoperation for coordinated multiple mobile manipulators with time-varying communication delays and input uncertainties.

    PubMed

    Li, Zhijun; Su, Chun-Yi

    2013-09-01

    In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.

  17. Machine Learning and Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Chapline, George

    The author has previously pointed out some similarities between self-organizing neural networks and quantum mechanics. These types of neural networks were originally conceived of as a way of emulating the cognitive capabilities of the human brain. Recently, extensions of these networks, collectively referred to as deep learning networks, have strengthened the connection between self-organizing neural networks and human cognitive capabilities. In this note we consider whether hardware quantum devices might be useful for emulating neural networks with human-like cognitive capabilities, or alternatively whether implementations of deep learning neural networks using conventional computers might lead to better algorithms for solving the many-body Schrodinger equation.

  18. Using fuzzy logic to integrate neural networks and knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Yen, John

    1991-01-01

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.

  19. A neural network application to classification of health status of HIV/AIDS patients.

    PubMed

    Kwak, N K; Lee, C

    1997-04-01

    This paper presents an application of neural networks to classify and to predict the health status of HIV/AIDS patients. A neural network model in classifying both the well and not-well health status of HIV/AIDS patients is developed and evaluated in terms of validity and reliability of the test. Several different neural network topologies are applied to AIDS Cost and Utilization Survey (ACSUS) datasets in order to demonstrate the neural network's capability.

  20. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    NASA Astrophysics Data System (ADS)

    Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr

    2017-10-01

    Most modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods, and neural networks in particular. Deep learning neural networks are the most promising modern technique to separate signal and background, and nowadays they can be widely and successfully implemented as a part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.
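
    For readers unfamiliar with the setup, signal-versus-background separation reduces to supervised binary classification on per-event kinematic variables. The sketch below uses entirely synthetic stand-in features and a plain (non-Bayesian) scikit-learn multilayer perceptron rather than the deep or Bayesian networks compared in the article; it only illustrates the workflow and the ROC-AUC figure of merit.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        # Synthetic stand-ins for kinematic variables of signal and background events.
        rng = np.random.default_rng(0)
        n = 5000
        signal = rng.normal(loc=[1.0, 0.5, -0.5], scale=1.0, size=(n, 3))
        background = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(n, 3))
        X = np.vstack([signal, background])
        y = np.concatenate([np.ones(n), np.zeros(n)])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
        clf.fit(X_tr, y_tr)

        # Area under the ROC curve is a common figure of merit for the separation.
        print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))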

  1. Improvement of the Hopfield Neural Network by MC-Adaptation Rule

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Zhao, Hong

    2006-06-01

    We show that the performance of the Hopfield neural networks, especially the quality of the recall and the effective storage capacity, can be greatly improved by making use of a recently presented neural network designing method without altering the whole structure of the network. In the improved neural network, a memory pattern is recalled exactly from initial states having a given degree of similarity with the memory pattern, and thus one can avoid applying the overlap criterion as carried out in the Hopfield neural networks.
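
    For context, the baseline being improved is the classical Hopfield associative memory: Hebbian storage of bipolar patterns and asynchronous recall from a corrupted initial state. The sketch below implements that standard baseline in NumPy; it does not reproduce the MC-adaptation designing rule of the entry, and the pattern count and network size are arbitrary.

        import numpy as np

        def store(patterns):
            """Hebbian weight matrix for bipolar (+1/-1) patterns, zero diagonal."""
            n = patterns.shape[1]
            W = patterns.T @ patterns / n
            np.fill_diagonal(W, 0.0)
            return W

        def recall(W, state, sweeps=10, seed=0):
            """Asynchronous updates in random order for a fixed number of sweeps."""
            state = state.copy()
            rng = np.random.default_rng(seed)
            for _ in range(sweeps):
                for i in rng.permutation(len(state)):
                    state[i] = 1 if W[i] @ state >= 0 else -1
            return state

        rng = np.random.default_rng(1)
        patterns = rng.choice([-1, 1], size=(3, 100))      # 3 random memories, 100 units
        W = store(patterns)

        probe = patterns[0].copy()
        flipped = rng.choice(100, size=15, replace=False)  # corrupt 15% of the bits
        probe[flipped] *= -1
        overlap = recall(W, probe) @ patterns[0] / 100.0   # close to 1.0 if recalled
        print("overlap with the stored pattern after recall:", overlap)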

  2. The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.

    PubMed

    Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun

    2018-01-01

    Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation result showed that the periodicity of the network energy distribution was positively correlated to the number of neurons and coupling strength, but negatively correlated to signal transmitting delay. Moreover, a relationship was established between the energy distribution feature and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation result of structural neural network based on the Wang-Zhang biophysical model of neurons showed that both models were essentially consistent.

  3. An adaptive H∞ controller design for bank-to-turn missiles using ridge Gaussian neural networks.

    PubMed

    Lin, Chuan-Kai; Wang, Sheng-De

    2004-11-01

    A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of the autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, equals the expansions of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate nonlinear and complex systems accurately, the small approximation errors may affect the tracking performance significantly. Therefore, by employing H∞ control theory, it is easy to attenuate the effects of the approximation errors of the ridge Gaussian neural networks to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural network-based autopilot with H∞ stabilization.

  4. Constraint satisfaction adaptive neural network and heuristics combined approaches for generalized job-shop scheduling.

    PubMed

    Yang, S; Wang, D

    2000-01-01

    This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its weights of connections and biases of units based on the sequence and resource constraints of the job-shop scheduling problem during its processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to the quality of solutions and the solving speed.

  5. Financial time series prediction using spiking neural networks.

    PubMed

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, is presented for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks (a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network), and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting, and this in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.

  6. Sub-0.1 μm optical track width measurement

    NASA Astrophysics Data System (ADS)

    Smith, Richard J.; See, Chung W.; Somekh, Mike G.; Yacoot, Andrew

    2005-08-01

    In this paper, we will describe a technique that combines a common path scanning optical interferometer with artificial neural networks (ANNs) to perform track width measurements that are significantly beyond the capability of conventional optical systems. Artificial neural networks have been used for many different applications. In the present case, ANNs are trained using profiles of known samples obtained from the scanning interferometer. They are then applied to tracks that have not previously been exposed to the networks. This paper will discuss the impacts of various ANN configurations, and of the processing of the input signal, on the training of the network. The profiles of the samples, which are used as the inputs to the ANNs, are obtained with a common path scanning optical interferometer. It provides extremely repeatable measurements with a very high signal-to-noise ratio, both of which are essential for the working of the ANNs. The characteristics of the system will be described. A number of samples with line widths ranging from 60 nm to 3 μm have been measured to test the system. The system can measure line widths down to 60 nm with a standard deviation of 3 nm, using an optical wavelength of 633 nm and a system numerical aperture of 0.3. These results will be presented in detail along with a discussion of the potential of this technique.

  7. Non-Intrusive Gaze Tracking Using Artificial Neural Networks

    DTIC Science & Technology

    1994-01-05

    We have developed an artificial neural network based gaze tracking system, which can be customized to individual users. A three layer feed forward...empirical analysis of the performance of a large number of artificial neural network architectures for this task. Suggestions for further explorations...for neurally based gaze trackers are presented, and are related to other similar artificial neural network applications such as autonomous road following.

  8. Common and unique gray matter correlates of episodic memory dysfunction in frontotemporal dementia and Alzheimer's disease.

    PubMed

    Irish, Muireann; Piguet, Olivier; Hodges, John R; Hornberger, Michael

    2014-04-01

    Conflicting evidence exists regarding the integrity of episodic memory in the behavioral variant of frontotemporal dementia (bvFTD). Recent converging evidence suggests that episodic memory in progressive cases of bvFTD is compromised to the same extent as in Alzheimer's disease (AD). The underlying neural substrates of these episodic memory deficits, however, likely differ contingent on dementia type. In this study we sought to elucidate the neural substrates of episodic memory performance, across recall and recognition tasks, in both patient groups using voxel-based morphometry (VBM) analyses. We predicted that episodic memory dysfunction would be apparent in both patient groups but would relate to divergent patterns of neural atrophy specific to each dementia type. We assessed episodic memory, across verbal and visual domains, in 19 bvFTD, 18 AD patients, and 19 age- and education-matched controls. Behaviorally, patient groups were indistinguishable for immediate and delayed recall, across verbal and visual domains. Whole-brain VBM analyses revealed regions commonly implicated in episodic retrieval across groups, namely the right temporal pole, right frontal lobe, left paracingulate gyrus, and right anterior hippocampus. Divergent neural networks specific to each group were also identified. Whereas a widespread network including posterior regions such as the posterior cingulate cortex, parietal and occipital cortices was exclusively implicated in AD, the frontal and anterior temporal lobes underpinned the episodic memory deficits in bvFTD. Our results point to distinct neural changes underlying episodic memory decline specific to each dementia syndrome. Copyright © 2013 Wiley Periodicals, Inc.

  9. Neural dynamics based on the recognition of neural fingerprints

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2015-01-01

    Experimental evidence has revealed the existence of characteristic spiking features in different neural signals, e.g., individual neural signatures identifying the emitter or functional signatures characterizing specific tasks. These neural fingerprints may play a critical role in neural information processing, since they allow receptors to discriminate or contextualize incoming stimuli. This could be a powerful strategy for neural systems that greatly enhances the encoding and processing capacity of these networks. Nevertheless, the study of information processing based on the identification of specific neural fingerprints has attracted little attention. In this work, we study (i) the emerging collective dynamics of a network of neurons that communicate with each other by exchange of neural fingerprints and (ii) the influence of the network topology on the self-organizing properties within the network. Complex collective dynamics emerge in the network in the presence of stimuli. Predefined inputs, i.e., specific neural fingerprints, are detected and encoded into coexisting patterns of activity that propagate throughout the network with different spatial organization. The patterns evoked by a stimulus can survive after the stimulation is over, which provides memory mechanisms to the network. The results presented in this paper suggest that neural information processing based on neural fingerprints can be a plausible, flexible, and powerful strategy. PMID:25852531

  10. Structural reliability calculation method based on the dual neural network and direct integration method.

    PubMed

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects structural characteristics and actual load-bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but the calculation of multiple integrals remains mathematically difficult. Therefore, a dual neural network method is proposed in this paper for calculating multiple integrals. The dual neural network consists of two neural networks: neural network A is used to learn the integrand function, and neural network B is used to simulate the original (antiderivative) function. According to the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second moment method demonstrate that the proposed method is efficient and accurate for structural reliability problems.

  11. Patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks.

    PubMed

    Aguiar, Manuela A D; Dias, Ana Paula S; Ferreira, Flora

    2017-01-01

    We consider feed-forward and auto-regulation feed-forward neural (weighted) coupled cell networks. In feed-forward neural networks, cells are arranged in layers such that the cells of the first layer have empty input set and cells of each other layer receive only inputs from cells of the previous layer. An auto-regulation feed-forward neural coupled cell network is a feed-forward neural network where additionally some cells of the first layer have auto-regulation, that is, they have a self-loop. Given a network structure, a robust pattern of synchrony is a space defined in terms of equalities of cell coordinates that is flow-invariant for any coupled cell system (with additive input structure) associated with the network. In this paper, we describe the robust patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks. Regarding feed-forward neural networks, we show that only cells in the same layer can synchronize. On the other hand, in the presence of auto-regulation, we prove that cells in different layers can synchronize in a robust way and we give a characterization of the possible patterns of synchrony that can occur for auto-regulation feed-forward neural networks.

  12. Pattern classification and recognition of invertebrate functional groups using self-organizing neural networks.

    PubMed

    Zhang, WenJun

    2007-07-01

    Self-organizing neural networks can be used to mimic non-linear systems. The main objective of this study is to make pattern classification and recognition on sampling information using two self-organizing neural network models. Invertebrate functional groups sampled in the irrigated rice field were classified and recognized using one-dimensional self-organizing map and self-organizing competitive learning neural networks. Comparisons between neural network models, distance (similarity) measures, and number of neurons were conducted. The results showed that self-organizing map and self-organizing competitive learning neural network models were effective in pattern classification and recognition of sampling information. Overall, the performance of the one-dimensional self-organizing map neural network was better than that of the self-organizing competitive learning neural network. The number of neurons could determine the number of classes in the classification. Different neural network models with various distance (similarity) measures yielded similar classifications. Some differences, dependent upon the specific network structure, would be found. The pattern of an unrecognized functional group was recognized with the self-organizing neural network. A relatively consistent classification indicated that the following invertebrate functional groups, terrestrial blood sucker; terrestrial flyer; tourist (nonpredatory species with no known functional role other than as prey in ecosystem); gall former; collector (gather, deposit feeder); predator and parasitoid; leaf miner; idiobiont (acarine ectoparasitoid), were classified into the same group, and the following invertebrate functional groups, external plant feeder; terrestrial crawler, walker, jumper or hunter; neustonic (water surface) swimmer (semi-aquatic), were classified into another group. It was concluded that reliable conclusions could be drawn from comparisons of different neural network models that use different distance (similarity) measures. Results with larger consistency will be more reliable.
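
    To make the method concrete, the following is a from-scratch one-dimensional self-organizing map in NumPy, trained on synthetic "abundance profile" vectors that merely stand in for the sampled functional-group data; the node count, the Euclidean distance measure and the decay schedules are illustrative choices, not those of the study.

        import numpy as np

        def train_som_1d(data, n_nodes=4, epochs=200, lr0=0.5, sigma0=2.0, seed=0):
            """Train a 1-D self-organizing map and return its codebook vectors."""
            rng = np.random.default_rng(seed)
            codebook = rng.uniform(data.min(), data.max(), size=(n_nodes, data.shape[1]))
            for epoch in range(epochs):
                lr = lr0 * (1.0 - epoch / epochs)              # decaying learning rate
                sigma = sigma0 * (1.0 - epoch / epochs) + 0.5  # decaying neighbourhood width
                for x in rng.permutation(data):
                    winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
                    dist = np.abs(np.arange(n_nodes) - winner)  # distance along the 1-D map
                    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
                    codebook += lr * h[:, None] * (x - codebook)
            return codebook

        def best_matching_unit(codebook, x):
            """Assign a sample to the index of its closest codebook vector."""
            return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

        # Synthetic abundance profiles of two broad collections of functional groups.
        rng = np.random.default_rng(2)
        group_a = rng.normal(loc=[5, 1, 0, 0], scale=0.5, size=(30, 4))
        group_b = rng.normal(loc=[0, 0, 4, 2], scale=0.5, size=(30, 4))
        data = np.vstack([group_a, group_b])

        som = train_som_1d(data)
        labels = [best_matching_unit(som, x) for x in data]
        print("node assignments, first collection: ", labels[:30])
        print("node assignments, second collection:", labels[30:])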

  13. Cellular and synaptic network defects in autism

    PubMed Central

    Peça, João; Feng, Guoping

    2012-01-01

    Many candidate genes are now thought to confer susceptibility to autism spectrum disorder (ASD). Here we review four interrelated complexes, each composed of multiple families of genes that functionally coalesce on common cellular pathways. We illustrate a common thread in the organization of glutamatergic synapses and suggest a link between genes involved in Tuberous Sclerosis Complex, Fragile X syndrome, Angelman syndrome and several synaptic ASD candidate genes. When viewed in this context, progress in deciphering the molecular architecture of cellular protein-protein interactions together with the unraveling of synaptic dysfunction in neural networks may prove pivotal to advancing our understanding of ASDs. PMID:22440525

  14. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning is successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.

  15. Thermoelastic steam turbine rotor control based on neural network

    NASA Astrophysics Data System (ADS)

    Rzadkowski, Romuald; Dominiczak, Krzysztof; Radulski, Wojciech; Szczepanik, R.

    2015-12-01

    Considered here are Nonlinear Auto-Regressive neural networks with eXogenous inputs (NARX) as a mathematical model of a steam turbine rotor for controlling steam turbine stress on-line. In order to obtain neural networks that locate critical stress and temperature points in the steam turbine during transient states, an FE rotor model was built. This model was used to train the neural networks on the basis of steam turbine transient operating data. The training included nonlinearity related to steam turbine expansion, heat exchange and rotor material properties during transients. Such neural networks are algorithms which can be implemented on PLC controllers. This allows for the application of neural networks to control steam turbine stress in industrial power plants.
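
    As a concrete, entirely synthetic illustration of the NARX idea, the sketch below predicts the next value of a stress-like output from lagged values of an exogenous input and of the output itself, using scikit-learn's MLPRegressor; the simple lag model stands in for the FE-derived training data and the PLC implementation described in the entry.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        T = 2000
        u = np.cumsum(rng.normal(0, 0.05, T))        # exogenous input, e.g. a temperature signal
        y = np.zeros(T)                              # output, e.g. a critical-point stress proxy
        for t in range(2, T):
            # simple nonlinear lagged dependence standing in for the thermoelastic response
            y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * np.tanh(u[t - 1]) + 0.01 * rng.normal()

        def narx_features(u, y, nu=3, ny=3):
            """Build [u(t-nu..t-1), y(t-ny..t-1)] -> y(t) training pairs."""
            X, target = [], []
            for t in range(max(nu, ny), len(y)):
                X.append(np.concatenate([u[t - nu:t], y[t - ny:t]]))
                target.append(y[t])
            return np.array(X), np.array(target)

        X, target = narx_features(u, y)
        split = int(0.8 * len(X))
        net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
        net.fit(X[:split], target[:split])
        pred = net.predict(X[split:])
        print("one-step-ahead RMSE:", np.sqrt(np.mean((pred - target[split:]) ** 2)))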

  16. The use of artificial neural networks in experimental data acquisition and aerodynamic design

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1991-01-01

    It is proposed that an artificial neural network be used to construct an intelligent data acquisition system. The artificial neural networks (ANN) model has a potential for replacing traditional procedures as well as for use in computational fluid dynamics validation. Potential advantages of the ANN model are listed. As a proof of concept, the author modeled a NACA 0012 airfoil at specific conditions, using the neural network simulator NETS, developed by James Baffes of the NASA Johnson Space Center. The neural network predictions were compared to the actual data. It is concluded that artificial neural networks can provide an elegant and valuable class of mathematical tools for data analysis.

  17. Research on artificial neural network intrusion detection photochemistry based on the improved wavelet analysis and transformation

    NASA Astrophysics Data System (ADS)

    Li, Hong; Ding, Xue

    2017-03-01

    This paper combines wavelet analysis and wavelet transform theory with artificial neural networks by preprocessing the point feature attributes used in intrusion detection so that they are suitable for the improved wavelet neural network. The resulting intrusion classification model obtains better adaptability and self-learning ability, greatly enhances the wavelet neural network's ability to solve the intrusion detection problem in the field, reduces storage space, improves the performance of the constructed neural network, and reduces the training time. Finally, simulation experiments on the KDDCup99 data set show that this method not only reduces the complexity of constructing the wavelet neural network but also ensures the accuracy of the intrusion classification.

  18. A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application.

    PubMed

    Li, Shuai; Li, Yangming; Wang, Zheng

    2013-03-01

    This paper presents a class of recurrent neural networks to solve quadratic programming problems. Different from most existing recurrent neural networks for solving quadratic programming problems, the proposed neural network model converges in finite time and the activation function is not required to be a hard-limiting function for finite convergence time. The stability, finite-time convergence property and the optimality of the proposed neural network for solving the original quadratic programming problem are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network with different parameters. In addition, the proposed neural network is applied to solving the k-winner-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for solving the k-WTA problem. Copyright © 2012 Elsevier Ltd. All rights reserved.
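
    The finite-time dual network itself is not reproduced here, but the general pattern, a recurrent dynamical system whose equilibrium solves a box-constrained quadratic program, can be illustrated on the k-WTA application with a simple single-dual-variable dynamics in the spirit of earlier, asymptotically convergent dual neural networks. All parameter values below are illustrative.

        import numpy as np

        def kwta_dual_network(v, k, alpha=0.05, eps=1.0, dt=0.002, steps=10000):
            """Single-dual-neuron dynamics for k-winners-take-all, integrated with
            forward Euler.  The selection is posed as the box-constrained QP
              min (alpha/2)*||x||^2 - v.x   s.t.  sum(x) = k,  0 <= x <= 1,
            whose solution is x_i = clip((v_i - lam) / alpha, 0, 1) at the lam
            that makes the entries sum to k."""
            lam = 0.0
            for _ in range(steps):
                x = np.clip((v - lam) / alpha, 0.0, 1.0)
                lam += (dt / eps) * (x.sum() - k)     # drive sum(x) toward k
            return np.clip((v - lam) / alpha, 0.0, 1.0)

        v = np.array([0.9, 0.1, 0.75, 0.3, 0.6, 0.2, 0.85, 0.4, 0.05, 0.5])
        x = kwta_dual_network(v, k=3)
        print("winners:", np.where(x > 0.5)[0])       # indices of the 3 largest entries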

  19. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  20. Firing patterns transition and desynchronization induced by time delay in neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Shoufang; Zhang, Jiqian; Wang, Maosheng; Hu, Chin-Kun

    2018-06-01

    We used the Hindmarsh-Rose (HR) model (Hindmarsh and Rose, 1984) to study the effect of time delay on the transition of firing behaviors and desynchronization in neural networks. As time delay is increased, neural networks exhibit a diversity of firing behaviors, including regular spiking or bursting and firing pattern transitions (FPTs). Meanwhile, the desynchronization of firing and unstable bursting with decreasing amplitude in the neural system are also increasingly enhanced with the increase of time delay. Furthermore, we also studied the effect of coupling strength and network randomness on these phenomena. Our results imply that time delays can induce transitions and desynchronization of firing behaviors in neural networks. These findings provide new insight into the role of time delay in the firing activities of neural networks, and can help to better understand the firing phenomena in complex systems of neural networks. A possible mechanism in the brain that can cause the increase of time delay is discussed.
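
    For readers unfamiliar with the model, a single uncoupled Hindmarsh-Rose neuron already produces the spiking and bursting regimes discussed above. The sketch below integrates the standard three-variable HR equations with forward Euler using commonly quoted parameter values; the delayed coupling and network structure studied in the entry are not included.

        import numpy as np

        def hindmarsh_rose(I=2.0, a=1.0, b=3.0, c=1.0, d=5.0,
                           r=0.006, s=4.0, x_rest=-1.6, dt=0.01, steps=200_000):
            """Forward-Euler integration of a single Hindmarsh-Rose neuron.
            Returns the membrane-potential-like variable x over time."""
            x, y, z = -1.0, 0.0, 0.0
            trace = np.empty(steps)
            for t in range(steps):
                dx = y - a * x**3 + b * x**2 - z + I
                dy = c - d * x**2 - y
                dz = r * (s * (x - x_rest) - z)
                x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
                trace[t] = x
            return trace

        trace = hindmarsh_rose(I=2.0)
        # crude spike count: upward crossings of x = 1.0 (spikes arrive in bursts)
        spikes = np.sum((trace[1:] > 1.0) & (trace[:-1] <= 1.0))
        print("spikes in the simulated trace:", spikes)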

  1. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    PubMed

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Applications of artificial neural nets in clinical biomechanics.

    PubMed

    Schöllhorn, W I

    2004-11-01

    The purpose of this article is to provide an overview of current applications of artificial neural networks in the area of clinical biomechanics. The body of literature on artificial neural networks has grown intractably vast during the last 15 years. Conventional statistical models may present certain limitations that can be overcome by neural networks. Artificial neural networks in general are introduced, and some limitations and some proven benefits are discussed.

  3. Neural Networks for Rapid Design and Analysis

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.; Maghami, Peiman G.

    1998-01-01

    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays, as inputs to the networks, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.

  4. Calibration of a Hall effect displacement measurement system for complex motion analysis using a neural network.

    PubMed

    Northey, G W; Oliver, M L; Rittenhouse, D M

    2006-01-01

    Biomechanics studies often require the analysis of position and orientation. Although a variety of transducer and camera systems can be utilized, a common inexpensive alternative is the Hall effect sensor. Hall effect sensors have been used extensively for one-dimensional position analysis but their non-linear behavior and cross-talk effects make them difficult to calibrate for effective and accurate two- and three-dimensional position and orientation analysis. The aim of this study was to develop and calibrate a displacement measurement system for a hydraulic-actuation joystick used for repetitive motion analysis of heavy equipment operators. The system utilizes an array of four Hall effect sensors that are all active during any joystick movement. This built-in redundancy allows the calibration to utilize fully connected feed forward neural networks in conjunction with a Microscribe 3D digitizer. A fully connected feed forward neural network with one hidden layer containing five neurons was developed. Results indicate that the ability of the neural network to accurately predict the x, y and z coordinates of the joystick handle was good with r(2) values of 0.98 and higher. The calibration technique was found to be equally as accurate when used on data collected 5 days after the initial calibration, indicating the system is robust and stable enough to not require calibration every time the joystick is used. This calibration system allowed an infinite number of joystick orientations and positions to be found within the range of joystick motion.
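
    A rough sketch of the calibration setup described above (four Hall-sensor readings in, one hidden layer of five neurons, x/y/z handle coordinates out) is given below; the sensor model and data are invented stand-ins for the Microscribe-referenced measurements, so only the structure, not the reported accuracy, carries over.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)

        # Invented ground truth: joystick handle positions (x, y, z) within its range of motion
        xyz = rng.uniform(-1, 1, size=(3000, 3))

        # Invented nonlinear, cross-coupled Hall-sensor response (4 sensors, all active)
        def sensors(p):
            x, y, z = p[:, 0], p[:, 1], p[:, 2]
            return np.column_stack([
                np.tanh(1.5 * x + 0.3 * y),
                np.tanh(1.5 * y - 0.2 * x),
                np.tanh(x + y + 0.5 * z),
                np.tanh(z - 0.4 * x * y),
            ]) + 0.01 * rng.normal(size=(len(p), 4))

        H = sensors(xyz)
        H_tr, H_te, p_tr, p_te = train_test_split(H, xyz, test_size=0.25, random_state=1)

        # One hidden layer with five neurons, as in the study; other settings are illustrative
        net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=1)
        net.fit(H_tr, p_tr)
        print("r^2 on held-out poses:", net.score(H_te, p_te))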

  5. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  6. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.

  7. Impact of leakage delay on bifurcation in high-order fractional BAM neural networks.

    PubMed

    Huang, Chengdai; Cao, Jinde

    2018-02-01

    The effects of leakage delay on the dynamics of integer-order neural networks have lately received considerable attention. It has been confirmed that fractional neural networks more appropriately capture the dynamical properties of neural networks, but results for fractional neural networks with leakage delay are relatively few. This paper primarily concentrates on the issue of bifurcation for high-order fractional bidirectional associative memory (BAM) neural networks involving leakage delay. The first attempt is made in this paper to tackle the stability and bifurcation of high-order fractional BAM neural networks with time delay in the leakage terms. The conditions for the appearance of bifurcation in the proposed systems with leakage delay are first established by adopting the time delay as a bifurcation parameter. Then, the bifurcation criteria for such a system without leakage delay are acquired. Comparative analysis shows that the stability performance of the proposed high-order fractional neural networks is critically weakened by leakage delay, which therefore cannot be overlooked. Numerical examples are finally exhibited to attest to the efficiency of the theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Coronary Artery Diagnosis Aided by Neural Network

    NASA Astrophysics Data System (ADS)

    Stefko, Kamil

    2007-01-01

    Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. Application of an optimised feed-forward multi-layer back-propagation neural network (MLBP) for detection of narrowing in coronary artery vessels is presented in this paper. The research was performed using 580 data records from traditional ECG exercise tests confirmed by coronary arteriography results. Each record of the training database included a description of the state of a patient, providing the input data for the neural network. The level and slope of the ST segment of a 12-lead ECG signal recorded at rest and after exercise (48 floating-point values) formed the main component of the neural network's input data. Coronary arteriography results (which verified the presence or absence of more than 50% stenosis in the particular coronary vessels) were used as the correct training output pattern. More than 96% of cases were correctly recognised by the specially optimised and thoroughly verified neural network. The leave-one-out method was used for verification, so all 580 data records could be used both for training and for verification of the neural network.
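
    A leave-one-out evaluation of the kind described above can be organized as in the sketch below; the 48 ST-segment features and the stenosis labels are random placeholders, the record count is reduced for runtime, and the network size is an illustrative guess rather than the optimized MLBP of the study.

        import numpy as np
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)

        n_records, n_features = 60, 48      # 48 ST level/slope values per patient (placeholder size)
        X = rng.normal(size=(n_records, n_features))
        # Synthetic "stenosis present" label driven by a few features
        y = (X[:, :4].sum(axis=1) + 0.3 * rng.normal(size=n_records) > 0).astype(int)

        clf = make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=2))

        # Leave-one-out: every record is used for training in all folds but one, and once for testing
        acc = cross_val_score(clf, X, y, cv=LeaveOneOut())
        print("leave-one-out accuracy:", acc.mean())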

  9. A Network Neuroscience of Human Learning: Potential To Inform Quantitative Theories of Brain and Behavior

    PubMed Central

    Bassett, Danielle S.; Mattar, Marcelo G.

    2017-01-01

    Humans adapt their behavior to their external environment in a process often facilitated by learning. Efforts to describe learning empirically can be complemented by quantitative theories that map changes in neurophysiology to changes in behavior. In this review we highlight recent advances in network science that offer a set of tools and a general perspective that may be particularly useful in understanding types of learning that are supported by distributed neural circuits. We describe recent applications of these tools to neuroimaging data that provide unique insights into adaptive neural processes, the attainment of knowledge, and the acquisition of new skills, forming a network neuroscience of human learning. While promising, the tools have yet to be linked to the well-formulated models of behavior that are commonly utilized in cognitive psychology. We argue that continued progress will require the explicit marriage of network approaches to neuroimaging data and quantitative models of behavior. PMID:28259554

  10. A Network Neuroscience of Human Learning: Potential to Inform Quantitative Theories of Brain and Behavior.

    PubMed

    Bassett, Danielle S; Mattar, Marcelo G

    2017-04-01

    Humans adapt their behavior to their external environment in a process often facilitated by learning. Efforts to describe learning empirically can be complemented by quantitative theories that map changes in neurophysiology to changes in behavior. In this review we highlight recent advances in network science that offer a set of tools and a general perspective that may be particularly useful in understanding types of learning that are supported by distributed neural circuits. We describe recent applications of these tools to neuroimaging data that provide unique insights into adaptive neural processes, the attainment of knowledge, and the acquisition of new skills, forming a network neuroscience of human learning. While promising, the tools have yet to be linked to the well-formulated models of behavior that are commonly utilized in cognitive psychology. We argue that continued progress will require the explicit marriage of network approaches to neuroimaging data and quantitative models of behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. A comparison of linear and nonlinear statistical techniques in performance attribution.

    PubMed

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  12. Artificial neural networks in mammography interpretation and diagnostic decision making.

    PubMed

    Ayer, Turgay; Chen, Qiushi; Burnside, Elizabeth S

    2013-01-01

    Screening mammography is the most effective means for early detection of breast cancer. Although general rules for discriminating malignant and benign lesions exist, radiologists are unable to perfectly detect and classify all lesions as malignant and benign, for many reasons which include, but are not limited to, overlap of features that distinguish malignancy, difficulty in estimating disease risk, and variability in recommended management. When predictive variables are numerous and interact, ad hoc decision making strategies based on experience and memory may lead to systematic errors and variability in practice. The integration of computer models to help radiologists increase the accuracy of mammography examinations in diagnostic decision making has gained increasing attention in the last two decades. In this study, we provide an overview of one of the most commonly used models, artificial neural networks (ANNs), in mammography interpretation and diagnostic decision making and discuss important features in mammography interpretation. We conclude by discussing several common limitations of existing research on ANN-based detection and diagnostic models and provide possible future research directions.

  13. Common and dissociable neural correlates associated with component processes of inductive reasoning.

    PubMed

    Jia, Xiuqin; Liang, Peipeng; Lu, Jie; Yang, Yanhui; Zhong, Ning; Li, Kuncheng

    2011-06-15

    The ability to perform numerical inductive reasoning requires two key cognitive processes, identification and extrapolation. This study aimed to identify the neural correlates of both component processes of numerical inductive reasoning using event-related fMRI. Three kinds of tasks were solved by twenty right-handed adults: rule induction (RI), rule induction and application (RIA), and perceptual judgment (Jud). Our results showed that the left superior parietal lobule (SPL) extending into the precuneus and the left dorsolateral prefrontal cortex (DLPFC) were commonly recruited in the two components. It was also observed that the fronto-parietal network was more specific to identification, whereas the striatal-thalamic network was more specific to extrapolation. The findings suggest that numerical inductive reasoning is mediated by the coordination of multiple brain areas including the prefrontal, parietal, and subcortical regions, of which some are more specific to demands on only one of these two component processes, whereas others are sensitive to both. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    There have been many studies of gas turbine engine identification via dynamic neural network models. The identification process should minimize the errors between the model and the real object. However, questions about how the training data set is constructed are usually overlooked. This article presents a study of the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine. The input signal of the thermodynamic model is the fuel consumption and the output signal is the engine rotor rotation frequency. Four types of input signals (step, fast, slow and mixed) were used to create the training and testing data sets of the dynamic neural network models. Four dynamic neural networks were created, one for each type of training data set, and each neural network was tested with all four types of test data sets. As a result, 16 transition processes from the four neural networks and four test data sets were compared with the corresponding solutions of the thermodynamic model. The errors were compared across all neural networks for each test data set. The resulting error ranges are small, so the influence of the data set type on identification accuracy is low.
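
    A minimal way to picture the four training-signal types compared above is sketched below: each candidate excitation drives the same (here invented, first-order) engine stand-in, and the resulting input/output pairs would form one training set per signal type for a dynamic neural network identifier. The signal shapes and the engine model are assumptions, not the thermodynamic model of the study.

        import numpy as np

        n = 1000
        t = np.arange(n)

        # Four candidate excitation signals for the fuel-consumption input
        signals = {
            "step": np.where(t < n // 2, 0.2, 0.8),
            "fast": 0.5 + 0.3 * np.sign(np.sin(2 * np.pi * t / 25)),   # rapid switching
            "slow": 0.5 + 0.3 * np.sin(2 * np.pi * t / 500),           # slow sweep
        }
        signals["mixed"] = np.concatenate([signals["step"][:n // 3],
                                           signals["fast"][:n // 3],
                                           signals["slow"][:n - 2 * (n // 3)]])

        def rotor_speed(fuel):
            """Invented first-order stand-in for the engine: speed lags fuel flow."""
            w = np.zeros_like(fuel)
            for k in range(len(fuel) - 1):
                w[k + 1] = w[k] + 0.05 * (fuel[k] - w[k])
            return w

        # One (input, output) training set per signal type, ready for a dynamic NN identifier
        datasets = {name: (u, rotor_speed(u)) for name, u in signals.items()}
        for name, (u, w) in datasets.items():
            print(name, "final rotor speed:", round(w[-1], 3))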

  15. Altered Synchronizations among Neural Networks in Geriatric Depression

    PubMed Central

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G.; Steffens, David C.

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data was collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression. PMID:26180795

  16. Altered Synchronizations among Neural Networks in Geriatric Depression.

    PubMed

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data was collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  17. The prediction of nonlinear dynamic loads on helicopters from flight variables using artificial neural networks

    NASA Technical Reports Server (NTRS)

    Cook, A. B.; Fuller, C. R.; O'Brien, W. F.; Cabell, R. H.

    1992-01-01

    A method of indirectly monitoring component loads through common flight variables is proposed which requires an accurate model of the underlying nonlinear relationships. An artificial neural network (ANN) model learns relationships through exposure to a database of flight variable records and corresponding load histories from an instrumented military helicopter undergoing standard maneuvers. The ANN model, utilizing eight standard flight variables as inputs, is trained to predict normalized time-varying mean and oscillatory loads on two critical components over a range of seven maneuvers. Both interpolative and extrapolative capabilities are demonstrated with agreement between predicted and measured loads on the order of 90 percent to 95 percent. This work justifies pursuing the ANN method of predicting loads from flight variables.

  18. A spiking neural network based on the basal ganglia functional anatomy.

    PubMed

    Baladron, Javier; Hamker, Fred H

    2015-07-01

    We introduce a spiking neural network of the basal ganglia capable of learning stimulus-action associations. We model learning in the three major basal ganglia pathways, direct, indirect and hyperdirect, by spike-time-dependent learning that takes into account the amount of dopamine available (reward). Moreover, we also allow a cortico-thalamic pathway that bypasses the basal ganglia to be learned. As a result the system develops new functionalities for the different basal ganglia pathways: The direct pathway selects actions by disinhibiting the thalamus, the hyperdirect one suppresses alternatives and the indirect pathway learns to inhibit common mistakes. Numerical experiments show that the system is capable of learning sets of either deterministic or stochastic rules. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Detection of Atrial Fibrillation Using Artificial Neural Network with Power Spectrum Density of RR Interval of Electrocardiogram

    NASA Astrophysics Data System (ADS)

    Afdala, Adfal; Nuryani, Nuryani; Satrio Nugroho, Anto

    2017-01-01

    Atrial fibrillation (AF) is a disorder of the heart with fairly high mortality in adults. AF is a common heart arrhythmia which is characterized by missing or irregular contractions of the atria. Therefore, finding a method to detect atrial fibrillation is necessary. In this article a system to detect atrial fibrillation is proposed. The detection system utilized a backpropagation artificial neural network. The input data comprise the power spectral density of the electrocardiogram R-peak intervals, selected by a wrapping method. This research tunes the learning rate, momentum, number of epochs and hidden layer size. The system produces good performance, with accuracy, sensitivity, and specificity of 83.55%, 86.72% and 81.47%, respectively.
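
    The feature pipeline described above (power spectral density of the RR-interval series feeding a backpropagation network) can be sketched as follows; the RR data are synthetic, the series is treated as uniformly sampled per beat (real pipelines usually resample in time), the wrapper-style feature selection is omitted, and the classifier settings are illustrative.

        import numpy as np
        from scipy.signal import welch
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)

        def rr_series(af, n=256):
            """Synthetic RR-interval sequence (seconds); AF modeled as extra irregularity."""
            base = 0.8 + 0.05 * np.sin(np.linspace(0, 8 * np.pi, n))   # respiratory-like modulation
            jitter = (0.15 if af else 0.03) * rng.normal(size=n)
            return base + jitter

        def psd_features(rr):
            # Treat the RR sequence as uniformly sampled ("per beat")
            f, p = welch(rr - rr.mean(), fs=1.0, nperseg=128)
            return p[:32]                                              # low-frequency PSD bins as features

        X = np.array([psd_features(rr_series(af)) for af in ([0] * 200 + [1] * 200)])
        y = np.array([0] * 200 + [1] * 200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=4, stratify=y)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=4)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))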

  20. A consensual neural network

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.

    1991-01-01

    A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. CNN is based on the statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are applied as if they were independent inputs. The independent inputs are classified using stage neural networks and outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote-sensing data and geographic data are given.
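
    The data flow described above (several nonlinear transforms of the same input treated as independent inputs, one stage classifier per transform, a weighted combination of their outputs) can be sketched as below; the particular transforms, stage-network sizes and accuracy-based weights are illustrative assumptions rather than the published CNN design.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = make_classification(n_samples=1200, n_features=8, n_informative=5, random_state=5)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

        # Several nonlinear transforms of the same multisource input, treated as independent inputs
        transforms = [np.tanh, np.sin, lambda z: np.sign(z) * np.sqrt(np.abs(z))]

        stages, weights = [], []
        for g in transforms:
            stage = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=5)
            stage.fit(g(X_tr), y_tr)
            stages.append(stage)
            weights.append(stage.score(g(X_tr), y_tr))   # consensus weight (here: training accuracy)

        # Weighted combination of stage outputs makes the final decision
        proba = sum(w * s.predict_proba(g(X_te)) for w, s, g in zip(weights, stages, transforms))
        print("consensual accuracy:", (proba.argmax(axis=1) == y_te).mean())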

  1. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates popular back-error-propagation neural network. Designed to take advantage of vectorization when used on computers having this capability, also used on any computer equipped with ANSI-77 FORTRAN Compiler. Problems involving matching of patterns or mathematical modeling of systems fit class of problems F77NNS designed to solve. Program has restart capability so neural network solved in stages suitable to user's resources and desires. Enables user to customize patterns of connections between layers of network. Size of neural network F77NNS applied to limited only by amount of random-access memory available to user.

  2. Feedback modulation of neural network synchrony and seizure susceptibility by Mdm2-p53-Nedd4-2 signaling.

    PubMed

    Jewett, Kathryn A; Christian, Catherine A; Bacos, Jonathan T; Lee, Kwan Young; Zhu, Jiuhe; Tsai, Nien-Pei

    2016-03-22

    Neural network synchrony is a critical factor in regulating information transmission through the nervous system. Improperly regulated neural network synchrony is implicated in pathophysiological conditions such as epilepsy. Despite the awareness of its importance, the molecular signaling underlying the regulation of neural network synchrony, especially after stimulation, remains largely unknown. In this study, we show that elevation of neuronal activity by the GABA(A) receptor antagonist, Picrotoxin, increases neural network synchrony in primary mouse cortical neuron cultures. The elevation of neuronal activity triggers Mdm2-dependent degradation of the tumor suppressor p53. We show here that blocking the degradation of p53 further enhances Picrotoxin-induced neural network synchrony, while promoting the inhibition of p53 with a p53 inhibitor reduces Picrotoxin-induced neural network synchrony. These data suggest that Mdm2-p53 signaling mediates a feedback mechanism to fine-tune neural network synchrony after activity stimulation. Furthermore, genetically reducing the expression of a direct target gene of p53, Nedd4-2, elevates neural network synchrony basally and occludes the effect of Picrotoxin. Finally, using a kainic acid-induced seizure model in mice, we show that alterations of Mdm2-p53-Nedd4-2 signaling affect seizure susceptibility. Together, our findings elucidate a critical role of Mdm2-p53-Nedd4-2 signaling underlying the regulation of neural network synchrony and seizure susceptibility and reveal potential therapeutic targets for hyperexcitability-associated neurological disorders.

  3. Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses

    PubMed Central

    Stephen, Emily P.; Lepage, Kyle Q.; Eden, Uri T.; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S.; Guenther, Frank H.; Kramer, Mark A.

    2014-01-01

    The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience. PMID:24678295

  4. Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses.

    PubMed

    Stephen, Emily P; Lepage, Kyle Q; Eden, Uri T; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S; Guenther, Frank H; Kramer, Mark A

    2014-01-01

    The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty-both in the functional network edges and the corresponding aggregate measures of network topology-are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here-appropriate for static and dynamic network inference and different statistical measures of coupling-permits the evaluation of confidence in network measures in a variety of settings common to neuroscience.
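
    One simple instance of the trial-resampling strategy described in the two records above is a bootstrap over trials: recompute a correlation-based functional network on each resample and read off edge-wise confidence intervals plus uncertainty on an aggregate measure (here, mean edge strength). The data are simulated and the procedure is a sketch of the general strategy, not the authors' exact statistical method.

        import numpy as np

        rng = np.random.default_rng(6)
        n_trials, n_sensors, n_time = 60, 8, 200

        # Simulated task-related trials with one genuinely coupled sensor pair (0 and 1)
        shared = rng.normal(size=(n_trials, n_time))
        data = rng.normal(size=(n_trials, n_sensors, n_time))
        data[:, 0] += shared
        data[:, 1] += shared

        def network(trials):
            """Functional network: trial-averaged absolute correlation between sensors."""
            mats = [np.abs(np.corrcoef(tr)) for tr in trials]
            return np.mean(mats, axis=0)

        boot_edges, boot_mean = [], []
        for _ in range(500):
            resample = data[rng.integers(0, n_trials, size=n_trials)]   # resample trials with replacement
            net = network(resample)
            boot_edges.append(net[0, 1])
            boot_mean.append(net[np.triu_indices(n_sensors, k=1)].mean())

        print("edge (0,1) 95% CI:", np.percentile(boot_edges, [2.5, 97.5]))
        print("mean edge strength 95% CI:", np.percentile(boot_mean, [2.5, 97.5]))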

  5. The GEOS-5 Neural Network Retrieval for AOD

    NASA Astrophysics Data System (ADS)

    Castellanos, P.; da Silva, A. M., Jr.

    2017-12-01

    One of the difficulties in data assimilation is the need for multi-sensor data merging that can account for temporal and spatial biases between satellite sensors. In the Goddard Earth Observing System Model Version 5 (GEOS-5) aerosol data assimilation system, a neural network retrieval (NNR) is used as a mapping between satellite observed top of the atmosphere (TOA) reflectance and AOD, which is the target variable that is assimilated in the model. By training observations of TOA reflectance from multiple sensors to map to a common AOD dataset (in this case AOD observed by the ground based Aerosol Robotic Network, AERONET), we are able to create a global, homogenous, satellite data record of AOD from MODIS observations on board the Terra and Aqua satellites. In this talk, I will present the implementation of and recent updates to the GEOS-5 NNR for MODIS collection 6 data.

  6. Potential implementation of reservoir computing models based on magnetic skyrmions

    NASA Astrophysics Data System (ADS)

    Bourianoff, George; Pinna, Daniele; Sitte, Matthias; Everschor-Sitte, Karin

    2018-05-01

    Reservoir Computing is a type of recursive neural network commonly used for recognizing and predicting spatio-temporal events relying on a complex hierarchy of nested feedback loops to generate a memory functionality. The Reservoir Computing paradigm does not require any knowledge of the reservoir topology or node weights for training purposes and can therefore utilize naturally existing networks formed by a wide variety of physical processes. Most efforts to implement reservoir computing prior to this have focused on utilizing memristor techniques to implement recursive neural networks. This paper examines the potential of magnetic skyrmion fabrics and the complex current patterns which form in them as an attractive physical instantiation for Reservoir Computing. We argue that their nonlinear dynamical interplay resulting from anisotropic magnetoresistance and spin-torque effects allows for an effective and energy efficient nonlinear processing of spatial temporal events with the aim of event recognition and prediction.
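
    For readers unfamiliar with the reservoir paradigm invoked above, the sketch below is a conventional software echo state network: a fixed random recurrent reservoir whose topology and weights are never trained, plus a ridge-regression readout. It illustrates only the train-the-readout idea; nothing in it is specific to skyrmion fabrics.

        import numpy as np

        rng = np.random.default_rng(7)
        n_res, n_steps = 200, 3000

        # Fixed random reservoir: only the linear readout is trained
        W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
        W = rng.normal(size=(n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

        u = np.sin(np.arange(n_steps) * 0.2) * np.cos(np.arange(n_steps) * 0.031)  # input signal
        target = np.roll(u, -5)                            # task: predict the input 5 steps ahead

        x = np.zeros(n_res)
        states = np.zeros((n_steps, n_res))
        for t in range(n_steps):
            x = np.tanh(W_in[:, 0] * u[t] + W @ x)         # nonlinear reservoir update
            states[t] = x

        # Ridge-regression readout on the collected reservoir states (discard a washout period)
        washout, lam = 100, 1e-6
        S, y = states[washout:-5], target[washout:-5]
        W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ y)
        pred = S @ W_out
        print("readout correlation with target:", np.corrcoef(pred, y)[0, 1])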

  7. The GEOS-5 Neural Network Retrieval (NNR) for AOD

    NASA Technical Reports Server (NTRS)

    Castellanos, Patricia; Da Silva, Arlindo

    2017-01-01

    One of the difficulties in data assimilation is the need for multi-sensor data merging that can account for temporal and spatial biases between satellite sensors. In the Goddard Earth Observing System Model Version 5 (GEOS-5) aerosol data assimilation system, a neural network retrieval (NNR) is used as a mapping between satellite observed top of the atmosphere (TOA) reflectance and AOD, which is the target variable that is assimilated in the model. By training observations of TOA reflectance from multiple sensors to map to a common AOD dataset (in this case AOD observed by the ground based Aerosol Robotic Network, AERONET), we are able to create a global, homogenous, satellite data record of AOD from MODIS observations on board the Terra and Aqua satellites. In this talk, I will present the implementation of and recent updates to the GEOS-5 NNR for MODIS collection 6 data.

  8. Neural network-based model reference adaptive control system.

    PubMed

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
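
    The sigma-modification-type updating law mentioned above has, in its generic form, the structure below (the notation is assumed for illustration: W-hat the network weight estimates, Gamma a positive-definite adaptation gain, phi(x) the regressor or basis vector, e the tracking error, and sigma > 0 the leakage constant); the exact law analyzed by Patino and Liu may differ in detail:

        \[
        \dot{\hat{W}} = \Gamma\,\bigl(\phi(x)\,e - \sigma\,\hat{W}\bigr).
        \]

    The leakage term keeps the weight estimates bounded in the presence of approximation error, which is consistent with the abstract's statement that the control error converges only to a neighborhood of zero whose size depends on the network's approximation error.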

  9. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.

  10. Ground-state coding in partially connected neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1989-01-01

    Patterns over (-1,0,1) define, by their outer products, partially connected neural networks, consisting of internally strongly connected, externally weakly connected subnetworks. The connectivity patterns may have highly organized structures, such as lattices and fractal trees or nests. Subpatterns over (-1,1) define the subcodes stored in the subnetworks, which agree in their common bits. It is first shown that the code words are locally stable states of the network, provided that each of the subcodes consists of mutually orthogonal words or of, at most, two words. Then it is shown that if each of the subcodes consists of two orthogonal words, the code words are the unique ground states (absolute minima) of the Hamiltonian associated with the network. The regions of attraction associated with the code words are shown to grow with the number of subnetworks sharing each of the neurons. Depending on the particular network architecture, the code sizes of partially connected networks can be vastly greater than those of fully connected ones and their error correction capabilities can be significantly greater than those of the disconnected subnetworks. The codes associated with lattice-structured and hierarchical networks are discussed in some detail.

  11. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule.

    PubMed

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-11-01

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure for the burst-based self-organized neural network (BSON) is much shorter than that for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from the bursting dynamics has high efficiency in information processing.
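
    For context, a symmetric spike-timing-dependent plasticity rule means the weight change depends only on the magnitude of the pre/post spike-time difference, not on its sign; one commonly used form (given here as an assumed illustration, not necessarily the exact kernel of Liu et al.) is

        \[
        \Delta w(\Delta t) = A_{+}\,e^{-|\Delta t|/\tau_{+}} - A_{-}\,e^{-|\Delta t|/\tau_{-}},
        \qquad A_{+} > A_{-} > 0,\quad \tau_{+} < \tau_{-},
        \]

    so that nearly coincident firing potentiates a synapse while loosely correlated firing depresses it, which is what drives strongly interacting neurons to cluster together.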

  12. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure for the burst-based self-organized neural network (BSON) is much shorter than that for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from the bursting dynamics has high efficiency in information processing.

  13. Goal-Directed Decision Making with Spiking Neurons.

    PubMed

    Friedrich, Johannes; Lengyel, Máté

    2016-02-03

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. Copyright © 2016 the authors 0270-6474/16/361529-18$15.00/0.

  14. Goal-Directed Decision Making with Spiking Neurons

    PubMed Central

    Lengyel, Máté

    2016-01-01

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. PMID:26843636

  15. Financial Time Series Prediction Using Spiking Neural Networks

    PubMed Central

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two “traditional” rate-encoded neural networks (a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  16. Qualitative analysis of Cohen-Grossberg neural networks with multiple delays

    NASA Astrophysics Data System (ADS)

    Ye, Hui; Michel, Anthony N.; Wang, Kaining

    1995-03-01

    It is well known that a class of artificial neural networks with symmetric interconnections and without transmission delays, known as Cohen-Grossberg neural networks, possesses global stability (i.e., all trajectories tend to some equilibrium). We demonstrate in the present paper that many of the qualitative properties of Cohen-Grossberg networks will not be affected by the introduction of sufficiently small delays. Specifically, we establish some bound conditions for the time delays under which a given Cohen-Grossberg network with multiple delays is globally stable and possesses the same asymptotically stable equilibria as the corresponding network without delays. An effective method of determining the asymptotic stability of an equilibrium of a Cohen-Grossberg network with multiple delays is also presented. The present results are motivated by some of the authors' earlier work [Phys. Rev. E 50, 4206 (1994)] and by some of the work of Marcus and Westervelt [Phys. Rev. A 39, 347 (1989)]. These works address qualitative analyses of Hopfield neural networks with one time delay. The present work generalizes these results to Cohen-Grossberg neural networks with multiple time delays. Hopfield neural networks constitute special cases of Cohen-Grossberg neural networks.
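
    For concreteness, a Cohen-Grossberg network with multiple transmission delays of the kind analyzed above is usually written in the standard textbook form below (the notation is assumed for illustration and need not match the exact system of Ye, Michel and Wang):

        \[
        \dot{x}_i(t) = -a_i\bigl(x_i(t)\bigr)\Bigl[\,b_i\bigl(x_i(t)\bigr)
        - \sum_{j=1}^{n} T_{ij}\, s_j\bigl(x_j(t - \tau_{ij})\bigr)\Bigr],
        \qquad i = 1, \dots, n,
        \]

    where the a_i > 0 are amplification functions, the b_i self-signal functions, the s_j neuron activations, T_{ij} the symmetric interconnections, and tau_{ij} >= 0 the transmission delays; Hopfield networks correspond, up to constants and external inputs, to constant a_i and affine b_i.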

  17. Comprehensive neural networks for guilty feelings in young adults.

    PubMed

    Nakagawa, Seishu; Takeuchi, Hikaru; Taki, Yasuyuki; Nouchi, Rui; Sekiguchi, Atsushi; Kotozaki, Yuka; Miyauchi, Carlos Makoto; Iizuka, Kunio; Yokoyama, Ryoichi; Shinada, Takamitsu; Yamamoto, Yuki; Hanawa, Sugiko; Araki, Tsuyoshi; Hashizume, Hiroshi; Kunitoki, Keiko; Sassa, Yuko; Kawashima, Ryuta

    2015-01-15

    Feelings of guilt are associated with widespread self and social cognitions, e.g., empathy, moral reasoning, and punishment. Neural correlates directly related to the degree of feelings of guilt have not been detected, probably due to the small numbers of subjects, whereas there are growing numbers of neuroimaging studies of feelings of guilt. We hypothesized that the neural networks for guilty feelings are widespread and include the insula, inferior parietal lobule (IPL), amygdala, subgenual cingulate cortex (SCC), and ventromedial prefrontal cortex (vmPFC), which are essential for cognitions of guilt. We investigated the association between regional gray matter density (rGMD) and feelings of guilt in 764 healthy young students (422 males, 342 females; 20.7 ± 1.8 years) using magnetic resonance imaging and the guilty feeling scale (GFS) for the younger generation, which comprises interpersonal situation (IPS) and rule-breaking situation (RBS) scores. Both the IPS and RBS scores were negatively related to the rGMD in the right posterior insula (PI). The IPS scores were negatively correlated with rGMD in the left anterior insula (AI), right IPL, and vmPFC using small volume correction. A post hoc analysis performed on the significant clusters identified through these analyses revealed that rGMD in the right IPL showed a significant negative association with the empathy quotient. These whole-brain-level findings delineate the widespread, comprehensive neural network regions for guilty feelings. Interestingly, the novel finding of this study is that the PI was implicated as a common region for feelings of guilt across both the IPS and the RBS. Additionally, the neural networks including the IPL were associated with empathy and with regions implicated in moral reasoning (AI and vmPFC) and punishment (AI). Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Dynamic Neural Networks Supporting Memory Retrieval

    PubMed Central

    St. Jacques, Peggy L.; Kragel, Philip A.; Rubin, David C.

    2011-01-01

    How do separate neural networks interact to support complex cognitive processes such as remembrance of the personal past? Autobiographical memory (AM) retrieval recruits a consistent pattern of activation that potentially comprises multiple neural networks. However, it is unclear how such large-scale neural networks interact and are modulated by properties of the memory retrieval process. In the present functional MRI (fMRI) study, we combined independent component analysis (ICA) and dynamic causal modeling (DCM) to understand the neural networks supporting AM retrieval. ICA revealed four task-related components consistent with the previous literature: 1) Medial Prefrontal Cortex (PFC) Network, associated with self-referential processes, 2) Medial Temporal Lobe (MTL) Network, associated with memory, 3) Frontoparietal Network, associated with strategic search, and 4) Cingulooperculum Network, associated with goal maintenance. DCM analysis revealed that the medial PFC network drove activation within the system, consistent with the importance of this network to AM retrieval. Additionally, memory accessibility and recollection uniquely altered connectivity between these neural networks. Recollection modulated the influence of the medial PFC on the MTL network during elaboration, suggesting that greater connectivity among subsystems of the default network supports greater re-experience. In contrast, memory accessibility modulated the influence of frontoparietal and MTL networks on the medial PFC network, suggesting that ease of retrieval involves greater fluency among the multiple networks contributing to AM. These results show the integration between neural networks supporting AM retrieval and the modulation of network connectivity by behavior. PMID:21550407

  19. Coherence resonance in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  20. Classification of Respiratory Sounds by Using An Artificial Neural Network

    DTIC Science & Technology

    2001-10-28

    CLASSIFICATION OF RESPIRATORY SOUNDS BY USING AN ARTIFICIAL NEURAL NETWORK M.C. Sezgin, Z. Dokur, T. Ölmez, M. Korürek Department of Electronics and...successfully classified by the GAL network. Keywords-Respiratory Sounds, Classification of Biomedical Signals, Artificial Neural Network. I. INTRODUCTION...process, feature extraction, and classification by the artificial neural network. At first, the RS signal obtained from a real-time measurement equipment is

  1. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    DTIC Science & Technology

    1987-10-01

    include Security Classification) Instrumentation for scientific computing in neural networks, information science, artificial intelligence, and...instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics...in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics Contract AFOSR 86-0282 Principal Investigator: Stephen

  2. A neural net approach to space vehicle guidance

    NASA Technical Reports Server (NTRS)

    Caglayan, Alper K.; Allen, Scott M.

    1990-01-01

    The space vehicle guidance problem is formulated using a neural network approach, and the appropriate neural net architecture for modeling optimum guidance trajectories is investigated. In particular, an investigation is made of the incorporation of prior knowledge about the characteristics of the optimal guidance solution into the neural network architecture. The online classification performance of the developed network is demonstrated using a synthesized network trained with a database of optimum guidance trajectories. Such a neural-network-based guidance approach can readily adapt to environment uncertainties such as those encountered by an AOTV during atmospheric maneuvers.

  3. Neural network and its application to CT imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W.

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  4. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in analog custom VLSI is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  5. Quantitative analysis of volatile organic compounds using ion mobility spectra and cascade correlation neural networks

    NASA Technical Reports Server (NTRS)

    Harrington, Peter DEB.; Zheng, Peng

    1995-01-01

    Ion Mobility Spectrometry (IMS) is a powerful technique for trace organic analysis in the gas phase. Quantitative measurements are difficult because IMS has a limited linear range. Factors that may affect the instrument response are pressure, temperature, and humidity. Nonlinear calibration methods, such as neural networks, may be ideally suited for IMS. Neural networks have the capability of modeling complex systems. Many neural networks suffer from long training times and overfitting. Cascade correlation neural networks train at very fast rates. They also build their own topology, that is, the number of layers and the number of units in each layer. By controlling the decay parameter during training, reproducible and general models may be obtained.

  6. Newly developed double neural network concept for reliable fast plasma position control

    NASA Astrophysics Data System (ADS)

    Jeon, Young-Mu; Na, Yong-Su; Kim, Myung-Rak; Hwang, Y. S.

    2001-01-01

    Neural networks are considered as parameter estimation tools in plasma control for next-generation tokamaks such as ITER. The neural network has been reported to be so accurate and fast for plasma equilibrium identification that it may be applied to the control of complex tokamak plasmas. For this application, the reliability of the conventional neural network needs to be improved. In this study, a new double neural network concept is developed to achieve this. The new idea has been applied to simple plasma position identification in the KSTAR tokamak as a feasibility test. The concept shows higher reliability and fault tolerance even under severe fault conditions, which may make neural networks reliably and widely applicable to plasma control in future tokamaks.

  7. Rule extraction from minimal neural networks for credit card screening.

    PubMed

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
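
    The pruning step lends itself to a compact illustration. The Python sketch below trains a one-hidden-unit network on synthetic data and removes input connections whose absence barely changes test accuracy; the magnitude-ordered search and the 0.01 tolerance are stand-ins for the authors' procedure, not a reproduction of it.

```python
# Minimal sketch: train the smallest possible feedforward network (one hidden
# unit), then prune input-to-hidden connections that contribute little, as a
# precursor to extracting concise rules. Data set and tolerance are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(1,), activation="tanh",
                    max_iter=2000, random_state=0).fit(X_tr, y_tr)
base_acc = net.score(X_te, y_te)

w = net.coefs_[0][:, 0]                 # one weight per input feature (a view)
for idx in np.argsort(np.abs(w)):       # try pruning smallest weights first
    saved = w[idx]
    w[idx] = 0.0
    if net.score(X_te, y_te) < base_acc - 0.01:
        w[idx] = saved                  # removing this input hurts; keep it
print("kept inputs:", np.flatnonzero(w).tolist(),
      "accuracy:", round(net.score(X_te, y_te), 3))
```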

  8. An improved wavelet neural network medical image segmentation algorithm with combined maximum entropy

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang

    2018-05-01

    In order to solve the problem of medical image segmentation, a wavelet neural network segmentation algorithm based on a combined maximum entropy criterion is proposed. Firstly, a bee colony algorithm is used to optimize the parameters of the wavelet neural network (network structure, initial weights, threshold values, and so on), so that training converges quickly to higher precision and avoids falling into local extrema; then the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, achieving automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm effectively reduces sample training time and improves convergence precision, and its segmentation results are more accurate and effective than those of the traditional BP neural network (back-propagation neural network: a multilayer feedforward neural network trained with the error back-propagation algorithm).

  9. Knowledge extraction from evolving spiking neural networks with rank order population coding.

    PubMed

    Soltic, Snjezana; Kasabov, Nikola

    2010-12-01

    This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionately small amount of research is centered on the issue of knowledge extraction from spiking neural networks, which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.

  10. Effects of Transcranial Direct Current Stimulation on Neural Networks in Young and Older Adults

    PubMed

    Martin, Andrew K; Meinzer, Marcus; Lindenberg, Robert; Sieg, Mira M; Nachtigall, Laura; Flöel, Agnes

    2017-11-01

    Transcranial direct current stimulation (tDCS) may be a viable tool to improve motor and cognitive function in advanced age. However, although a number of studies have demonstrated improved cognitive performance in older adults, other studies have failed to show restorative effects, and evidence on the neural underpinnings of a beneficial stimulation response in both age groups is lacking. In the current study, tDCS was administered during simultaneous fMRI in 42 healthy young and older participants. Semantic word generation and motor speech baseline tasks were used to investigate behavioral and neural effects of uni- and bihemispheric motor cortex tDCS in a three-way, crossover, sham tDCS controlled design. Independent components analysis assessed differences in task-related activity between the two age groups and tDCS effects at the network level. We also explored whether laterality of language network organization was affected by tDCS. Behaviorally, both active tDCS conditions significantly improved semantic word retrieval performance in young and older adults and were comparable between groups and stimulation conditions. Network-level tDCS effects were identified in the ventral and dorsal anterior cingulate networks in the combined sample during semantic fluency and motor speech tasks. In addition, a shift toward enhanced left laterality was identified in the older adults for both active stimulation conditions. Thus, tDCS results in common network-level modulations and behavioral improvements for both age groups, with an additional effect of increasing left laterality in older adults.

  11. Genes and Experience in the Development of Executive Attention and Effortful Control

    ERIC Educational Resources Information Center

    Rothbart, Mary K.; Posner, Michael I.

    2005-01-01

    The executive attention network is involved in regulating emotions and cognitions, forming a neural basis for temperamental self-regulation. New brain imaging and molecular genetics methods can enhance our understanding of common mechanisms of self-regulation and individual differences in their expression.

  12. Adaptive neural network motion control of manipulators with experimental evaluations.

    PubMed

    Puga-Guzmán, S; Moreno-Valenzuela, J; Santibáñez, V

    2014-01-01

    A nonlinear proportional-derivative controller plus adaptive neuronal network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out in two different mechanical systems: a horizontal two degrees-of-freedom robot and a vertical one degree-of-freedom arm which is affected by the gravitational force. In each one of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller.

  13. Adaptive Neural Network Motion Control of Manipulators with Experimental Evaluations

    PubMed Central

    Puga-Guzmán, S.; Moreno-Valenzuela, J.; Santibáñez, V.

    2014-01-01

    A nonlinear proportional-derivative controller plus adaptive neuronal network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out in two different mechanical systems: a horizontal two degrees-of-freedom robot and a vertical one degree-of-freedom arm which is affected by the gravitational force. In each one of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller. PMID:24574910

  14. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PRelu activation function is studied in order to improve retrieval accuracy. A deep convolutional neural network can not only simulate the way the human brain receives and transmits information, but also contains convolution operations, which makes it very suitable for processing images. Using a deep convolutional neural network is better than directly extracting visual image features for image retrieval. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PRelu activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
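
    The two ingredients named in the title, an L1 weight penalty and PReLU activations, combine in a few lines of PyTorch. The sketch below is illustrative only: the architecture, penalty strength, and random data are placeholders rather than the network studied in the paper.

```python
# Minimal sketch of a CNN that uses PReLU activations and adds an L1 penalty on
# the convolutional/linear weights to the loss to discourage over-fitting.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # assumes 32x32 input

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def l1_penalty(model):
    # Sum of |w| over weight matrices only (biases and PReLU slopes excluded).
    return sum(p.abs().sum() for p in model.parameters() if p.dim() > 1)

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
lam = 1e-5                                   # assumed L1 strength

x = torch.randn(8, 3, 32, 32)                # stand-in batch of images
y = torch.randint(0, 10, (8,))
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y) + lam * l1_penalty(model)
    loss.backward()
    opt.step()
print("final batch loss:", float(loss))
```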

  15. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  16. Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min

    2015-12-01

    In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability, and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve the performance of the BP neural network. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights, thresholds and learning rates of the BP neural network. This new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks and improves network generalization ability, but also accelerates the convergence speed of the network, avoids becoming trapped in local minima, and enhances network adaptation and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sale of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in predictive accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.

  17. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

    In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. A neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling, and rotation. Features for the neural network to detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match them against unknown input features when recognizing the position of the robot end-effector. Since a minimal number of samples are used for the different directions of the robot end-effector, a neural network with generalization capability is required for unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot end-effector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control movements of the robot end-effector. Combining the two neural networks for recognizing the robot end-effector and estimating the motion with the preprocessing stage, the whole system keeps track of the robot end-effector effectively.

  18. Chaotic simulated annealing by a neural network with a variable delay: design and application.

    PubMed

    Chen, Shyan-Shiou

    2011-10-01

    In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem. © 2011 IEEE
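
    For readers unfamiliar with transiently chaotic dynamics, the single-neuron sketch below shows the basic mechanism such models share: a decaying self-feedback term first drives irregular wandering and then lets the unit settle. It follows the general Chen-Aihara style rather than the paper's variably delayed model, and every parameter value here is illustrative.

```python
# Single transiently chaotic neuron: while the self-feedback strength z is
# large the output fluctuates irregularly; as z is annealed away the map
# becomes contracting and the output converges.
import numpy as np

def sigmoid(u, eps=0.004):
    return 1.0 / (1.0 + np.exp(-u / eps))

k, alpha, I0, beta = 0.9, 0.015, 0.65, 0.001   # damping, input gain, bias, z decay rate
y, z = 0.0, 0.08                                # internal state, self-feedback strength
trace = []
for t in range(3000):
    x = sigmoid(y)
    y = k * y + alpha * (0.5 - x) - z * (x - I0)   # constant external drive of 0.5
    z *= 1.0 - beta                                # anneal the chaotic self-feedback away
    trace.append(x)

print("early outputs:", np.round(trace[:6], 3))    # irregular, wandering values
print("late outputs :", np.round(trace[-3:], 3))   # settled once z has decayed
```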

  19. Modeling and control of magnetorheological fluid dampers using neural networks

    NASA Astrophysics Data System (ADS)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  20. Deep neural networks for direct, featureless learning through observation: The case of two-dimensional spin models

    NASA Astrophysics Data System (ADS)

    Mills, Kyle; Tamblyn, Isaac

    2018-03-01

    We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4 ×4 Ising model. Using its success at this task, we motivate the study of the larger 8 ×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
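
    The training setup described above is easy to reproduce at toy scale. The sketch below labels random 4x4 spin configurations with their periodic nearest-neighbor energy and fits a small convolutional network to them; the network size and training schedule are placeholders, not the architecture used in the paper.

```python
# "Learning from observation" toy: random Ising configurations, exact
# nearest-neighbor energies as labels, and a tiny CNN regressor.
import numpy as np
import torch
import torch.nn as nn

def ising_energy(s):
    # E = -sum over nearest-neighbor pairs s_i s_j, periodic boundary conditions.
    return -(s * np.roll(s, 1, axis=0) + s * np.roll(s, 1, axis=1)).sum()

rng = np.random.default_rng(0)
spins = rng.choice([-1.0, 1.0], size=(4096, 4, 4))
energies = np.array([ising_energy(s) for s in spins], dtype=np.float32)

X = torch.tensor(spins, dtype=torch.float32).unsqueeze(1)  # (N, 1, 4, 4)
y = torch.tensor(energies).unsqueeze(1)

model = nn.Sequential(
    nn.Conv2d(1, 8, 2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 2), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 4 * 4, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```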

  1. Tensor Basis Neural Network v. 1.0 (beta)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, Julia; Templeton, Jeremy

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.
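
    A hedged sketch of the tensor-basis idea is given below: a fully connected network maps scalar invariants to coefficients, and the prediction is a linear combination of user-supplied basis tensors, so the output inherits the invariance of the basis. The layer sizes and the numbers of invariants and basis tensors are placeholders, not the released package's interface.

```python
# Tensor-basis architecture sketch: prediction = sum_n g_n(invariants) * T_n.
import torch
import torch.nn as nn

class TensorBasisNN(nn.Module):
    def __init__(self, n_invariants=5, n_basis=10, width=30):
        super().__init__()
        self.coeff_net = nn.Sequential(
            nn.Linear(n_invariants, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_basis),
        )

    def forward(self, invariants, basis):
        # invariants: (batch, n_invariants); basis: (batch, n_basis, 3, 3)
        g = self.coeff_net(invariants)                  # coefficients g_n
        return torch.einsum("bn,bnij->bij", g, basis)   # (batch, 3, 3) prediction

model = TensorBasisNN()
inv = torch.randn(4, 5)          # stand-in scalar invariants
T = torch.randn(4, 10, 3, 3)     # stand-in basis tensors
print(model(inv, T).shape)       # torch.Size([4, 3, 3])
```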

  2. A renaissance of neural networks in drug discovery.

    PubMed

    Baskin, Igor I; Winkler, David; Tetko, Igor V

    2016-08-01

    Neural networks are becoming a very popular method for solving machine learning and artificial intelligence problems. The variety of neural network types and their application to drug discovery requires expert knowledge to choose the most appropriate approach. In this review, the authors discuss traditional and newly emerging neural network approaches to drug discovery. Their focus is on backpropagation neural networks and their variants, self-organizing maps and associated methods, and a relatively new technique, deep learning. The most important technical issues are discussed including overfitting and its prevention through regularization, ensemble and multitask modeling, model interpretation, and estimation of applicability domain. Different aspects of using neural networks in drug discovery are considered: building structure-activity models with respect to various targets; predicting drug selectivity, toxicity profiles, ADMET and physicochemical properties; characteristics of drug-delivery systems and virtual screening. Neural networks continue to grow in importance for drug discovery. Recent developments in deep learning suggest that further improvements may be gained in the analysis of large chemical data sets. It is anticipated that neural networks will be more widely used in drug discovery in the future, and applied in non-traditional areas such as drug delivery systems, biologically compatible materials, and regenerative medicine.

  3. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    NASA Astrophysics Data System (ADS)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.
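
    The ensemble half of the scheme can be sketched serially. The code below boosts 15 small BP networks (scikit-learn MLPClassifier) with a hand-rolled AdaBoost loop that resamples according to the sample weights, since MLPClassifier does not accept per-sample weights; the MapReduce parallelization and the image-feature extraction from the paper are omitted, and the data are synthetic.

```python
# AdaBoost over small backpropagation networks as weak classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
n = len(y)
w = np.full(n, 1.0 / n)          # sample weights
learners, alphas = [], []
rng = np.random.default_rng(0)

for m in range(15):              # 15 BP weak classifiers, as in the abstract
    idx = rng.choice(n, size=n, replace=True, p=w)     # weighted resample
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300,
                        random_state=m).fit(X[idx], y[idx])
    pred = clf.predict(X)
    err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)              # learner weight
    w *= np.exp(alpha * np.where(pred != y, 1.0, -1.0))
    w /= w.sum()
    learners.append(clf)
    alphas.append(alpha)

# Weighted vote of the weak classifiers (binary labels 0/1 mapped to -1/+1).
votes = sum(a * np.where(c.predict(X) == 1, 1.0, -1.0)
            for a, c in zip(alphas, learners))
print("ensemble training accuracy:", np.mean((votes > 0).astype(int) == y))
```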

  4. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.

  5. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    PubMed Central

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520

  6. Deinterlacing using modular neural network

    NASA Astrophysics Data System (ADS)

    Woo, Dong H.; Eom, Il K.; Kim, Yoo S.

    2004-05-01

    Deinterlacing is the process of converting an interlaced scan to a progressive one. While many previous algorithms based on weighted sums cause blurring in edge regions, deinterlacing using a neural network can reduce this blurring by recovering the high-frequency components through learning, and is found to be robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. In this way, each neural network learns only similar patterns, which makes learning more effective and estimation more accurate. However, even within each region there are various patterns, such as long edges and texture in the edge region. To solve this problem, a modular neural network is proposed in which two modules are combined at the output node: one for low-frequency features of the local area of the input image, and the other for high-frequency features. With this structure, each module can learn different patterns while compensating for the drawbacks of its counterpart, so it can adapt effectively to the various patterns within each region. In simulation, the proposed algorithm shows better performance than conventional deinterlacing methods and a single-neural-network method.

  7. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    PubMed

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles from quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights, and use them as the fundamental information processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as activation functions in the hidden layer of the network, and the Grover search algorithm iteratively singles out the optimal parameter setting, making very efficient neural network learning possible. The quantum neurons and weights, together with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and promising future applications. Simulations are carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.

  9. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    PubMed Central

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, forecasting financial market dynamics has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing the model with the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices. PMID:27293423
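
    An Elman-style recurrent predictor of the kind compared in the paper can be set up in a few lines. The PyTorch sketch below performs one-step-ahead prediction on a synthetic random-walk series; the stochastic time-effective weighting that distinguishes the proposed model is not reproduced here.

```python
# One-step-ahead prediction with an Elman-style recurrent network (nn.RNN).
import torch
import torch.nn as nn

torch.manual_seed(0)
series = torch.cumsum(torch.randn(500), dim=0)        # stand-in price series
series = (series - series.mean()) / series.std()

win = 20                                              # lookback window
X = torch.stack([series[i:i + win] for i in range(len(series) - win - 1)])
y = torch.stack([series[i + win] for i in range(len(series) - win - 1)])
X = X.unsqueeze(-1)                                   # (N, win, 1)

class ElmanPredictor(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(1, hidden, batch_first=True)  # tanh Elman cell
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h[:, -1]).squeeze(-1)           # predict the next value

model = ElmanPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("training MSE:", float(loss))
```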

  10. Periodicity and stability for variable-time impulsive neural networks.

    PubMed

    Li, Hongfei; Li, Chuandong; Huang, Tingwen

    2017-10-01

    The paper considers a general neural network model with variable-time impulses. Under several newly proposed assumptions, it is shown that each solution of the system intersects each discontinuity surface exactly once. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulses can be reduced to the corresponding neural networks with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as the comparison system of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solutions of variable-time impulsive neural networks, and to illustrate that variable-time impulsive neural networks and the fixed-time ones share the same stability properties. The new criteria are established by applying Schaefer's fixed point theorem combined with the use of inequality techniques. Finally, a numerical example is presented to show the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
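
    The linear case of the equivalence described here can be checked numerically on a pure autoregressive example: fitting an AR model by least squares and training a single linear neuron on the same lagged inputs recover essentially the same coefficients. The sketch below does exactly that on a synthetic AR(2) series; the polynomial-activation extension to nonlinear ARMA models is not shown.

```python
# AR(2) identification two ways: least squares vs. a single linear "neuron".
import numpy as np

rng = np.random.default_rng(0)
a1, a2 = 0.6, -0.3                       # true AR coefficients
n = 5000
y = np.zeros(n)
for t in range(2, n):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + 0.1 * rng.standard_normal()

# Lagged design matrix: predict y[t] from y[t-1] and y[t-2].
X = np.column_stack([y[1:-1], y[:-2]])
target = y[2:]

# (a) Least-squares AR estimate.
coef_ls, *_ = np.linalg.lstsq(X, target, rcond=None)

# (b) One linear neuron trained by batch gradient descent on squared error.
w = np.zeros(2)
lr = 1.0                                 # step size tuned to this data scale
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - target) / len(target)
    w -= lr * grad

print("least squares:", np.round(coef_ls, 3))
print("linear neuron:", np.round(w, 3))
```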

  12. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    PubMed

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, forecasting financial market dynamics has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing the model with the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices.

  13. A novel neural-wavelet approach for process diagnostics and complex system modeling

    NASA Astrophysics Data System (ADS)

    Gao, Rong

    Neural networks have been effective in several engineering applications because of their learning abilities and robustness. However, certain shortcomings, such as slow convergence and local minima, are always associated with neural networks, especially neural networks applied to highly nonlinear and non-stationary problems. These problems can be effectively alleviated by integrating a new powerful tool, wavelets, into conventional neural networks. The multi-resolution analysis and feature localization capabilities of the wavelet transform offer neural networks new possibilities for learning. The neural wavelet network approach developed in this thesis enjoys a fast convergence rate with little possibility of being caught at a local minimum. It combines the localization properties of wavelets with the learning abilities of neural networks. Two different testbeds are used for testing the efficiency of the new approach. The first is magnetic flowmeter-based process diagnostics: here we extend previous work, which has demonstrated that wavelet groups contain process information, to more general process diagnostics. A loop at the Applied Intelligent Systems Lab (AISL) is used for collecting and analyzing data through the neural-wavelet approach. The research is important for thermal-hydraulic processes in nuclear and other engineering fields. The neural-wavelet approach developed is also tested with data from the electric power grid. More specifically, the neural-wavelet approach is used for performing short-term and mid-term prediction of power load demand. In addition, the feasibility of determining the type of load using the proposed neural wavelet approach is also examined. The notion of the cross-scale product has been developed as an expedient yet reliable discriminator of loads. Theoretical issues involved in the integration of wavelets and neural networks are discussed and future work outlined.

  14. Dynamic Information Encoding With Dynamic Synapses in Neural Adaptation

    PubMed Central

    Li, Luozheng; Mi, Yuanyuan; Zhang, Wenhao; Wang, Da-Hui; Wu, Si

    2018-01-01

    Adaptation refers to the general phenomenon that the neural system dynamically adjusts its response property according to the statistics of external inputs. In response to an invariant stimulation, neuronal firing rates first increase dramatically and then decrease gradually to a low level close to the background activity. This prompts a question: during the adaptation, how does the neural system encode the repeated stimulation with attenuated firing rates? It has been suggested that the neural system may employ a dynamical encoding strategy during the adaptation: the stimulus information is mainly encoded by the strong, independent spiking of neurons at the early stage of the adaptation, while the weak but synchronized activity of neurons encodes the stimulus information at the later stage of the adaptation. A previous study demonstrated that short-term facilitation (STF) of electrical synapses, which increases the synchronization between neurons, can provide a mechanism to realize dynamical encoding. In the present study, we further explore whether short-term plasticity (STP) of chemical synapses, an interaction form more common than electrical synapses in the cortex, can support dynamical encoding. We build a large-size network with chemical synapses between neurons. Notably, facilitation of chemical synapses only mildly enhances pair-wise correlations between neurons, but its effect on increasing synchronization of the network can be significant, and hence it can serve as a mechanism to convey the stimulus information. To read out the stimulus information, we consider a downstream neuron that receives balanced excitatory and inhibitory inputs from the network, so that the downstream neuron only responds to synchronized firings of the network. Therefore, the response of the downstream neuron indicates the presence of the repeated stimulation. Overall, our study demonstrates that STP of chemical synapses can serve as a mechanism to realize dynamical neural encoding. We believe that our study sheds light on the mechanism underlying efficient neural information processing via adaptation. PMID:29636675
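
    A common phenomenological description of the short-term plasticity invoked here is the Tsodyks-Markram model, sketched below for a single facilitating synapse driven by a regular spike train. It is meant only to illustrate the facilitation dynamics the abstract appeals to; the parameter values are illustrative, and the paper's full network model is not reproduced.

```python
# Tsodyks-Markram short-term plasticity for one synapse: between presynaptic
# spikes u (release probability) decays toward U and x (available resources)
# recovers toward 1; at each spike u facilitates and x is depleted.
import numpy as np

def stp_response(spike_times, U=0.2, tau_f=0.6, tau_d=0.15):
    """Return per-spike synaptic efficacies u*x for a spike train (times in s)."""
    u, x = U, 1.0
    last = None
    efficacies = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decay
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)  # resource recovery
        u = u + U * (1.0 - u)                          # spike-triggered facilitation
        eff = u * x                                    # efficacy of this spike
        x = x * (1.0 - u)                              # resource depletion
        efficacies.append(eff)
        last = t
    return np.array(efficacies)

train = np.arange(0.0, 1.0, 0.05)        # 20 Hz regular presynaptic train
print(np.round(stp_response(train), 3))  # efficacy first grows, then levels off
```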

  15. Active Control of Wind-Tunnel Model Aeroelastic Response Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.

    2000-01-01

    Under a joint research and development effort conducted by the National Aeronautics and Space Administration and The Boeing Company (formerly McDonnell Douglas), three neural-network based control systems were developed and tested. The control systems were experimentally evaluated using a transonic wind-tunnel model in the Langley Transonic Dynamics Tunnel. One system used a neural network to schedule flutter suppression control laws, another employed a neural network in a predictive control scheme, and the third employed a neural network in an inverse model control scheme. All three of these control schemes successfully suppressed flutter to or near the limits of the testing apparatus, and represent the first experimental applications of neural networks to flutter suppression. This paper will summarize the findings of this project.

  16. Modeling Aircraft Wing Loads from Flight Data Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Allen, Michael J.; Dibley, Ryan P.

    2003-01-01

    Neural networks were used to model wing bending-moment loads, torsion loads, and control surface hinge-moments of the Active Aeroelastic Wing (AAW) aircraft. Accurate loads models are required for the development of control laws designed to increase roll performance through wing twist while not exceeding load limits. Inputs to the model include aircraft rates, accelerations, and control surface positions. Neural networks were chosen to model aircraft loads because they can account for uncharacterized nonlinear effects while retaining the capability to generalize. The accuracy of the neural network models was improved by first developing linear loads models to use as starting points for network training. Neural networks were then trained with flight data for rolls, loaded reversals, wind-up-turns, and individual control surface doublets for load excitation. Generalization was improved by using gain weighting and early stopping. Results are presented for neural network loads models of four wing loads and four control surface hinge moments at Mach 0.90 and an altitude of 15,000 ft. An average model prediction error reduction of 18.6 percent was calculated for the neural network models when compared to the linear models. This paper documents the input data conditioning, input parameter selection, structure, training, and validation of the neural network models.

  17. Exponential H(infinity) synchronization of general discrete-time chaotic neural networks with or without time delays.

    PubMed

    Qi, Donglian; Liu, Meiqin; Qiu, Meikang; Zhang, Senlin

    2010-08-01

    This brief studies exponential H(infinity) synchronization of a class of general discrete-time chaotic neural networks with external disturbance. On the basis of the drive-response concept and H(infinity) control theory, and using Lyapunov-Krasovskii (or Lyapunov) functionals, state feedback controllers are established to not only guarantee exponentially stable synchronization between two general chaotic neural networks with or without time delays, but also reduce the effect of external disturbance on the synchronization error to a minimal H(infinity) norm constraint. The proposed controllers can be obtained by solving the convex optimization problems represented by linear matrix inequalities. Most discrete-time chaotic systems with or without time delays, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory networks, recurrent multilayer perceptrons, Cohen-Grossberg neural networks, Chua's circuits, etc., can be transformed into this general chaotic neural network, so that the H(infinity) synchronization controller can be designed in a unified way. Finally, some illustrative examples with their simulations have been utilized to demonstrate the effectiveness of the proposed methods.

  18. Automated implementation of rule-based expert systems with neural networks for time-critical applications

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Huang, Song; Govind, Girish

    1991-01-01

    In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have seen a revival, and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule-base. This significantly simplifies the translation process to neural network expert systems from conventional rule-based systems. Results comparing the performance of the proposed approach based on neural networks vs. the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.

  19. Predicting Slag Generation in Sub-Scale Test Motors Using a Neural Network

    NASA Technical Reports Server (NTRS)

    Wiesenberg, Brent

    1999-01-01

    Generation of slag (aluminum oxide) is an important issue for the Reusable Solid Rocket Motor (RSRM). Thiokol performed testing to quantify the relationship between raw material variations and slag generation in solid propellants by testing sub-scale motors cast with propellant containing various combinations of aluminum fuel and ammonium perchlorate (AP) oxidizer particle sizes. The test data were analyzed using statistical methods and an artificial neural network. This paper primarily addresses the neural network results with some comparisons to the statistical results. The neural network showed that the particle sizes of both the aluminum and unground AP have a measurable effect on slag generation. The neural network analysis showed that aluminum particle size is the dominant driver in slag generation, about 40% more influential than AP. The network predictions of the amount of slag produced during firing of sub-scale motors were 16% better than the predictions of a statistically derived empirical equation. Another neural network successfully characterized the slag generated during full-scale motor tests. The success is attributable to the ability of neural networks to characterize multiple complex factors including interactions that affect slag generation.

  20. Application of Two-Dimensional AWE Algorithm in Training Multi-Dimensional Neural Network Model

    DTIC Science & Technology

    2003-07-01

    Artificial neural networks (ANNs) have emerged as a powerful technique for general modeling in engineering. Training a neural network model is the key of the hybrid scheme (Table 3.1); the training process of the software "Neuralmodeler" is shown in Fig. 3.2. The variables in the closed-form model include frequency, and the coefficients and derivatives of the matrix have to be generated with the method of moments (MoM).

  1. Center for Neural Engineering at Tennessee State University, ASSERT Annual Progress Report.

    DTIC Science & Technology

    1995-07-01

    neural networks. Their research topics are: (1) developing frequency-dependent oscillatory neural networks; (2) long-term potentiation learning rules ... as applied to spatial navigation; (3) designing and building a servo joint robotic arm; and (4) neural network based prosthesis control. One graduate student

  2. A Feasibility Study of Synthesizing Substructures Modeled with Computational Neural Networks

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Housner, Jerrold M.; Szewczyk, Z. Peter

    1998-01-01

    This paper investigates the feasibility of synthesizing substructures modeled with computational neural networks. Substructures are modeled individually with computational neural networks and the response of the assembled structure is predicted by synthesizing the neural networks. A superposition approach is applied to synthesize models for statically determinate substructures while an interface displacement collocation approach is used to synthesize statically indeterminate substructure models. Beam and plate substructures along with components of a complicated Next Generation Space Telescope (NGST) model are used in this feasibility study. In this paper, the limitations and difficulties of synthesizing substructures modeled with neural networks are also discussed.

  3. Optical-Correlator Neural Network Based On Neocognitron

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  4. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involves the steps of reading into a memory training data, determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs and then providing signals characteristic of an industrial process and comparing the neural network output to the industrial process signals to evaluate the operating state of the industrial process.

  5. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involve the steps of reading training data into a memory and determining neural network weighting values until target outputs close to the neural network output are achieved. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  6. Neural networks for function approximation in nonlinear control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.; Stengel, Robert F.

    1990-01-01

    Two neural network architectures are compared with a classical spline interpolation technique for the approximation of functions useful in a nonlinear control system. A standard back-propagation feedforward neural network and a cerebellar model articulation controller (CMAC) neural network are presented, and their results are compared with a B-spline interpolation procedure that is updated using recursive least-squares parameter identification. Each method is able to accurately represent a one-dimensional test function. Tradeoffs between size requirements, speed of operation, and speed of learning indicate that neural networks may be practical for identification and adaptation in a nonlinear control environment.
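
    A comparison in the same spirit can be run quickly with off-the-shelf tools: the sketch below fits a feedforward network and a smoothing B-spline to the same noisy one-dimensional test function and reports their errors. The test function and model sizes are arbitrary stand-ins, and the CMAC variant from the paper is not included.

```python
# Feedforward network vs. smoothing B-spline on a 1-D test function.
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 400))
y = np.sin(2 * x) * np.exp(-0.2 * x**2) + 0.05 * rng.standard_normal(x.size)

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(x.reshape(-1, 1), y)
spline = UnivariateSpline(x, y, s=len(x) * 0.05**2)   # smoothing spline

x_test = np.linspace(-3, 3, 200)
truth = np.sin(2 * x_test) * np.exp(-0.2 * x_test**2)
mlp_err = np.sqrt(np.mean((mlp.predict(x_test.reshape(-1, 1)) - truth) ** 2))
spl_err = np.sqrt(np.mean((spline(x_test) - truth) ** 2))
print(f"MLP RMSE: {mlp_err:.4f}   spline RMSE: {spl_err:.4f}")
```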

  7. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1997-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  8. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1998-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  9. A common neural hub resolves syntactic and non-syntactic conflict through cooperation with task-specific networks

    PubMed Central

    Hsu, Nina S.; Jaeggi, Susanne M.; Novick, Jared M.

    2017-01-01

    Regions within the left inferior frontal gyrus (LIFG) have simultaneously been implicated in syntactic processing and cognitive control. Accounts attempting to unify LIFG’s function hypothesize that, during comprehension, cognitive control resolves conflict between incompatible representations of sentence meaning. Some studies demonstrate co-localized activity within LIFG for syntactic and non-syntactic conflict resolution, suggesting domain-generality, but others show non-overlapping activity, suggesting domain-specific cognitive control and/or regions that respond uniquely to syntax. We propose however that examining exclusive activation sites for certain contrasts creates a false dichotomy: both domain-general and domain-specific neural machinery must coordinate to facilitate conflict resolution across domains. Here, subjects completed four diverse tasks involving conflict —one syntactic, three non-syntactic— while undergoing fMRI. Though LIFG consistently activated within individuals during conflict processing, functional connectivity analyses revealed task-specific coordination with distinct brain networks. Thus, LIFG may function as a conflict-resolution “hub” that cooperates with specialized neural systems according to information content. PMID:28110105

  10. Configurable analog-digital conversion using the neural engineering framework

    PubMed Central

    Mayr, Christian G.; Partzsch, Johannes; Noack, Marko; Schüffny, Rene

    2014-01-01

    Efficient Analog-Digital Converters (ADC) are one of the mainstays of mixed-signal integrated circuit design. Besides the conventional ADCs used in mainstream ICs, there have been various attempts in the past to utilize neuromorphic networks to accomplish an efficient crossing between analog and digital domains, i.e., to build neurally inspired ADCs. Generally, these have suffered from the same problems as conventional ADCs, that is they require high-precision, handcrafted analog circuits and are thus not technology portable. In this paper, we present an ADC based on the Neural Engineering Framework (NEF). It carries out a large fraction of the overall ADC process in the digital domain, i.e., it is easily portable across technologies. The analog-digital conversion takes full advantage of the high degree of parallelism inherent in neuromorphic networks, making for a very scalable ADC. In addition, it has a number of features not commonly found in conventional ADCs, such as a runtime reconfigurability of the ADC sampling rate, resolution and transfer characteristic. PMID:25100933
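
    The NEF principle behind such a converter can be shown compactly: a population of rate neurons with random encoders and gains represents an analog value, linear decoders found by regularized least squares recover it, and the recovered value is quantized to a digital code. The rectified-linear neurons and all parameters below are simplifications chosen for brevity, not the circuit described in the paper.

```python
# NEF-style encode/decode of a scalar, followed by simple quantization.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 64
encoders = rng.choice([-1.0, 1.0], n_neurons)     # preferred direction (1-D)
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

def rates(x):
    """Population firing rates for scalar input(s) x (rectified linear)."""
    x = np.atleast_1d(x)
    return np.maximum(0.0, gains * (x[:, None] * encoders) + biases)

# Solve for decoders with ridge-regularized least squares over sample inputs.
xs = np.linspace(-1, 1, 200)
A = rates(xs)                                     # (samples, neurons)
reg = 0.01 * A.max() ** 2 * np.eye(n_neurons)
decoders = np.linalg.solve(A.T @ A + reg, A.T @ xs)

def adc(x, bits=8):
    """Decode the represented value, then quantize it to a digital code."""
    x_hat = float((rates(x) @ decoders)[0])
    levels = 2 ** bits
    code = round((x_hat + 1) / 2 * (levels - 1))
    return int(min(max(code, 0), levels - 1))

print(adc(0.33), adc(-0.75))   # digital codes for two analog inputs
```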

  11. An insect-inspired model for visual binding I: learning objects and their characteristics.

    PubMed

    Northcutt, Brandon D; Dyhr, Jonathan P; Higgins, Charles M

    2017-04-01

    Visual binding is the process of associating the responses of visual interneurons in different visual submodalities all of which are responding to the same object in the visual field. Recently identified neuropils in the insect brain termed optic glomeruli reside just downstream of the optic lobes and have an internal organization that could support visual binding. Working from anatomical similarities between optic and olfactory glomeruli, we have developed a model of visual binding based on common temporal fluctuations among signals of independent visual submodalities. Here we describe and demonstrate a neural network model capable both of refining selectivity of visual information in a given visual submodality, and of associating visual signals produced by different objects in the visual field by developing inhibitory neural synaptic weights representing the visual scene. We also show that this model is consistent with initial physiological data from optic glomeruli. Further, we discuss how this neural network model may be implemented in optic glomeruli at a neuronal level.

  12. Super-linear Precision in Simple Neural Population Codes

    NASA Astrophysics Data System (ADS)

    Schwab, David; Fiete, Ila

    2015-03-01

    A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
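    For reference, the standard expressions behind the abstract's argument, for N independent Poisson-spiking neurons with tuning curves f_i(s) observed over a window T, are the population Fisher information and the Cramer-Rao bound it places on any unbiased estimator (these are textbook results, not the paper's derivation):

```latex
% Fisher information of N independent Poisson-spiking neurons with tuning curves f_i(s),
% observed over a time window T, and the Cramer-Rao bound on any unbiased estimator \hat{s}.
I(s) \;=\; T \sum_{i=1}^{N} \frac{\left[f_i'(s)\right]^{2}}{f_i(s)},
\qquad
\mathbb{E}\!\left[(\hat{s}-s)^{2}\right] \;\ge\; \frac{1}{I(s)} .
```

    When firing is sparse, an estimator may be far from attaining this bound, which is why minimizing the mean-squared error directly, as the abstract describes, can favor different tuning-curve widths than maximizing I(s).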

  13. Neural networks for vertical microcode compaction

    NASA Astrophysics Data System (ADS)

    Chu, Pong P.

    1992-09-01

    Neural networks provide an alternative way to solve complex optimization problems. Instead of performing a program of instructions sequentially as in a traditional computer, a neural network model explores many competing hypotheses simultaneously using its massively parallel net. The paper shows how to use the neural network approach to perform vertical microcode compaction for a micro-programmed control unit. The compaction procedure includes two basic steps. The first step determines the compatibility classes and the second step selects a minimal subset to cover the control signals. Since the selection process is an NP-complete problem, finding an optimal solution is impractical. In this study, we employ a customized neural network to obtain the minimal subset. We first formalize this problem, and then define an "energy function" and map it to a two-layer fully connected neural network. The modified network has two types of neurons and can always obtain a valid solution.
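    The selection step described above is a minimum set cover. The sketch below is not the authors' two-layer network; it is a simplified energy-descent illustration in the same spirit, with one binary unit per compatibility class, an energy that penalizes uncovered control signals and the number of selected classes, and asynchronous single-flip updates. The coverage matrix is made up for illustration.

```python
import numpy as np

# Hopfield-style energy descent for (approximate) minimum set cover, in the spirit of
# the compaction step described above; the coverage matrix below is illustrative only.
rng = np.random.default_rng(1)
cover = np.array([  # cover[j, i] = 1 if compatibility class j covers control signal i
    [1, 1, 0, 0, 0, 1],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1, 1],
])
n_classes, n_signals = cover.shape
A, B = 10.0, 1.0          # A >> B: full coverage (validity) dominates solution size

def energy(v):
    covered = (cover.T @ v) > 0            # which signals are covered by selected classes
    return A * np.sum(~covered) + B * np.sum(v)

v = rng.integers(0, 2, size=n_classes).astype(float)   # random initial selection
improved = True
while improved:                             # asynchronous single-flip descent
    improved = False
    for j in rng.permutation(n_classes):
        w = v.copy()
        w[j] = 1.0 - w[j]
        if energy(w) < energy(v):
            v, improved = w, True

print("selected classes:", np.flatnonzero(v), " energy:", energy(v))
```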

  14. The neural basis of visual word form processing: a multivariate investigation.

    PubMed

    Nestor, Adrian; Behrmann, Marlene; Plaut, David C

    2013-07-01

    Current research on the neurobiological bases of reading points to the privileged role of a ventral cortical network in visual word processing. However, the properties of this network and, in particular, its selectivity for orthographic stimuli such as words and pseudowords remain topics of significant debate. Here, we approached this issue from a novel perspective by applying pattern-based analyses to functional magnetic resonance imaging data. Specifically, we examined whether, where, and how orthographic stimuli elicit distinct patterns of activation in the human cortex. First, at the category level, multivariate mapping found extensive sensitivity throughout the ventral cortex for words relative to false-font strings. Second, at the identity level, multi-voxel pattern classification provided direct evidence that different pseudowords are encoded by distinct neural patterns. Third, a comparison of pseudoword and face identification revealed that both stimulus types exploit common neural resources within the ventral cortical network. These results provide novel evidence regarding the involvement of the left ventral cortex in orthographic stimulus processing and shed light on its selectivity and discriminability profile. In particular, our findings support the existence of sublexical orthographic representations within the left ventral cortex while arguing for the continuity of reading with other visual recognition skills.

  15. Artificial Neural Network-Based Early-Age Concrete Strength Monitoring Using Dynamic Response Signals.

    PubMed

    Kim, Junkyeong; Lee, Chaggil; Park, Seunghee

    2017-06-07

    Concrete is one of the most common materials used to construct a variety of civil infrastructures. However, since concrete might be susceptible to brittle fracture, it is essential to confirm the strength of concrete at the early-age stage of the curing process to prevent unexpected collapse. To address this issue, this study proposes a novel method to estimate the early-age strength of concrete, by integrating an artificial neural network algorithm with a dynamic response measurement of the concrete material. The dynamic response signals of the concrete, including both electromechanical impedances and guided ultrasonic waves, are obtained from an embedded piezoelectric sensor module. The cross-correlation coefficient of the electromechanical impedance signals and the amplitude of the guided ultrasonic wave signals are selected to quantify the variation in dynamic responses according to the strength of the concrete. Furthermore, an artificial neural network algorithm is used to verify a relationship between the variation in dynamic response signals and concrete strength. The results of an experimental study confirm that the proposed approach can be effectively applied to estimate the strength of concrete material from the early-age stage of the curing process.
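    A minimal sketch of the feature pipeline described above, the impedance cross-correlation coefficient plus the guided-wave amplitude feeding a small regressor; the signals, the toy curing model, and the strength values are synthetic placeholders, and scikit-learn's MLPRegressor stands in for the paper's ANN.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the feature pipeline described above: (1) the cross-correlation (Pearson)
# coefficient between a baseline and a current electromechanical-impedance signature and
# (2) the peak amplitude of a guided-wave signal, fed to a small ANN regressor.
# All signals and strength values are synthetic placeholders for the sensor data.
rng = np.random.default_rng(0)
n_specimens, n_freq, n_time = 60, 200, 500
baseline = rng.normal(size=n_freq)                   # impedance signature at casting time
drift = rng.normal(size=n_freq)                      # direction the signature drifts in
strength = np.linspace(5.0, 40.0, n_specimens)       # "true" strength values (MPa)

X = np.zeros((n_specimens, 2))
for k, s in enumerate(strength):
    # Toy curing model: the impedance signature drifts away from the baseline and the
    # guided-wave amplitude grows as strength develops.
    current = baseline + 0.02 * s * drift + rng.normal(scale=0.1, size=n_freq)
    wave = 0.02 * s * np.sin(np.linspace(0.0, 40.0 * np.pi, n_time)) \
           + rng.normal(scale=0.05, size=n_time)
    X[k, 0] = np.corrcoef(baseline, current)[0, 1]    # impedance cross-correlation coeff.
    X[k, 1] = np.max(np.abs(wave))                    # guided-wave peak amplitude

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
ann.fit(X, strength)
print("predicted strength (MPa) for 3 specimens:", np.round(ann.predict(X[:3]), 1))
```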

  16. Convolutional neural network for high-accuracy functional near-infrared spectroscopy in a brain-computer interface: three-class classification of rest, right-, and left-hand motor execution.

    PubMed

    Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong

    2018-01-01

    The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). In order to improve the performance of the BCI system in terms of accuracy, the ability to discriminate features from input signals and proper classification are desired. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured on eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvement in classification accuracy was achieved by CNN compared with SVM and ANN, respectively.
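    For reference, the conventional baseline the CNN is compared against uses six hand-crafted features per channel; a minimal sketch of their computation on one hemodynamic-response window is given below (synthetic signal, assumed 10 Hz sampling).

```python
import numpy as np
from scipy.stats import kurtosis, skew

def fnirs_features(window, fs=10.0):
    """Mean, peak, slope, variance, kurtosis and skewness of one fNIRS channel window."""
    t = np.arange(window.size) / fs
    slope = np.polyfit(t, window, 1)[0]          # linear-trend slope over the window
    return np.array([
        window.mean(),
        window.max(),
        slope,
        window.var(),
        kurtosis(window),
        skew(window),
    ])

# Synthetic HbO-like response: slow rise plus noise, 10 s window at 10 Hz (assumed).
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.1)
window = 0.5 * (1 - np.exp(-t / 3.0)) + rng.normal(scale=0.05, size=t.size)
print(np.round(fnirs_features(window), 4))
```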

  17. Artificial Neural Network-Based Early-Age Concrete Strength Monitoring Using Dynamic Response Signals

    PubMed Central

    Kim, Junkyeong; Lee, Chaggil; Park, Seunghee

    2017-01-01

    Concrete is one of the most common materials used to construct a variety of civil infrastructures. However, since concrete might be susceptible to brittle fracture, it is essential to confirm the strength of concrete at the early-age stage of the curing process to prevent unexpected collapse. To address this issue, this study proposes a novel method to estimate the early-age strength of concrete, by integrating an artificial neural network algorithm with a dynamic response measurement of the concrete material. The dynamic response signals of the concrete, including both electromechanical impedances and guided ultrasonic waves, are obtained from an embedded piezoelectric sensor module. The cross-correlation coefficient of the electromechanical impedance signals and the amplitude of the guided ultrasonic wave signals are selected to quantify the variation in dynamic responses according to the strength of the concrete. Furthermore, an artificial neural network algorithm is used to verify a relationship between the variation in dynamic response signals and concrete strength. The results of an experimental study confirm that the proposed approach can be effectively applied to estimate the strength of concrete material from the early-age stage of the curing process. PMID:28590456

  18. Common neural systems associated with the recognition of famous faces and names: An event-related fMRI study

    PubMed Central

    Nielson, Kristy A.; Seidenberg, Michael; Woodard, John L.; Durgerian, Sally; Zhang, Qi; Gross, William L.; Gander, Amelia; Guidotti, Leslie M.; Antuono, Piero; Rao, Stephen M.

    2010-01-01

    Person recognition can be accomplished through several modalities (face, name, voice). Lesion, neurophysiology and neuroimaging studies have been conducted in an attempt to determine the similarities and differences in the neural networks associated with person identity via different modality inputs. The current study used event-related functional-MRI in 17 healthy participants to directly compare activation in response to randomly presented famous and non-famous names and faces (25 stimuli in each of the four categories). Findings indicated distinct areas of activation that differed for faces and names in regions typically associated with pre-semantic perceptual processes. In contrast, overlapping brain regions were activated in areas associated with the retrieval of biographical knowledge and associated social affective features. Specifically, activation for famous faces was primarily right lateralized and famous names were left lateralized. However, for both stimuli, similar areas of bilateral activity were observed in the early phases of perceptual processing. Activation for fame, irrespective of stimulus modality, activated an extensive left hemisphere network, with bilateral activity observed in the hippocampi, posterior cingulate, and middle temporal gyri. Findings are discussed within the framework of recent proposals concerning the neural network of person identification. PMID:20167415

  19. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the proposed model regards the combination coefficients of base kernels as external parameters of single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
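    A minimal sketch of the core construction: a weighted composite of base kernels used inside the standard closed-form kernel ELM solution. Only two base kernels are combined here, and the kernel weights, kernel parameters, and regularization constant C are fixed by hand rather than optimized by QPSO; the e-nose data are synthetic stand-ins.

```python
import numpy as np

# Weighted multiple-kernel ELM sketch: composite kernel = w1*Gaussian + w2*polynomial,
# with the standard closed-form KELM solution  beta = (I/C + K)^(-1) T.
# Kernel weights and parameters are fixed here; the paper tunes them with QPSO.
def gaussian_kernel(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Z, degree=2, c0=1.0):
    return (X @ Z.T + c0) ** degree

def composite_kernel(X, Z, w=(0.7, 0.3)):
    return w[0] * gaussian_kernel(X, Z) + w[1] * poly_kernel(X, Z)

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 6))                         # e-nose feature vectors (synthetic)
y = (X[:, 0] + X[:, 1] ** 2 > 1.0).astype(int)       # two gas classes (synthetic labels)
T = np.eye(2)[y]                                     # one-hot targets

C = 10.0
K = composite_kernel(X, X)
beta = np.linalg.solve(np.eye(len(X)) / C + K, T)    # KELM output weights

X_new = rng.normal(size=(5, 6))
scores = composite_kernel(X_new, X) @ beta
print("predicted classes:", scores.argmax(axis=1))
```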

  20. Advances in Artificial Neural Networks - Methodological Development and Application

    USDA-ARS?s Scientific Manuscript database

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  1. A Fast Neural Network Approach to Predict Lung Tumor Motion during Respiration for Radiation Therapy Applications

    PubMed Central

    Slama, Matous; Benes, Peter M.; Bila, Jiri

    2015-01-01

    During radiotherapy treatment for thoracic and abdominal cancers, for example, lung cancers, respiratory motion moves the target tumor and thus adversely affects the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency of up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. To compensate for the latency, a neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of the 3D time series of lung tumor motion using a classical linear model, a perceptron model, and a class of higher-order neural network models that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm, as two of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For a one-second prediction horizon, the proposed techniques achieved a 3D mean absolute error of less than one millimeter over one hundred seconds of total treatment time. PMID:25893194
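    A minimal sketch of the classical-linear-model baseline with real-time retraining mentioned above: a sliding-window linear predictor, refit online, forecasts a synthetic respiratory trace one second ahead. The sampling rate, window length, and breathing waveform are assumptions.

```python
import numpy as np

# One-second-ahead prediction of a (synthetic) respiratory motion trace with a
# sliding-window linear predictor that is refit online, i.e., a "classical linear
# model with real-time retraining" baseline; sampling rate and horizon are assumptions.
rng = np.random.default_rng(0)
fs = 10                                   # samples per second (assumed)
horizon = fs                              # prediction horizon: 1 s ahead
window = 20                               # past samples used as predictor inputs

t = np.arange(0, 100, 1 / fs)
motion = 8.0 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(scale=0.3, size=t.size)  # mm

errors = []
for k in range(200, t.size - horizon):
    # Retrain on the recent past: each row is a window, the target lies `horizon` later.
    starts = np.arange(k - 150, k - window - horizon)
    A = np.stack([motion[s:s + window] for s in starts])
    b = motion[starts + window - 1 + horizon]
    coef, *_ = np.linalg.lstsq(np.column_stack([A, np.ones(len(A))]), b, rcond=None)
    pred = np.concatenate([motion[k - window:k], [1.0]]) @ coef
    errors.append(abs(pred - motion[k - 1 + horizon]))

print(f"mean absolute 1-s-ahead prediction error: {np.mean(errors):.2f} mm")
```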

  2. A fast neural network approach to predict lung tumor motion during respiration for radiation therapy applications.

    PubMed

    Bukovsky, Ivo; Homma, Noriyasu; Ichiji, Kei; Cejnek, Matous; Slama, Matous; Benes, Peter M; Bila, Jiri

    2015-01-01

    During radiotherapy treatment for thoracic and abdominal cancers, for example, lung cancers, respiratory motion moves the target tumor and thus adversely affects the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency of up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. To compensate for the latency, a neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of the 3D time series of lung tumor motion using a classical linear model, a perceptron model, and a class of higher-order neural network models that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm, as two of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For a one-second prediction horizon, the proposed techniques achieved a 3D mean absolute error of less than one millimeter over one hundred seconds of total treatment time.

  3. Evolution of the VEGF-regulated vascular network from a neural guidance system.

    PubMed

    Ponnambalam, Sreenivasan; Alberghina, Mario

    2011-06-01

    The vascular network is closely linked to the neural system, and an interdependence is displayed in healthy and in pathophysiological responses. How has close apposition of two such functionally different systems occurred? Here, we present a hypothesis for the evolution of the vascular network from an ancestral neural guidance system. Biological cornerstones of this hypothesis are the vascular endothelial growth factor (VEGF) protein family and cognate receptors. The primary sequences of such proteins are conserved from invertebrates, such as worms and flies that lack discernible vascular systems compared to mammals, but all these systems have sophisticated neuronal wiring involving such molecules. Ancestral VEGFs and receptors (VEGFRs) could have been used to develop and maintain the nervous system in primitive eukaryotes. During evolution, the demands of increased morphological complexity required systems for transporting molecules and cells, i.e., biological conductive tubes. We propose that the VEGF-VEGFR axis was subverted by evolution to mediate the formation of biological tubes necessary for transport of fluids, e.g., blood. Increasingly, there is evidence that aberrant VEGF-mediated responses are also linked to neuronal dysfunctions ranging from motor neuron disease, stroke, Parkinson's disease, Alzheimer's disease, ischemic brain disease, epilepsy, multiple sclerosis, and neuronal repair after injury, as well as common vascular diseases (e.g., retinal disease). Manipulation and correction of the VEGF response in different neural tissues could be an effective strategy to treat different neurological diseases.

  4. Artificial Neural Network Metamodels of Stochastic Computer Simulations

    DTIC Science & Technology

    1994-08-10

    Artificial Neural Network Metamodels of Stochastic Computer Simulations, by Robert Allen Kilmer (only report front matter is available in this record).

  5. A Balanced Accuracy Fitness Function Leads to Robust Analysis Using Grammatical Evolution Neural Networks in the Case of Class Imbalance

    EPA Science Inventory

    The identification and characterization of genetic and environmental factors that predict common, complex disease is a major goal of human genetics. The ubiquitous nature of epistatic interaction in the underlying genetic etiology of such disease presents a difficult analytical ...

  6. Artificial Intelligence in Business: Technocrat Jargon or Quantum Leap?

    ERIC Educational Resources Information Center

    Burford, Anna M.; Wilson, Harold O.

    This paper addresses the characteristics and applications of artificial intelligence (AI) as a subsection of computer science, and briefly describes the most common types of AI programs: expert systems, natural language, and neural networks. Following a brief presentation of the historical background, the discussion turns to an explanation of how…

  7. A Markov Model Analysis of Problem-Solving Progress.

    ERIC Educational Resources Information Center

    Vendlinski, Terry

    This study used a computerized simulation and problem-solving tool along with artificial neural networks (ANN) as pattern recognizers to identify the common types of strategies high school and college undergraduate chemistry students would use to solve qualitative chemistry problems. Participants were 134 high school chemistry students who used…

  8. Neural basis of nonanalytical reasoning expertise during clinical evaluation.

    PubMed

    Durning, Steven J; Costanzo, Michelle E; Artino, Anthony R; Graner, John; van der Vleuten, Cees; Beckman, Thomas J; Wittich, Christopher M; Roy, Michael J; Holmboe, Eric S; Schuwirth, Lambert

    2015-03-01

    Understanding clinical reasoning is essential for patient care and medical education. Dual-processing theory suggests that nonanalytic reasoning is an essential aspect of expertise; however, assessing nonanalytic reasoning is challenging because it is believed to occur on the subconscious level. This assumption makes concurrent verbal protocols less reliable assessment tools. Functional magnetic resonance imaging was used to explore the neural basis of nonanalytic reasoning in internal medicine interns (novices) and board-certified staff internists (experts) while completing United States Medical Licensing Examination and American Board of Internal Medicine multiple-choice questions. The results demonstrated that novices and experts share a common neural network in addition to nonoverlapping neural resources. However, experts manifested greater neural processing efficiency in regions such as the prefrontal cortex during nonanalytical reasoning. These findings reveal a multinetwork system that supports the dual-process mode of expert clinical reasoning during medical evaluation.

  9. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses a BP neural network and a grey algorithm to forecast and study the radar wind field. To reduce the residual error of the grey-algorithm wind-field prediction, the residual error function is minimized as follows: a BP neural network is trained on the residuals of the grey algorithm, the trained network model is used to forecast the residual sequence, and the predicted residual sequence is then used to correct the forecast sequence of the grey algorithm. The test data show that the grey algorithm corrected by the BP neural network can effectively reduce the residual error and improve the prediction precision.
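    A minimal sketch of the scheme described: a GM(1,1) grey model produces a first forecast, a network is trained on the grey model's residual sequence, and the predicted residual corrects the grey forecast. The wind series is synthetic and scikit-learn's MLPRegressor stands in for the BP network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def gm11_fit_predict(x0, n_ahead=1):
    """Classical GM(1,1) grey model: fit on the positive series x0, forecast n_ahead steps."""
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # development / grey input coefficients
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[:len(x0)], x0_hat[len(x0):]          # in-sample fit, forecast

# Synthetic wind-speed series (m/s); in the paper this would come from the wind lidar.
rng = np.random.default_rng(0)
t = np.arange(60)
wind = 6 + 0.05 * t + 1.5 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.3, size=t.size)

fit, forecast = gm11_fit_predict(wind[:-1], n_ahead=1)
residuals = wind[:-1] - fit                            # grey-model residual sequence

# A BP-style network (here sklearn's MLPRegressor) forecasts the next residual from lags.
lags = 5
Xr = np.stack([residuals[i:i + lags] for i in range(len(residuals) - lags)])
yr = residuals[lags:]
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(Xr, yr)
next_residual = net.predict(residuals[-lags:].reshape(1, -1))[0]

corrected = forecast[0] + next_residual                # grey forecast + predicted residual
print(f"grey: {forecast[0]:.2f}  corrected: {corrected:.2f}  actual: {wind[-1]:.2f}")
```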

  10. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    PubMed

    Ranganayaki, V; Deepa, S N

    2016-01-01

    Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent-ensemble wind speed forecast is formed by averaging the forecasted values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Randomly selecting the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems, and this paper aims to avoid both. The number of hidden neurons is selected here using 102 criteria, and these evolved criteria are verified against the computed error values. The proposed criteria for fixing the number of hidden neurons are validated using a convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature.

  11. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    PubMed Central

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent-ensemble wind speed forecast is formed by averaging the forecasted values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Randomly selecting the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems, and this paper aims to avoid both. The number of hidden neurons is selected here using 102 criteria, and these evolved criteria are verified against the computed error values. The proposed criteria for fixing the number of hidden neurons are validated using a convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature. PMID:27034973

  12. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Optical neural networks based on holographic correlators

    NASA Astrophysics Data System (ADS)

    Sokolov, V. K.; Shubnikov, E. I.

    1995-10-01

    The three most important models of neural networks — a bidirectional associative memory, Hopfield networks, and adaptive resonance networks — are used as examples to show that a holographic correlator has its place in the neural computing paradigm.

  13. Comparison of artificial intelligence classifiers for SIP attack data

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Slachta, Jiri

    2016-05-01

    A honeypot application is a source of valuable data about attacks on the network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered via the honeypots is periodically sent to the centralized server. This server classifies all attack data by a neural network algorithm. The paper describes optimizations of a neural network classifier, which lower the classification error. The article contains the comparison of two neural network algorithms used for the classification of validation data. The first is the original implementation of the neural network described in recent work; the second neural network uses further optimizations such as input normalization or a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms. The comparison tests their capabilities on validation data to find the optimal classifier. The results show promise for further development of an accurate SIP attack classification engine.
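    A minimal sketch of the two optimizations named above, input normalization and a cross-entropy cost, applied to a softmax classifier trained by gradient descent; the attack-feature vectors and class labels are synthetic stand-ins for the honeypot data.

```python
import numpy as np

# Softmax classifier trained with a cross-entropy cost on z-score-normalized inputs,
# i.e., the two optimizations named above. Attack features and classes are synthetic
# stand-ins for the honeypot data.
rng = np.random.default_rng(0)
n, d, c = 600, 8, 3                                  # samples, features, attack classes
y = rng.integers(0, c, size=n)                       # synthetic class labels
X = rng.normal(size=(n, d)) + y[:, None] * np.linspace(0.5, 1.5, d)

X = (X - X.mean(axis=0)) / X.std(axis=0)             # input normalization (z-score)
T = np.eye(c)[y]                                     # one-hot targets

W, b, lr = np.zeros((d, c)), np.zeros(c), 0.5
for _ in range(500):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    loss = -np.mean(np.sum(T * np.log(P + 1e-12), axis=1))   # cross-entropy cost
    grad = (P - T) / n                               # gradient of the loss w.r.t. logits
    W -= lr * (X.T @ grad)
    b -= lr * grad.sum(axis=0)

print(f"cross-entropy: {loss:.3f}   training accuracy: {(P.argmax(axis=1) == y).mean():.2f}")
```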

  14. Parallel consensual neural networks.

    PubMed

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  15. Unfolding the neutron spectrum of a NE213 scintillator using artificial neural networks.

    PubMed

    Sharghi Ido, A; Bonyadi, M R; Etaati, G R; Shahriari, M

    2009-10-01

    Artificial neural network technology has been applied to unfold the neutron spectra from the pulse height distribution measured with an NE213 liquid scintillator. Here, both single- and multi-layer perceptron neural network models have been implemented to unfold the neutron spectrum from an Am-Be neutron source. The activation function and the connectivity of the neurons have been investigated and the results have been analyzed in terms of the network's performance. The simulation results show that the neural network that utilizes the Satlins transfer function has the best performance. In addition, omitting the bias connection of the neurons improves the performance of the network. Also, the SCINFUL code is used for generating the response functions in the training phase of the process. Finally, the results of the neural network simulation have been compared with those of the FORIST unfolding code for both (241)Am-Be and (252)Cf neutron sources. The results of the neural network are in good agreement with those of the FORIST code.

  16. A neural-network-based model for the dynamic simulation of the tire/suspension system while traversing road irregularities.

    PubMed

    Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano

    2008-09-01

    This paper deals with the simulation of the tire/suspension dynamics by using recurrent neural networks (RNNs). RNNs are derived from the multilayer feedforward neural networks, by adding feedback connections between output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operation conditions. The NN model can be effectively applied as a part of vehicle system model to accurately predict elastic bushings and tire dynamics behavior. Although the neural network model, as a black-box model, does not provide a good insight of the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, harshness (NVH) performance due to its good computational efficiency and accuracy.

  17. Training and Validating a Deep Convolutional Neural Network for Computer-Aided Detection and Classification of Abnormalities on Frontal Chest Radiographs.

    PubMed

    Cicero, Mark; Bilbily, Alexander; Colak, Errol; Dowdell, Tim; Gray, Bruce; Perampaladas, Kuhan; Barfett, Joseph

    2017-05-01

    Convolutional neural networks (CNNs) are a subtype of artificial neural network that have shown strong performance in computer vision tasks including image classification. To date, there has been limited application of CNNs to chest radiographs, the most frequently performed medical imaging study. We hypothesize CNNs can learn to classify frontal chest radiographs according to common findings from a sufficiently large data set. Our institution's research ethics board approved a single-center retrospective review of 35,038 adult posterior-anterior chest radiographs and final reports performed between 2005 and 2015 (56% men, average age of 56, patient type: 24% inpatient, 39% outpatient, 37% emergency department) with a waiver for informed consent. The GoogLeNet CNN was trained using 3 graphics processing units to automatically classify radiographs as normal (n = 11,702) or into 1 or more of cardiomegaly (n = 9240), consolidation (n = 6788), pleural effusion (n = 7786), pulmonary edema (n = 1286), or pneumothorax (n = 1299). The network's performance was evaluated using receiver operating curve analysis on a test set of 2443 radiographs with the criterion standard being board-certified radiologist interpretation. Using 256 × 256-pixel images as input, the network achieved an overall sensitivity and specificity of 91% with an area under the curve of 0.964 for classifying a study as normal (n = 1203). For the abnormal categories, the sensitivity, specificity, and area under the curve, respectively, were 91%, 91%, and 0.962 for pleural effusion (n = 782), 82%, 82%, and 0.868 for pulmonary edema (n = 356), 74%, 75%, and 0.850 for consolidation (n = 214), 81%, 80%, and 0.875 for cardiomegaly (n = 482), and 78%, 78%, and 0.861 for pneumothorax (n = 167). Current deep CNN architectures can be trained with modest-sized medical data sets to achieve clinically useful performance at detecting and excluding common pathology on chest radiographs.

  18. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different forms of neural plasticity rules and in understanding how the structures and dynamics of neural networks shape the computational performance. In this paper, we propose a novel approach to develop LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that the LSM with STDP+IP performs better than an LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
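    For concreteness, a minimal sketch of the two plasticity rules combined in the model: pair-based STDP with exponential traces for the recurrent excitatory weights, and a simple intrinsic-plasticity rule that nudges each neuron's firing threshold toward a target rate. The network, the neuron model, and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Pair-based STDP with exponential traces for the recurrent excitatory weights
# (W[post, pre]) plus a simple intrinsic-plasticity (IP) rule that adapts each
# neuron's firing threshold toward a target rate. All values are illustrative only.
rng = np.random.default_rng(0)
n, steps, dt = 50, 5000, 1e-3
tau_m, tau_trace = 10e-3, 20e-3
A_plus, A_minus = 0.005, 0.006
eta_ip, target_rate = 1e-3, 5.0                    # IP learning rate, target rate (Hz)

W = np.abs(rng.normal(0.1, 0.02, size=(n, n)))     # W[i, j]: weight from pre j to post i
np.fill_diagonal(W, 0.0)
theta = np.ones(n)                                 # adaptive firing thresholds
trace = np.zeros(n)                                # low-pass filtered spike trains
v = np.zeros(n)
spikes = np.zeros(n)

for _ in range(steps):
    drive = 0.8 + W @ spikes + rng.normal(0.0, 0.5, n)
    v += dt / tau_m * (-v + drive)                 # leaky integration
    spikes = (v > theta).astype(float)
    v[spikes > 0] = 0.0                            # reset after a spike

    # STDP: a post spike potentiates recently active inputs; a pre spike depresses
    # synapses onto recently active targets.
    trace += -dt / tau_trace * trace + spikes
    W += A_plus * np.outer(spikes, trace) - A_minus * np.outer(trace, spikes)
    np.clip(W, 0.0, 0.5, out=W)
    np.fill_diagonal(W, 0.0)

    # IP: raise the threshold when a neuron fires above target, lower it otherwise.
    theta += eta_ip * (spikes - target_rate * dt)

print("mean weight:", round(float(W.mean()), 4), " mean threshold:", round(float(theta.mean()), 3))
```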

  19. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single-plant process of transpiration is presented.

  20. Artificial Neural Network for the Prediction of Chromosomal Abnormalities in Azoospermic Males.

    PubMed

    Akinsal, Emre Can; Haznedar, Bulent; Baydilli, Numan; Kalinli, Adem; Ozturk, Ahmet; Ekmekçioğlu, Oğuz

    2018-02-04

    To evaluate whether an artificial neural network helps to diagnose any chromosomal abnormalities in azoospermic males. The data of azoospermic males attending a tertiary academic referral center were evaluated retrospectively. Height, total testicular volume, follicle stimulating hormone, luteinising hormone, total testosterone and ejaculate volume of the patients were used for the analyses. For the artificial neural network, the data of 310 azoospermic males were used as the training set and 115 as the test set. Logistic regression analyses and discriminant analyses were performed for statistical analyses. The tests were re-analysed with a neural network. Both logistic regression analyses and the artificial neural network predicted the presence or absence of chromosomal abnormalities with more than 95% accuracy. The artificial neural network model yielded satisfactory results in distinguishing whether or not patients have any chromosomal abnormality.

  1. Synchronization criteria for generalized reaction-diffusion neural networks via periodically intermittent control.

    PubMed

    Gan, Qintao; Lv, Tianshi; Fu, Zhenhua

    2016-04-01

    In this paper, the synchronization problem for a class of generalized neural networks with time-varying delays and reaction-diffusion terms is investigated concerning Neumann boundary conditions in terms of p-norm. The proposed generalized neural networks model includes reaction-diffusion local field neural networks and reaction-diffusion static neural networks as its special cases. By establishing a new inequality, some simple and useful conditions are obtained analytically to guarantee the global exponential synchronization of the addressed neural networks under the periodically intermittent control. According to the theoretical results, the influences of diffusion coefficients, diffusion space, and control rate on synchronization are analyzed. Finally, the feasibility and effectiveness of the proposed methods are shown by simulation examples, and by choosing different diffusion coefficients, diffusion spaces, and control rates, different controlled synchronization states can be obtained.
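    For readers unfamiliar with the model class, a commonly used generic form of a delayed reaction-diffusion neural network with Neumann (zero-flux) boundary conditions is shown below; it is representative of this literature rather than the exact formulation analyzed in the paper.

```latex
% Generic delayed reaction-diffusion neural network on a bounded domain \Omega,
% with Neumann (zero-flux) boundary conditions; not necessarily the paper's exact model.
\frac{\partial u_i(t,x)}{\partial t}
  = \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\!\left( D_{ik}\,\frac{\partial u_i(t,x)}{\partial x_k} \right)
  - a_i u_i(t,x)
  + \sum_{j=1}^{n} b_{ij}\, f_j\big(u_j(t,x)\big)
  + \sum_{j=1}^{n} c_{ij}\, f_j\big(u_j(t-\tau_j(t),x)\big)
  + J_i ,
\qquad
\left.\frac{\partial u_i}{\partial \nu}\right|_{\partial\Omega} = 0 .
```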

  2. Global exponential stability of inertial memristor-based neural networks with time-varying delays and impulses.

    PubMed

    Zhang, Wei; Huang, Tingwen; He, Xing; Li, Chuandong

    2017-11-01

    In this study, we investigate the global exponential stability of inertial memristor-based neural networks with impulses and time-varying delays. We construct inertial memristor-based neural networks based on the characteristics of the inertial neural networks and memristor. Impulses with and without delays are considered when modeling the inertial neural networks simultaneously, which are of great practical significance in the current study. Some sufficient conditions are derived under the framework of the Lyapunov stability method, as well as an extended Halanay differential inequality and a new delay impulsive differential inequality, which depend on impulses with and without delays, in order to guarantee the global exponential stability of the inertial memristor-based neural networks. Finally, two numerical examples are provided to illustrate the efficiency of the proposed methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Neural joint control for Space Shuttle Remote Manipulator System

    NASA Technical Reports Server (NTRS)

    Atkins, Mark A.; Cox, Chadwick J.; Lothers, Michael D.; Pap, Robert M.; Thomas, Charles R.

    1992-01-01

    Neural networks are being used to control a robot arm in a telerobotic operation. The concept uses neural networks for both joint and inverse kinematics in a robotic control application. An upper level neural network is trained to learn inverse kinematic mappings. The output, a trajectory, is then fed to the Decentralized Adaptive Joint Controllers. This neural network implementation has shown that the controlled arm recovers from unexpected payload changes while following the reference trajectory. The neural network-based decentralized joint controller is faster, more robust and efficient than conventional approaches. Implementations of this architecture are discussed that would relax assumptions about dynamics, obstacles, and heavy loads. This system is being developed to use with the Space Shuttle Remote Manipulator System.

  4. Application of a neural network for reflectance spectrum classification

    NASA Astrophysics Data System (ADS)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultra-violet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using the bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional BRDF data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples for improving the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods that utilize spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained based on RGB spatial image data. Our approach aims to build a directional-reflectance-based neural network to help us understand reflectance classification from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.

  5. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    PubMed

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
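    A minimal sketch of the two-phase approach described: a simple competitive-learning pass (winner-take-all centre updates) places the RBF centres, and a Gaussian RBF network with those centres is then fitted by linear least squares. The molecular descriptors and activities are synthetic stand-ins.

```python
import numpy as np

# Two-phase QSAR sketch: (1) simple competitive learning (winner-take-all centre
# updates) to place RBF centres; (2) Gaussian RBF network fitted by least squares.
# Molecular descriptors and activities below are synthetic stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))                         # descriptors
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=len(X))  # activity

# Phase 1: simple competitive learning --------------------------------------------
n_centres, lr = 10, 0.1
centres = X[rng.choice(len(X), n_centres, replace=False)].copy()
for _ in range(30):                                   # epochs
    for x in X[rng.permutation(len(X))]:
        winner = np.argmin(((centres - x) ** 2).sum(axis=1))
        centres[winner] += lr * (x - centres[winner]) # move the winning centre toward x

# Phase 2: RBF network built on the learned centres --------------------------------
sigma = np.median(np.linalg.norm(X[:, None] - centres[None], axis=2))
Phi = np.exp(-((X[:, None] - centres[None]) ** 2).sum(-1) / (2 * sigma ** 2))
Phi = np.column_stack([Phi, np.ones(len(X))])         # add a bias unit
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ w
print(f"training RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.3f}")
```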

  6. Multi-model ensemble hydrological simulation using a BP Neural Network for the upper Yalongjiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia

    2018-06-01

    Hydrological models are important and effective tools for detecting complex hydrological processes. Different models have different strengths when capturing the various aspects of hydrological processes. Relying on a single model usually leads to simulation uncertainties. Ensemble approaches, based on multi-model hydrological simulations, can improve application performance over single models. In this study, the upper Yalongjiang River Basin was selected for a case study. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were selected and used for independent simulations with the same input and initial values. Then, the BP neural network method was employed to combine the results from the three models. The results show that the accuracy of BP ensemble simulation is better than that of the single models.

  7. Nanophotonic particle simulation and inverse design using artificial neural networks.

    PubMed

    Peurifoy, John; Shen, Yichen; Jing, Li; Yang, Yi; Cano-Renteria, Fidel; DeLacy, Brendan G; Joannopoulos, John D; Tegmark, Max; Soljačić, Marin

    2018-06-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical.
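    A minimal sketch of the inverse-design step described: with the surrogate network's weights frozen, gradient descent is run on the inputs to match a target spectrum, using the analytical gradient of the network. A tiny randomly weighted MLP stands in here for the trained scattering surrogate.

```python
import numpy as np

# Inverse design through a differentiable surrogate: freeze the network weights and
# run gradient descent on the *inputs* to match a target spectrum. A tiny random MLP
# stands in for the trained scattering surrogate (weights are not from the paper).
rng = np.random.default_rng(0)
n_params, n_hidden, n_wavelengths = 4, 32, 20
W1 = rng.normal(scale=0.5, size=(n_hidden, n_params)); b1 = rng.normal(scale=0.1, size=n_hidden)
W2 = rng.normal(scale=0.5, size=(n_wavelengths, n_hidden)); b2 = rng.normal(scale=0.1, size=n_wavelengths)

def surrogate(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

target = surrogate(rng.normal(size=n_params))[0]   # a spectrum known to be reachable

x = np.zeros(n_params)                             # initial design guess
lr = 0.05
for _ in range(500):
    spectrum, h = surrogate(x)
    err = spectrum - target                        # d(loss)/d(spectrum), loss = 0.5*||err||^2
    grad_h = W2.T @ err                            # backprop through the output layer
    grad_x = W1.T @ (grad_h * (1 - h ** 2))        # backprop through the tanh hidden layer
    x -= lr * grad_x                               # gradient step on the design parameters

print("final spectrum mismatch:", round(float(np.linalg.norm(surrogate(x)[0] - target)), 5))
```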

  8. Application of Artificial Neural Networks in the Heart Electrical Axis Position Conclusion Modeling

    NASA Astrophysics Data System (ADS)

    Bakanovskaya, L. N.

    2016-08-01

    The article addresses the building of a heart electrical axis position conclusion model using an artificial neural network. The input signals of the neural network are the values of the Q, R, and S deflections, and the output signal is the value of the heart electrical axis position. Training of the network is carried out by the error propagation method. The test results indicate that the created neural network reaches its conclusion with a high degree of accuracy.

  9. Enhancement of electrical signaling in neural networks on graphene films.

    PubMed

    Tang, Mingliang; Song, Qin; Li, Ning; Jiang, Ziyun; Huang, Rong; Cheng, Guosheng

    2013-09-01

    One of the key challenges for neural tissue engineering is to exploit supporting materials with robust functionalities not only to govern cell-specific behaviors, but also to form functional neural networks. The unique electrical and mechanical properties of graphene make it a promising candidate for neural interfaces, but little is known about the details of neural network formation on graphene as a scaffold material for tissue engineering. Therapeutic regenerative strategies aim to guide and enhance the intrinsic capacity of the neurons to reorganize by promoting plasticity mechanisms in a controllable manner. Here, we investigated the impact of graphene on the formation and performance of neural networks assembled in neural stem cell (NSC) culture. Using calcium imaging and electrophysiological recordings, we demonstrate the capabilities of graphene to support the growth of functional neural circuits, and improve neural performance and electrical signaling in the network. These results offer a better understanding of the interactions between graphene and NSCs, and they clearly present the great potential of graphene as a neural interface in tissue engineering. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. An Example of Unsupervised Networks Kohonen's Self-Organizing Feature Map

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    Kohonen's self-organizing feature map belongs to a class of unsupervised artificial neural networks commonly referred to as topographic maps. It serves two purposes: the quantization and dimensionality reduction of data. A short description of its history and its biological context is given. We show that the inherent classification properties of the feature map make it a suitable candidate for solving the classification task in power system areas like load forecasting, fault diagnosis and security assessment.
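    A minimal sketch of the map itself: a small 2-D grid of weight vectors is trained with the classic best-matching-unit update and a shrinking Gaussian neighbourhood, quantizing higher-dimensional inputs onto a 2-D sheet. The data are random placeholders rather than power-system measurements.

```python
import numpy as np

# Classic Kohonen self-organizing feature map: a 10x10 grid of weight vectors is pulled
# toward the input data with a Gaussian neighbourhood that shrinks over time, giving
# both vector quantization and a 2-D ordering of 3-D inputs. Data are random placeholders.
rng = np.random.default_rng(0)
data = rng.random((1000, 3))                       # e.g., normalized load/feature vectors

rows, cols, dim = 10, 10, data.shape[1]
weights = rng.random((rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 5000, 0.5, 4.0
for it in range(n_iter):
    frac = it / n_iter
    lr, sigma = lr0 * (1 - frac), sigma0 * np.exp(-3 * frac)   # decaying schedules
    x = data[rng.integers(len(data))]
    # Best-matching unit (BMU): grid node whose weight vector is closest to x.
    d2 = ((weights - x) ** 2).sum(axis=-1)
    bmu = np.unravel_index(np.argmin(d2), d2.shape)
    # Gaussian neighbourhood around the BMU on the grid, applied to the weight update.
    g = np.exp(-((grid - np.array(bmu)) ** 2).sum(axis=-1) / (2 * sigma ** 2))
    weights += lr * g[..., None] * (x - weights)

print("map trained; per-dimension weight ranges:",
      np.round(weights.min(axis=(0, 1)), 2), np.round(weights.max(axis=(0, 1)), 2))
```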

  11. Inferring general relations between network characteristics from specific network ensembles.

    PubMed

    Cardanobile, Stefano; Pernice, Volker; Deger, Moritz; Rotter, Stefan

    2012-01-01

    Different network models have been suggested for the topology underlying complex interactions in natural systems. These models are aimed at replicating specific statistical features encountered in real-world networks. However, it is rarely considered to which degree the results obtained for one particular network class can be extrapolated to real-world networks. We address this issue by comparing different classical and more recently developed network models with respect to their ability to generate networks with large structural variability. In particular, we consider the statistical constraints which the respective construction scheme imposes on the generated networks. After having identified the most variable networks, we address the issue of which constraints are common to all network classes and are thus suitable candidates for being generic statistical laws of complex networks. In fact, we find that generic, not model-related dependencies between different network characteristics do exist. This makes it possible to infer global features from local ones using regression models trained on networks with high generalization power. Our results confirm and extend previous findings regarding the synchronization properties of neural networks. Our method seems especially relevant for large networks, which are difficult to map completely, like the neural networks in the brain. The structure of such large networks cannot be fully sampled with the present technology. Our approach provides a method to estimate global properties of under-sampled networks in good approximation. Finally, we demonstrate on three different data sets (C. elegans neuronal network, R. prowazekii metabolic network, and a network of synonyms extracted from Roget's Thesaurus) that real-world networks have statistical relations compatible with those obtained using regression models.

  12. Modular representation of layered neural networks.

    PubMed

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Bio-inspired spiking neural network for nonlinear systems control.

    PubMed

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNN) are the third generation of artificial neural networks. SNN are the closest approximation to biological neural networks. SNNs make use of temporal spike trains to command inputs and outputs, allowing a faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is not satisfactory or difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. SNN's inherent binary and temporary way of information codification facilitates their hardware implementation compared to analog neurons. Biological neural networks often require a lower number of neurons compared to other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to perform the control of non-linear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform controller training. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed control based on SNN exhibits superior performance compared to other approaches based on Neural Networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.

    PubMed

    Woodward, Alexander; Froese, Tom; Ikegami, Takashi

    2015-02-01

    The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    NASA Astrophysics Data System (ADS)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been the subject of extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique by using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information such as speed, lane change, driver's condition, etc., through optical wireless links of neighboring vehicles. Thus, the position of a target vehicle that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate the target vehicle position from only two image points of the target vehicle using stereo vision. For this, we use rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than that of the computer-vision method.

  16. Rain/No-Rain Identification from Bispectral Satellite Information using Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Tao, Y.

    2016-12-01

    Satellite-based precipitation estimation products have the advantage of high resolution and global coverage. However, they still suffer from insufficient accuracy. To accurately estimate precipitation from satellite data, two aspects are most important: sufficient precipitation information in the satellite observations and proper methodologies to extract that information effectively. This study applies state-of-the-art machine learning methodologies to bispectral satellite information for Rain/No-Rain detection. Specifically, we use deep neural networks to extract features from the infrared and water vapor channels and connect them to precipitation identification. To evaluate the effectiveness of the methodology, we first apply it to infrared data only (Model DL-IR only), the most commonly used input for satellite-based precipitation estimation. We then incorporate water vapor data (Model DL-IR + WV) to further improve the prediction performance. The radar Stage IV dataset is used as the ground measurement for parameter calibration. The operational product, Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS), is used as a reference to compare the performance of both models in the winter and summer seasons. The experiments show significant improvement for both models in precipitation identification. The overall performance gains in the Critical Success Index (CSI) over the verification periods are 21.60% for Model DL-IR only and 43.66% for Model DL-IR + WV compared to PERSIANN-CCS, respectively. Moreover, specific case studies show that the water vapor channel information and the deep neural networks effectively help recover a large number of missing precipitation pixels under warm clouds while reducing false alarms under cold clouds.
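
    The reduction to a pixel-wise classification problem can be illustrated with a small feedforward classifier on two synthetic channels; the brightness temperatures, labelling rule, and network size below are invented stand-ins for the study's IR/WV inputs and deep network.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(1)
    n = 5000
    ir = rng.uniform(180, 300, n)    # hypothetical IR brightness temperature (K)
    wv = rng.uniform(180, 260, n)    # hypothetical water-vapor channel temperature (K)
    # Toy labelling rule: colder cloud tops and moister columns are more likely to rain.
    rain = ((ir < 230) | ((ir < 260) & (wv < 220))).astype(int)

    X = np.column_stack([ir, wv])
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    model.fit(X[:4000], rain[:4000])

    pred = model.predict(X[4000:])
    tn, fp, fn, tp = confusion_matrix(rain[4000:], pred).ravel()
    csi = tp / (tp + fp + fn)        # Critical Success Index, the score used above
    print("CSI:", round(csi, 3))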

  17. Evaluation of supervised machine-learning algorithms to distinguish between inflammatory bowel disease and alimentary lymphoma in cats.

    PubMed

    Awaysheh, Abdullah; Wilcke, Jeffrey; Elvinger, François; Rees, Loren; Fan, Weiguo; Zimmerman, Kurt L

    2016-11-01

    Inflammatory bowel disease (IBD) and alimentary lymphoma (ALA) are common gastrointestinal diseases in cats. The very similar clinical signs and histopathologic features of these diseases make the distinction between them diagnostically challenging. We tested the use of supervised machine-learning algorithms to differentiate between the 2 diseases using data generated from noninvasive diagnostic tests. Three prediction models were developed using 3 machine-learning algorithms: naive Bayes, decision trees, and artificial neural networks. The models were trained and tested on data from complete blood count (CBC) and serum chemistry (SC) results for the following 3 groups of client-owned cats: normal, inflammatory bowel disease (IBD), or alimentary lymphoma (ALA). Naive Bayes and artificial neural networks achieved higher classification accuracy (sensitivities of 70.8% and 69.2%, respectively) than the decision tree algorithm (63%, p < 0.0001). The areas under the receiver-operating characteristic curve for classifying cases into the 3 categories were 83% for naive Bayes, 79% for the decision tree, and 82% for the artificial neural networks. Prediction models using machine learning provided a method for distinguishing between ALA-IBD, ALA-normal, and IBD-normal. The naive Bayes and artificial neural networks classifiers used 10 and 4 of the CBC and SC variables, respectively, to outperform the C4.5 decision tree, which used 5 CBC and SC variables in classifying cats into the 3 classes. These models can provide another noninvasive diagnostic tool to assist clinicians with differentiating between IBD and ALA, and between diseased and nondiseased cats. © 2016 The Author(s).
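
    A hedged sketch of the three-way model comparison, using randomly generated stand-ins for the CBC and serum chemistry variables rather than the study's clinical data:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 10))            # 10 synthetic blood-work variables
    y = rng.integers(0, 3, size=300)          # classes: normal, IBD, ALA
    X[y == 1, :3] += 1.0                      # give the classes weakly separable structure
    X[y == 2, 3:6] += 1.0

    for name, clf in [("naive Bayes", GaussianNB()),
                      ("decision tree", DecisionTreeClassifier(random_state=0)),
                      ("neural network", MLPClassifier(hidden_layer_sizes=(16,),
                                                       max_iter=1000, random_state=0))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: mean CV accuracy = {acc:.2f}")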

  18. Criteria for Choosing the Best Neural Network: Part 1

    DTIC Science & Technology

    1991-07-24

    Touretzky, pp. 177-185. San Mateo: Morgan Kaufmann. Harp, S.A., Samad, T., and Guha, A. (1990). Designing application-specific neural networks using genetic...determining a parsimonious neural network for use in prediction/generalization based on a given fixed learning sample. Both the classification and...statistical settings, algorithms for selecting the number of hidden layer nodes in a three layer, feedforward neural network are presented. The selection

  19. Keypoint Density-Based Region Proposal for Fine-Grained Object Detection and Classification Using Regions with Convolutional Neural Network Features

    DTIC Science & Technology

    2015-12-15

    Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network ... Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their...detection accuracy and speed on the fine-grained Caltech UCSD bird dataset (Wah et al., 2011). Recently, Convolutional Neural Networks (CNNs), a deep

  20. Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach.

    DTIC Science & Technology

    1998-05-01

    Coverage Probability with a Random Optimization Procedure: An Artificial Neural Network Approach by Biing T. Guan, George Z. Gertner, and Alan B...Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach 6. AUTHOR(S) Biing...coverage based on past coverage. Approach A literature survey was conducted to identify artificial neural network analysis techniques applicable for

  1. Semantic Interpretation of An Artificial Neural Network

    DTIC Science & Technology

    1995-12-01

    ARTIFICIAL NEURAL NETWORK THESIS Stanley Dale Kinderknecht Captain, USAF AFIT/GCS/ENG/95D-07 Approved for public release; distribution unlimited The views...Government. AFIT/GCS/ENG/95D-07 SEMANTIC INTERPRETATION OF AN ARTIFICIAL NEURAL NETWORK THESIS Presented to the Faculty of the School of Engineering of

  2. Trimaran Resistance Artificial Neural Network

    DTIC Science & Technology

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011 Trimaran Resistance Artificial Neural Network Richard...Trimaran Resistance Artificial Neural Network 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) 5d. PROJECT NUMBER 5e... Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  3. Application of Fuzzy-Logic Controller and Neural Networks Controller in Gas Turbine Speed Control and Overheating Control and Surge Control on Transient Performance

    NASA Astrophysics Data System (ADS)

    Torghabeh, A. A.; Tousi, A. M.

    2007-08-01

    This paper presents a fuzzy logic and neural network approach to gas turbine fuel schedules. Modeling of the nonlinear system using feed-forward artificial neural networks, trained on data generated by a simulated gas turbine program, is introduced. Two artificial neural networks are used, depicting the nonlinear relationships between gas generator speed and fuel flow, and between turbine inlet temperature and fuel flow, respectively. Off-line fast simulations are used for engine controller design for a turbojet engine based on repeated simulation. The Mamdani and Sugeno models are used to express the fuzzy system. The linguistic fuzzy rules and membership functions are presented, and a fuzzy controller is proposed to provide open-loop control for the gas turbine engine during acceleration and deceleration. MATLAB Simulink was used to apply the fuzzy logic and neural network analysis. Both systems were able to approximate the functions characterizing the acceleration and deceleration schedules. Surge and flame-out avoidance during the acceleration and deceleration phases are then checked. Turbine inlet temperature is also checked and controlled by the neural network controller. The outputs of the fuzzy logic and neural network controllers are validated and evaluated with GSP software. The validation results are used to evaluate the generalization ability of these artificial neural network and fuzzy logic controllers.
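
    For readers unfamiliar with Mamdani-style inference, the toy Python fragment below evaluates a three-rule fuzzy controller mapping a normalized speed error to a fuel-flow change; the membership functions and rules are invented for illustration and are not the schedules from the paper.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with feet a, c and peak b."""
        return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                     (c - x) / (c - b + 1e-12)), 0.0)

    def fuel_command(speed_error):
        """Mamdani rules: negative error -> cut fuel, positive error -> add fuel."""
        u = np.linspace(-1.0, 1.0, 201)             # normalized fuel-flow change
        neg = tri(speed_error, -1.0, -0.5, 0.0)     # rule strengths from the input sets
        zero = tri(speed_error, -0.5, 0.0, 0.5)
        pos = tri(speed_error, 0.0, 0.5, 1.0)
        # Clip each output set by its rule strength and aggregate with max.
        agg = np.maximum.reduce([np.minimum(neg, tri(u, -1.0, -0.6, -0.2)),
                                 np.minimum(zero, tri(u, -0.3, 0.0, 0.3)),
                                 np.minimum(pos, tri(u, 0.2, 0.6, 1.0))])
        return float((u * agg).sum() / (agg.sum() + 1e-12))   # centroid defuzzification

    print(fuel_command(0.4))   # a moderate positive error yields a moderate fuel increase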

  4. A research using hybrid RBF/Elman neural networks for intrusion detection system secure model

    NASA Astrophysics Data System (ADS)

    Tong, Xiaojun; Wang, Zhu; Yu, Haining

    2009-10-01

    A hybrid RBF/Elman neural network model that can be employed for both anomaly detection and misuse detection is presented in this paper. IDSs using the hybrid neural network can detect temporally dispersed and collaborative attacks effectively because of the network's memory of past events. The RBF network is employed for real-time pattern classification and the Elman network is employed to restore the memory of past events. The IDSs using the hybrid neural network are evaluated against the intrusion detection evaluation data sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA). Experimental results are presented as ROC curves. Experiments show that the IDSs using this hybrid neural network improve the detection rate and decrease the false positive rate effectively.

  5. Advanced obstacle avoidance for a laser based wheelchair using optimised Bayesian neural networks.

    PubMed

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2008-01-01

    In this paper we present an advanced method of obstacle avoidance for a laser-based intelligent wheelchair using optimized Bayesian neural networks. Three neural networks are designed for three separate sub-tasks: passing through a doorway, corridor and wall following, and general obstacle avoidance. The accurate usable accessible space is determined by including the actual wheelchair dimensions in a real-time map used as input to each network. Data acquisition is performed separately to collect the patterns required for each specified sub-task. A Bayesian framework is used to determine the optimal neural network structure in each case, and the networks are then trained under the supervision of the Bayesian rule. Experimental results showed that, compared to the VFH algorithm, our neural networks navigated a smoother path, following a near-optimum trajectory.

  6. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.

  7. Ultrasonographic Diagnosis of Cirrhosis Based on Preprocessing Using Pyramid Recurrent Neural Network

    NASA Astrophysics Data System (ADS)

    Lu, Jianming; Liu, Jiang; Zhao, Xueqin; Yahagi, Takashi

    In this paper, a pyramid recurrent neural network is applied to characterize hepatic parenchymal diseases in ultrasonic B-scan texture. The cirrhotic parenchymal diseases are classified into 4 types according to the size of hypoechoic nodular lesions. The B-mode patterns are wavelet transformed, and the compressed data are then fed into a pyramid neural network to diagnose the type of cirrhotic disease. Compared with 3-layer neural networks, the performance of the proposed pyramid recurrent neural network is improved by utilizing the lower layers effectively. The simulation results show that the proposed system is suitable for the diagnosis of cirrhosis.

  8. Application of artificial neural networks to composite ply micromechanics

    NASA Technical Reports Server (NTRS)

    Brown, D. A.; Murthy, P. L. N.; Berke, L.

    1991-01-01

    Artificial neural networks can provide improved computational efficiency relative to existing methods when an algorithmic description of functional relationships is either totally unavailable or is complex in nature. For complex calculations, significant reductions in elapsed computation time are possible. The primary goal is to demonstrate the applicability of artificial neural networks to composite material characterization. As a test case, a neural network was trained to accurately predict composite hygral, thermal, and mechanical properties when provided with basic information concerning the environment, constituent materials, and component ratios used in the creation of the composite. A brief introduction on neural networks is provided along with a description of the project itself.

  9. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
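
    The auto-associative idea can be sketched with a bottlenecked regressor trained to reproduce a redundant sensor vector; the sensor model, masking trick, and network size below are illustrative assumptions rather than the engine simulation used in the paper.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    # Five hypothetical redundant sensors driven by a single underlying engine state.
    state = rng.uniform(0.0, 1.0, size=(2000, 1))
    gains = np.array([[1.0, 0.8, 1.2, 0.9, 1.1]])
    clean = state @ gains + 0.01 * rng.normal(size=(2000, 5))

    # Auto-associative network: the narrow middle layer forces a low-dimensional encoding.
    # Randomly zeroing one sensor per training sample makes the mapping robust to a failure.
    corrupted = clean.copy()
    corrupted[np.arange(2000), rng.integers(0, 5, 2000)] = 0.0
    auto = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=3000, random_state=0)
    auto.fit(corrupted, clean)

    # Simulate a hard failure of sensor 3 and use the network output as its estimate.
    sample = clean[-1].copy()
    truth = sample[3]
    sample[3] = 0.0
    estimate = auto.predict(sample.reshape(1, -1))[0, 3]
    print(f"true reading: {truth:.3f}   reconstructed estimate: {estimate:.3f}")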

  10. Decoding small surface codes with feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
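
    The classification reduction can be illustrated on a much simpler code than the surface code: below, a 5-bit repetition code stands in, a canonical correction is assumed for every syndrome, and a small feedforward network learns whether an extra logical correction is needed; the code, noise rate, and network size are illustrative choices, not the paper's setup.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(4)
    p = 0.15                                           # physical bit-flip probability
    errors = (rng.random((20000, 5)) < p).astype(int)  # toy 5-bit repetition code
    syndrome = errors[:, :-1] ^ errors[:, 1:]          # parity checks between neighbours
    # With the canonical correction chosen so that bit 0 is left unflipped, the residual
    # logical correction is needed exactly when bit 0 of the true error is flipped.
    label = errors[:, 0]

    decoder = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    decoder.fit(syndrome[:15000], label[:15000])
    acc = decoder.score(syndrome[15000:], label[15000:])
    print(f"NN decoder accuracy: {acc:.3f}  (always guessing 'no correction': {1 - p:.2f})")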

  11. Decorrelation of Neural-Network Activity by Inhibitory Feedback

    PubMed Central

    Einevoll, Gaute T.; Diesmann, Markus

    2012-01-01

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II). PMID:23133368

  12. A Comparison of Conventional Linear Regression Methods and Neural Networks for Forecasting Educational Spending.

    ERIC Educational Resources Information Center

    Baker, Bruce D.; Richards, Craig E.

    1999-01-01

    Applies neural network methods for forecasting 1991-95 per-pupil expenditures in U.S. public elementary and secondary schools. Forecasting models included the National Center for Education Statistics' multivariate regression model and three neural architectures. Regarding prediction accuracy, neural network results were comparable or superior to…

  13. An Artificial Neural Network Controller for Intelligent Transportation Systems Applications

    DOT National Transportation Integrated Search

    1996-01-01

    An Autonomous Intelligent Cruise Control (AICC) has been designed using a feedforward artificial neural network, as an example for utilizing artificial neural networks for nonlinear control problems arising in intelligent transportation systems appli...

  14. Deconvolution using a neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
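
    A minimal numerical version of that comparison, with an assumed blurring kernel and a plain gradient-descent loop standing in for the backpropagation-trained linear network:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 64
    kernel = np.array([0.2, 0.6, 0.2])              # assumed known blurring kernel
    # Build the circulant convolution matrix H so that y = H @ x.
    H = np.zeros((n, n))
    for i in range(n):
        for j, k in enumerate(kernel):
            H[i, (i + j - 1) % n] = k

    x_true = (rng.random(n) > 0.8).astype(float)    # sparse spike train to recover
    y = H @ x_true + 0.01 * rng.normal(size=n)

    # 1) Direct pseudo-inverse solution.
    x_pinv = np.linalg.pinv(H) @ y

    # 2) Iterative solution: gradient descent on ||H x - y||^2, the single-layer linear
    #    analogue of training a network by backpropagation (or LMS).
    x_gd = np.zeros(n)
    lr = 0.5
    for _ in range(2000):
        x_gd -= lr * H.T @ (H @ x_gd - y)

    print("pseudo-inverse error:", np.linalg.norm(x_pinv - x_true))
    print("gradient-descent error:", np.linalg.norm(x_gd - x_true))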

  15. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss augmented inference and the backpropagation calculates the gradient from the loss augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Flank wears Simulation by using back propagation neural network when cutting hardened H-13 steel in CNC End Milling

    NASA Astrophysics Data System (ADS)

    Hazza, Muataz Hazza F. Al; Adesta, Erry Y. T.; Riza, Muhammad

    2013-12-01

    High-speed milling has many advantages, such as a higher removal rate and high productivity. However, higher cutting speed increases the flank wear rate and thus reduces cutting tool life. Therefore, estimating and predicting the flank wear length at early stages reduces the risk of unacceptable tooling cost. This research presents a neural network model for predicting and simulating flank wear in the CNC end milling process. A set of sparse experimental data from finish end milling of AISI H13 at a hardness of 48 HRC was collected to measure the flank wear length, and the measured data were used to train the developed neural network model. An artificial neural network (ANN) was applied to predict the flank wear length. The feed-forward back-propagation network contains a hidden layer of twenty neurons and was designed with the MATLAB Neural Network Toolbox. The results show a high correlation between the predicted and the observed flank wear, which indicates the validity of the model.
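
    A hedged sketch of that kind of wear model, with invented cutting conditions and a toy wear response in place of the actual H13 measurements:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(6)
    # Hypothetical runs: cutting speed (m/min), feed per tooth (mm), depth of cut (mm).
    X = np.column_stack([rng.uniform(150, 250, 60),
                         rng.uniform(0.05, 0.20, 60),
                         rng.uniform(0.2, 1.0, 60)])
    # Toy flank-wear response (mm) rising with all three parameters, plus noise.
    VB = 1e-3 * X[:, 0] + 0.8 * X[:, 1] + 0.05 * X[:, 2] + 0.01 * rng.normal(size=60)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                       random_state=0))
    model.fit(X[:48], VB[:48])
    print("R^2 on held-out runs:", round(model.score(X[48:], VB[48:]), 3))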

  17. Using an Extended Kalman Filter Learning Algorithm for Feed-Forward Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, David J.; Mussa, Yussuf

    2004-01-01

    In this study a new extended Kalman filter (EKF) learning algorithm for feed-forward neural networks (FFN) is used. With the EKF approach, the training of the FFN can be seen as state estimation for a non-linear stationary process. The EKF method gives excellent convergence performances provided that there is enough computer core memory and that the machine precision is high. Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). The neural network was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9997. The neural network Fortran code used is available for download.
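
    The per-sample EKF weight update can be written compactly for a tiny one-hidden-layer network; the toy target function, network size, and noise covariances below are invented for illustration and are not the tracer data or network used in the study.

    import numpy as np

    rng = np.random.default_rng(7)

    def unpack(w, n_in, n_hid):
        """Split the flat weight vector into layer parameters (biases included)."""
        k = n_hid * (n_in + 1)
        W1 = w[:k].reshape(n_hid, n_in + 1)
        W2 = w[k:]                              # hidden-to-output weights plus bias
        return W1, W2

    def forward(w, x, n_in, n_hid):
        """Return the network output and its Jacobian with respect to the weights."""
        W1, W2 = unpack(w, n_in, n_hid)
        xb = np.append(x, 1.0)
        h = np.tanh(W1 @ xb)
        y = W2[:-1] @ h + W2[-1]
        dh = 1.0 - h ** 2
        dW1 = np.outer(W2[:-1] * dh, xb).ravel()
        dW2 = np.append(h, 1.0)
        return y, np.concatenate([dW1, dW2])

    n_in, n_hid = 2, 6
    n_w = n_hid * (n_in + 1) + n_hid + 1
    w = 0.1 * rng.normal(size=n_w)
    P = np.eye(n_w)                 # weight-error covariance
    Q = 1e-6 * np.eye(n_w)          # small process noise keeps the filter adaptive
    R = 1e-2                        # measurement noise variance

    for _ in range(3000):           # toy "tracer correlation": y = f(x1, x2)
        x = rng.uniform(-1, 1, size=n_in)
        target = np.sin(np.pi * x[0]) * np.exp(-x[1] ** 2)
        y, Hj = forward(w, x, n_in, n_hid)
        P = P + Q
        S = Hj @ P @ Hj + R                    # innovation variance (scalar output)
        K = (P @ Hj) / S                       # Kalman gain
        w = w + K * (target - y)               # state (weight) update
        P = P - np.outer(K, Hj @ P)            # covariance update

    y_test, _ = forward(w, np.array([0.3, 0.1]), n_in, n_hid)
    print("prediction:", y_test, " target:", np.sin(np.pi * 0.3) * np.exp(-0.01))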

  18. Environmentally induced amplitude death and firing provocation in large-scale networks of neuronal systems

    NASA Astrophysics Data System (ADS)

    Pankratova, Evgeniya V.; Kalyakulina, Alena I.

    2016-12-01

    We study the dynamics of multielement neuronal systems taking into account both the direct interaction between the cells via linear coupling and nondiffusive cell-to-cell communication via common environment. For the cells exhibiting individual bursting behavior, we have revealed the dependence of the network activity on its scale. Particularly, we show that small-scale networks demonstrate the inability to maintain complicated oscillations: for a small number of elements in an ensemble, the phenomenon of amplitude death is observed. The existence of threshold network scales and mechanisms causing firing in artificial and real multielement neural networks, as well as their significance for biological applications, are discussed.

  19. Decoding of finger trajectory from ECoG using deep learning.

    PubMed

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    Conventional decoding pipeline for brain-machine interfaces (BMIs) consists of chained different stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short term memory (LSTM) to address these needs. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and a RNN that was able to process sequential data and recognize temporal dynamics in the neural data. CNN was used as the feature extractor and LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest state. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layer and used LSTM layer to capture the state transitions. The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.
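
    A skeletal PyTorch version of a CNN feature extractor feeding an LSTM regressor conveys the architecture's shape; the channel counts, window lengths, and random tensors below are placeholders, not the Kubanek et al. ECoG data or the exact network of the study.

    import torch
    import torch.nn as nn

    class CNNLSTMDecoder(nn.Module):
        """Convolutional feature extractor followed by an LSTM trajectory regressor."""
        def __init__(self, n_channels=32, n_filters=16, hidden=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_channels, n_filters, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            self.lstm = nn.LSTM(input_size=n_filters, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)       # finger position at each time step

        def forward(self, x):                      # x: (batch, channels, time)
            feats = self.conv(x)                   # (batch, filters, time/2)
            feats = feats.transpose(1, 2)          # LSTM expects (batch, time, features)
            out, _ = self.lstm(feats)
            return self.head(out).squeeze(-1)      # (batch, time/2)

    # Toy end-to-end training loop on random tensors standing in for ECoG windows.
    model = CNNLSTMDecoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    x = torch.randn(8, 32, 200)                    # 8 windows, 32 channels, 200 samples
    y = torch.randn(8, 100)                        # downsampled target trajectories
    for _ in range(10):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                            # single objective, jointly optimized
        opt.step()
    print("final training loss:", float(loss))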

  20. Decoding of finger trajectory from ECoG using deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    Objective. Conventional decoding pipeline for brain-machine interfaces (BMIs) consists of chained different stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short term memory (LSTM) to address these needs. Approach. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and a RNN that was able to process sequential data and recognize temporal dynamics in the neural data. CNN was used as the feature extractor and LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. Main results. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest state. Significance. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layer and used LSTM layer to capture the state transitions. The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.

  1. Neural network classification of clinical neurophysiological data for acute care monitoring

    NASA Technical Reports Server (NTRS)

    Sgro, Joseph

    1994-01-01

    The purpose of neurophysiological monitoring of the 'acute care' patient is to allow the accurate recognition of changing or deteriorating neurological function as close to the moment of occurrence as possible, thus permitting immediate intervention. Results confirm that: (1) neural networks are able to accurately identify electroencephalogram (EEG) patterns and evoked potential (EP) wave components and to measure EP waveform latencies and amplitudes; (2) neural networks are able to accurately detect EP and EEG recordings that have been contaminated by noise; (3) the best performance was obtained consistently with the back propagation network for EPs and the HONN for EEGs; (4) neural networks performed consistently better than the other methods evaluated; and (5) neural network EEG and EP analyses are readily performed on multichannel data.

  2. Recognition of Telugu characters using neural networks.

    PubMed

    Sukhaswami, M B; Seetharamulu, P; Pujari, A K

    1995-09-01

    The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.

  3. Event-driven contrastive divergence for spiking neuromorphic systems.

    PubMed

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2013-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.

  4. Event-driven contrastive divergence for spiking neuromorphic systems

    PubMed Central

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2014-01-01

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality. PMID:24574952

  5. Multistability in bidirectional associative memory neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, of which 2^n are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers, respectively. Finally, two numerical examples are presented to illustrate the validity of our results.

  6. Application of Neural Network Optimized by Mind Evolutionary Computation in Building Energy Prediction

    NASA Astrophysics Data System (ADS)

    Song, Chen; Zhong-Cheng, Wu; Hong, Lv

    2018-03-01

    Building energy forecasting plays an important role in energy management and planning. Using a mind evolutionary algorithm to find the optimal network weights and thresholds, and thereby optimize the BP neural network, can overcome the BP network's tendency to fall into a local minimum. The optimized network is used both for time-series prediction and for a same-month forecast, giving two predictive values. The two predictive values are then fed into a neural network to obtain the final forecast value. The effectiveness of the method was verified by experiments with the energy data of three buildings in Hefei.
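
    As a rough sketch of the two-stage idea, a generic evolutionary search (standing in for mind evolutionary computation) chooses initial weights for a tiny network, which gradient descent then fine-tunes; the monthly series and network are invented for illustration, not the Hefei building data.

    import numpy as np

    rng = np.random.default_rng(8)
    t = np.arange(120, dtype=float)                       # ten years of monthly data
    energy = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 120)
    X = ((energy[:-1] - energy.mean()) / energy.std()).reshape(-1, 1)   # previous month
    y = (energy[1:] - energy.mean()) / energy.std()                     # next month

    def predict(w, X):
        W1 = w[:8].reshape(8, 1); b1 = w[8:16]; W2 = w[16:24]; b2 = w[24]
        return np.tanh(X @ W1.T + b1) @ W2 + b2

    def mse(w):
        return np.mean((predict(w, X) - y) ** 2)

    # Stage 1: evolutionary search over the 25 weights instead of a random initialization.
    pop = rng.normal(0, 1, size=(40, 25))
    for _ in range(50):
        scores = np.array([mse(w) for w in pop])
        parents = pop[np.argsort(scores)[:10]]            # keep the best individuals
        pop = np.concatenate([parents,
                              parents.repeat(3, axis=0) + 0.1 * rng.normal(size=(30, 25))])
    w = pop[np.argmin([mse(w) for w in pop])]

    # Stage 2: plain gradient descent (numerical gradient for brevity) fine-tunes the weights.
    for _ in range(200):
        grad = np.array([(mse(w + 1e-4 * e) - mse(w - 1e-4 * e)) / 2e-4 for e in np.eye(25)])
        w -= 0.001 * grad
    print("final training MSE (standardized units):", round(mse(w), 3))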

  7. Verification and Validation Methodology of Real-Time Adaptive Neural Networks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Loparo, Kenneth; Mackall, Dale; Schumann, Johann; Soares, Fola

    2004-01-01

    Recent research has shown that adaptive neural based control systems are very effective in restoring stability and control of an aircraft in the presence of damage or failures. The application of an adaptive neural network within a flight-critical control system requires a thorough and proven process to ensure safe and proper flight operation. Unique testing tools have been developed as part of a process to perform verification and validation (V&V) of the real-time adaptive neural networks used in recent adaptive flight control systems, and to evaluate the performance of the online-trained neural networks. The tools will help with FAA certification and with the successful deployment of neural-network-based adaptive controllers in safety-critical applications. The process to perform verification and validation is evaluated against a typical neural adaptive controller and the results are discussed.

  8. Pulse Coupled Neural Networks for the Segmentation of Magnetic Resonance Brain Images.

    DTIC Science & Technology

    1996-12-01

    PULSE COUPLED NEURAL NETWORKS FOR THE SEGMENTATION OF MAGNETIC RESONANCE BRAIN IMAGES THESIS Shane Lee Abrahamson First Lieutenant, USAF AFIT/GCS/ENG...COUPLED NEURAL NETWORKS FOR THE SEGMENTATION OF MAGNETIC RESONANCE BRAIN IMAGES THESIS Shane Lee Abrahamson First Lieutenant, USAF AFIT/GCS/ENG/96D-01...research develops an automated method for segmenting Magnetic Resonance (MR) brain images based on Pulse Coupled Neural Networks (PCNN). MR brain image

  9. Angle of Arrival Detection Through Artificial Neural Network Analysis of Optical Fiber Intensity Patterns

    DTIC Science & Technology

    1990-12-01

    ARTIFICIAL NEURAL NETWORK ANALYSIS OF OPTICAL FIBER INTENSITY PATTERNS THESIS Scott Thomas Captain, USAF AFIT/GE/ENG/90D-62 Approved for public release; distribution unlimited. AFIT/GE/ENG/90D-62 ANGLE OF ARRIVAL DETECTION THROUGH ARTIFICIAL NEURAL NETWORK ANALYSIS... ARTIFICIAL NEURAL NETWORK ANALYSIS OF OPTICAL FIBER INTENSITY PATTERNS I. Introduction The optical sensors of United States Air Force reconnaissance

  10. Reconfigurable Flight Control Design using a Robust Servo LQR and Radial Basis Function Neural Networks

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    2005-01-01

    This viewgraph presentation reviews the use of a Robust Servo Linear Quadratic Regulator (LQR) and a Radial Basis Function (RBF) Neural Network in reconfigurable flight control designs that adapt to an aircraft part failure. The method uses a robust LQR servomechanism design with model reference adaptive control and RBF neural networks. During the failure the LQR servomechanism behaved well, and using the neural networks improved the tracking.

  11. Bio-Inspired Computation: Clock-Free, Grid-Free, Scale-Free and Symbol Free

    DTIC Science & Technology

    2015-06-11

    for Prediction Tasks in Spiking Neural Networks ." Artificial Neural Networks and Machine Learning–ICANN 2014. Springer, 2014. pp 635-642. Gibson, T...Henderson, JA and Wiles, J. "Predicting temporal sequences using an event-based spiking neural network incorporating learnable delays." IEEE...Adelaide (2014 Jan). Gibson, T and Wiles, J "Predicting temporal sequences using an event-based spiking neural network incorporating learnable delays" at

  12. A neural network architecture for implementation of expert systems for real time monitoring

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.

    1991-01-01

    Since neural networks have the advantages of massive parallelism and simple architecture, they are good tools for implementing real time expert systems. In a rule based expert system, the antecedents of rules are in the conjunctive or disjunctive form. We constructed a multilayer feedforward type network in which neurons represent AND or OR operations of rules. Further, we developed a translator which can automatically map a given rule base into the network. Also, we proposed a new and powerful yet flexible architecture that combines the advantages of both fuzzy expert systems and neural networks. This architecture uses the fuzzy logic concepts to separate input data domains into several smaller and overlapped regions. Rule-based expert systems for time critical applications using neural networks, the automated implementation of rule-based expert systems with neural nets, and fuzzy expert systems vs. neural nets are covered.

  13. Stability and synchronization analysis of inertial memristive neural networks with time delays.

    PubMed

    Rakkiyappan, R; Premalatha, S; Chandrasekar, A; Cao, Jinde

    2016-10-01

    This paper is concerned with the problem of stability and pinning synchronization of a class of inertial memristive neural networks with time delay. In contrast to general inertial neural networks, inertial memristive neural networks exhibit their synchronization and stability behaviors through the physical properties of memristors and differential inclusion theory. By choosing an appropriate variable transformation, the original system can be transformed into first-order differential equations. Several sufficient conditions for the stability of inertial memristive neural networks are then derived using the matrix measure and the Halanay inequality; these criteria reduce the computational burden of the theoretical analysis. In addition, pinning synchronization is evaluated for an array of linearly coupled inertial memristive neural networks, and the corresponding condition is derived using the matrix measure strategy. Finally, two numerical simulations are presented to show the effectiveness of the acquired theoretical results.

  14. A comparison of neural network architectures for the prediction of MRR in EDM

    NASA Astrophysics Data System (ADS)

    Jena, A. R.; Das, Raja

    2017-11-01

    The aim of this research work is to predict the material removal rate (MRR) of a work-piece in electrical discharge machining (EDM). An effort has been made to predict the material removal rate through a back-propagation neural network (BPN) and a radial basis function neural network (RBFN) for a work-piece of AISI D2 steel. The input parameters for the architectures are discharge current (Ip), pulse duration (Ton), and duty cycle (τ), taken into consideration to obtain the output material removal rate of the work-piece. It has been observed that the radial basis function neural network is comparatively faster than the back-propagation neural network, but the back-propagation neural network yields more realistic values. Therefore, BPN may be considered the better approach in this architecture for consistent prediction, saving the time and money of conducting experiments.
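
    A hedged comparison of the two architectures on invented EDM settings (the RBFN here is built from k-means centres with a linear readout, a common construction, not necessarily the authors' exact variant):

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.cluster import KMeans
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(9)
    # Hypothetical settings: discharge current Ip (A), pulse duration Ton (us), duty cycle.
    X = np.column_stack([rng.uniform(5, 30, 80),
                         rng.uniform(50, 400, 80),
                         rng.uniform(0.3, 0.9, 80)])
    mrr = 0.02 * X[:, 0] * X[:, 2] + 1e-4 * X[:, 1] + 0.02 * rng.normal(size=80)  # toy MRR
    Xn = (X - X.mean(0)) / X.std(0)
    Xtr, Xte, ytr, yte = Xn[:64], Xn[64:], mrr[:64], mrr[64:]

    # Back-propagation network (BPN).
    bpn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(Xtr, ytr)

    # Radial basis function network (RBFN): k-means centres, Gaussian features, linear readout.
    centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(Xtr).cluster_centers_
    def rbf_features(Z, gamma=1.0):
        d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    rbfn = Ridge(alpha=1e-3).fit(rbf_features(Xtr), ytr)

    print("BPN  R^2:", round(bpn.score(Xte, yte), 3))
    print("RBFN R^2:", round(rbfn.score(rbf_features(Xte), yte), 3))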

  15. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    DOE PAGES

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-10-18

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.

  16. Synchronization of Switched Neural Networks With Communication Delays via the Event-Triggered Control.

    PubMed

    Wen, Shiping; Zeng, Zhigang; Chen, Michael Z Q; Huang, Tingwen

    2017-10-01

    This paper addresses the issue of synchronization of switched delayed neural networks with communication delays via event-triggered control. For synchronizing coupled switched neural networks, we propose a novel event-triggered control law which could greatly reduce the number of control updates for synchronization tasks of coupled switched neural networks involving embedded microprocessors with limited on-board resources. The control signals are driven by properly defined events, which depend on the measurement errors and current-sampled states. By using a delay system method, a novel model of synchronization error system with delays is proposed with the communication delays and event-triggered control in the unified framework for coupled switched neural networks. The criteria are derived for the event-triggered synchronization analysis and control synthesis of switched neural networks via the Lyapunov-Krasovskii functional method and free weighting matrix approach. A numerical example is elaborated on to illustrate the effectiveness of the derived results.

  17. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.

  18. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    PubMed

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. It leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network guarantees to get the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  19. Application of two neural network paradigms to the study of voluntary employee turnover.

    PubMed

    Somers, M J

    1999-04-01

    Two neural network paradigms--multilayer perceptron and learning vector quantization--were used to study voluntary employee turnover with a sample of 577 hospital employees. The objectives of the study were twofold. The 1st was to assess whether neural computing techniques offered greater predictive accuracy than did conventional turnover methodologies. The 2nd was to explore whether computer models of turnover based on neural network technologies offered new insights into turnover processes. When compared with logistic regression analysis, both neural network paradigms provided considerably more accurate predictions of turnover behavior, particularly with respect to the correct classification of leavers. In addition, these neural network paradigms captured nonlinear relationships that are relevant for theory development. Results are discussed in terms of their implications for future research.

  20. Polarity-specific high-level information propagation in neural networks.

    PubMed

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  1. Polarity-specific high-level information propagation in neural networks

    PubMed Central

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals. PMID:24672472

  2. A neuromorphic network for generic multivariate data classification

    PubMed Central

    Schmuker, Michael; Pfeil, Thomas; Nawrot, Martin Paul

    2014-01-01

    Computational neuroscience has uncovered a number of computational principles used by nervous systems. At the same time, neuromorphic hardware has matured to a state where fast silicon implementations of complex neural networks have become feasible. En route to future technical applications of neuromorphic computing the current challenge lies in the identification and implementation of functional brain algorithms. Taking inspiration from the olfactory system of insects, we constructed a spiking neural network for the classification of multivariate data, a common problem in signal and data analysis. In this model, real-valued multivariate data are converted into spike trains using “virtual receptors” (VRs). Their output is processed by lateral inhibition and drives a winner-take-all circuit that supports supervised learning. VRs are conveniently implemented in software, whereas the lateral inhibition and classification stages run on accelerated neuromorphic hardware. When trained and tested on real-world datasets, we find that the classification performance is on par with a naïve Bayes classifier. An analysis of the network dynamics shows that stable decisions in output neuron populations are reached within less than 100 ms of biological time, matching the time-to-decision reported for the insect nervous system. Through leveraging a population code, the network tolerates the variability of neuronal transfer functions and trial-to-trial variation that is inevitably present on the hardware system. Our work provides a proof of principle for the successful implementation of a functional spiking neural network on a configurable neuromorphic hardware system that can readily be applied to real-world computing problems. PMID:24469794

  3. Devices and circuits for nanoelectronic implementation of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Turel, Ozgur

    Biological neural networks perform complicated information processing tasks at speeds beyond those of conventional computers running conventional algorithms. This has inspired researchers to look into the way these networks function, and to propose artificial networks that mimic their behavior. Unfortunately, most artificial neural networks, whether software or hardware, provide neither the speed nor the complexity of a human brain. Nanoelectronics, with the high density and low power dissipation it provides, may be used to develop more efficient artificial neural networks. This work consists of two major contributions in this direction. The first is the proposal of the CMOL concept, a hybrid CMOS-molecular hardware [1-8]. CMOL may circumvent most of the problems posed by molecular devices, such as low yield, yet provide high active device density, ~10^12/cm^2. The second contribution is CrossNets, artificial neural networks that are based on CMOL. We showed that CrossNets, with their fault tolerance and exceptional speed (~4 to 6 orders of magnitude faster than biological neural networks), can perform any task any artificial neural network can perform. Moreover, there is hope that if their integration scale is increased to that of the human cerebral cortex (~10^10 neurons and ~10^14 synapses), they may be capable of performing more advanced tasks.

  4. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    PubMed

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetectNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance, without the need for human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.
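
    The Dice score quoted above is the standard overlap metric between a predicted segmentation mask and its ground truth. As a point of reference only (this is not the authors' pipeline), a minimal computation over two hypothetical binary volumes could look like the following Python sketch.

      import numpy as np

      def dice_score(pred, truth, eps=1e-7):
          """Dice coefficient between two binary masks (arrays of 0/1)."""
          pred = np.asarray(pred, dtype=bool)
          truth = np.asarray(truth, dtype=bool)
          intersection = np.logical_and(pred, truth).sum()
          return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

      # Toy 2D masks; a score above 0.82 would match the range reported above.
      pred = np.array([[0, 1, 1], [0, 1, 0]])
      truth = np.array([[0, 1, 1], [1, 1, 0]])
      print(round(float(dice_score(pred, truth)), 3))  # 0.857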

  5. Forecasting PM10 in Algiers: efficacy of multilayer perceptron networks.

    PubMed

    Abderrahim, Hamza; Chellali, Mohammed Reda; Hamou, Ahmed

    2016-01-01

    Air quality forecasting has acquired high importance in atmospheric pollution management because of pollution's negative impacts on the environment and human health. The artificial neural network is one of the most common soft computing methods that can be applied to such complex problems. In this paper, we used a multilayer perceptron neural network to forecast the daily averaged concentration of respirable suspended particulates with an aerodynamic diameter of not more than 10 μm (PM10) in Algiers, Algeria. The data for training and testing the network are based on samples collected from 2002 to 2006 by the SAMASAFIA network center at the El Hamma station. The meteorological variables air temperature, relative humidity and wind speed are used as the network's input parameters in forming the model. The training patterns correspond to 41 days of data. The performance of the developed models was evaluated on the basis of the index of agreement and other statistical parameters. The overall performance of the model with 15 neurons was better than that of the models with 5 and 10 neurons; a multilayer network with as few as one hidden layer and 15 neurons gave clearly more reasonable results. Finally, an error of around 9% was reached.
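
    To make the model class concrete, the sketch below shows the general shape of such a forecaster with scikit-learn. The single hidden layer of 15 neurons mirrors the best configuration reported above, but the feature values and target are random placeholders, not the SAMASAFIA data.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      # Illustrative stand-in for daily (temperature, relative humidity, wind speed).
      X = rng.normal(size=(41, 3))
      # Illustrative stand-in for daily averaged PM10 concentration.
      y = 30 + 5 * X[:, 0] - 3 * X[:, 2] + rng.normal(scale=2, size=41)

      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0),
      )
      model.fit(X, y)
      print("training R^2:", round(model.score(X, y), 3))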

  6. Artificial neural network intelligent method for prediction

    NASA Astrophysics Data System (ADS)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using an artificial neural network are considered. The methods used for the prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, which are based on the raw data, together with the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is an algorithm with back propagation of the error. The main advantage of the developed system is self-determination of the optimal topology of the neural network, which makes it flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  7. Seismic signal auto-detecting from different features by using Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Zhou, Y.; Yue, H.; Zhou, S.

    2017-12-01

    We try a Convolutional Neural Network to detect several features of seismic data and compare their efficiency. The features include whether a signal is a seismic signal or noise and the arrival times of the P and S phases, and each feature corresponds to a Convolutional Neural Network. We first use the traditional STA/LTA method to recognize some events and then use template matching to find more events as a training set for the neural network. To make the training set more varied, we add noise to the seismic data and generate some synthetic seismic data and noise. The 3-component raw signal and a time-frequency analysis are used as the input data for our neural network. Training is performed on GPUs to achieve efficient convergence. Our method improved the precision in comparison with STA/LTA and template matching. We will move to recurrent neural networks to see whether this kind of network is better at detecting the P and S phases.
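
    As a rough illustration of this kind of detector (not the authors' architecture, which is not specified in the abstract), a minimal PyTorch network that takes a 3-component waveform window and emits a seismic-versus-noise decision could look like this; the window length and layer sizes are arbitrary assumptions.

      import torch
      import torch.nn as nn

      class EventVsNoiseCNN(nn.Module):
          """Tiny 1D CNN over a 3-component waveform window (illustrative only)."""
          def __init__(self, n_samples=400):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                  nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
              )
              self.classifier = nn.Linear(32 * (n_samples // 16), 2)  # seismic vs noise

          def forward(self, x):                # x: (batch, 3, n_samples)
              h = self.features(x)
              return self.classifier(h.flatten(1))

      net = EventVsNoiseCNN()
      dummy = torch.randn(8, 3, 400)           # 8 random 3-component windows
      print(net(dummy).shape)                  # torch.Size([8, 2])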

  8. Geometrical structure of Neural Networks: Geodesics, Jeffrey's Prior and Hyper-ribbons

    NASA Astrophysics Data System (ADS)

    Hayden, Lorien; Alemi, Alex; Sethna, James

    2014-03-01

    Neural networks are learning algorithms which are employed in a host of Machine Learning problems including speech recognition, object classification and data mining. In practice, neural networks learn a low dimensional representation of high dimensional data and define a model manifold which is an embedding of this low dimensional structure in the higher dimensional space. In this work, we explore the geometrical structure of a neural network model manifold. A Stacked Denoising Autoencoder and a Deep Belief Network are trained on handwritten digits from the MNIST database. Construction of geodesics along the surface and of slices taken from the high dimensional manifolds reveals a hierarchy of widths corresponding to a hyper-ribbon structure. This property indicates that neural networks fall into the class of sloppy models, in which certain parameter combinations dominate the behavior. Employing this information could prove valuable in designing both neural network architectures and training algorithms. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144153.

  9. Neural networks within multi-core optic fibers

    PubMed Central

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-01-01

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks. PMID:27383911

  10. Neural networks within multi-core optic fibers.

    PubMed

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  11. Method of gear fault diagnosis based on EEMD and improved Elman neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Zhao, Wei; Xiao, Shungen; Song, Mengmeng

    2017-05-01

    Fault information from gear defects such as cracks and wear is usually difficult to diagnose because the fault signals are weak, so a gear fault diagnosis method based on the fusion of EEMD and an improved Elman neural network is proposed. A number of IMF components are obtained by decomposing the denoised fault signals with EEMD, and the pseudo IMF components are eliminated by using the correlation coefficient method to obtain the effective IMF components. The energy characteristic value of each effective component is calculated as the input feature of the Elman neural network, and the improved Elman neural network is obtained from the standard network by adding a feedback factor. Fault data for a normal gear, broken teeth, a cracked gear and a worn gear were collected in the field and analyzed with the diagnostic method proposed in this paper. The results show that, compared with the standard Elman neural network, the improved Elman neural network achieves higher diagnostic efficiency.
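
    The feature-construction step described above (keep only the IMFs that correlate with the raw signal, then feed their normalized energies to the network) can be sketched as follows. The EEMD decomposition itself is assumed to come from a library such as PyEMD, and the correlation threshold is an illustrative choice rather than the authors' value.

      import numpy as np

      def select_imfs(imfs, signal, corr_threshold=0.1):
          """Keep IMFs whose correlation with the raw signal exceeds a threshold."""
          keep = []
          for imf in imfs:
              r = np.corrcoef(imf, signal)[0, 1]
              if abs(r) >= corr_threshold:
                  keep.append(imf)
          return np.array(keep)

      def energy_features(imfs):
          """Normalized energy of each retained IMF, used as network input."""
          energies = np.array([np.sum(imf ** 2) for imf in imfs])
          return energies / energies.sum()

      # Illustrative usage with a synthetic vibration-like signal.
      t = np.linspace(0, 1, 2048)
      signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)
      # imfs = EEMD().eemd(signal)   # e.g. with the PyEMD package
      imfs = np.array([np.sin(2 * np.pi * 50 * t), 0.3 * np.sin(2 * np.pi * 7 * t)])
      features = energy_features(select_imfs(imfs, signal))
      print(features)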

  12. Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.

    PubMed

    Liu, Meiqin

    2009-09-01

    This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.

  13. Recognition of abstract objects via neural oscillators: interaction among topological organization, associative memory and gamma band synchronization.

    PubMed

    Ursino, Mauro; Magosso, Elisa; Cuppini, Cristiano

    2009-02-01

    Synchronization of neural activity in the gamma band is assumed to play a significant role not only in perceptual processing, but also in higher cognitive functions. Here, we propose a neural network of Wilson-Cowan oscillators to simulate recognition of abstract objects, each represented as a collection of four features. Features are ordered in topological maps of oscillators connected via excitatory lateral synapses, to implement a similarity principle. Experience on previous objects is stored in long-range synapses connecting the different topological maps, and trained via timing dependent Hebbian learning (previous knowledge principle). Finally, a downstream decision network detects the presence of a reliable object representation, when all features are oscillating in synchrony. Simulations performed giving various simultaneous objects to the network (from 1 to 4), with some missing and/or modified properties suggest that the network can reconstruct objects, and segment them from the other simultaneously present objects, even in case of deteriorated information, noise, and moderate correlation among the inputs (one common feature). The balance between sensitivity and specificity depends on the strength of the Hebbian learning. Achieving a correct reconstruction in all cases, however, requires ad hoc selection of the oscillation frequency. The model represents an attempt to investigate the interactions among topological maps, autoassociative memory, and gamma-band synchronization, for recognition of abstract objects.

  14. Nanophotonic particle simulation and inverse design using artificial neural networks

    PubMed Central

    Peurifoy, John; Shen, Yichen; Jing, Li; Cano-Renteria, Fidel; DeLacy, Brendan G.; Joannopoulos, John D.; Tegmark, Max

    2018-01-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical. PMID:29868640
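
    The inverse-design step described above, freezing a trained forward network and following analytic gradients with respect to its inputs, can be illustrated in a few lines of PyTorch. The tiny untrained stand-in network, the target spectrum and the four-parameter design vector below are placeholders, not the model from the paper.

      import torch
      import torch.nn as nn

      # Stand-in forward model: maps 4 layer thicknesses to a 10-point "spectrum".
      forward_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 10))
      forward_net.eval()                         # pretend it is already trained
      for p in forward_net.parameters():
          p.requires_grad_(False)                # freeze the trained weights

      target = torch.rand(10)                    # desired scattering spectrum (toy)
      design = torch.rand(4, requires_grad=True)     # thicknesses to optimize
      opt = torch.optim.Adam([design], lr=0.05)

      for step in range(200):                    # gradient descent on the *input*
          opt.zero_grad()
          loss = torch.mean((forward_net(design) - target) ** 2)
          loss.backward()                        # analytic gradient w.r.t. design
          opt.step()

      print("final mismatch:", float(loss))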

  15. Neural Networks In Mining Sciences - General Overview And Some Representative Examples

    NASA Astrophysics Data System (ADS)

    Tadeusiewicz, Ryszard

    2015-12-01

    The many difficult problems that must now be addressed in mining sciences make us search for ever newer and more efficient computer tools that can be used to solve them. Among the numerous tools of this type are the neural networks presented in this article, which, although not yet widely used in mining sciences, are certainly worth consideration. Neural networks are a technique belonging to so-called artificial intelligence, and originate from attempts to model the structure and functioning of biological nervous systems. Initially constructed and tested exclusively out of scientific curiosity, as computer models of parts of the human brain, neural networks have become a surprisingly effective calculation tool in many areas: in technology, medicine, economics, and even the social sciences. Unfortunately, they are still relatively rarely used in mining sciences and mining technology. The article is intended to convince readers that neural networks can also be very useful in mining sciences. It contains information on how modern neural networks are built, how they operate and how one can use them. The preliminary discussion presented in this paper can help the reader form an opinion on whether this is a tool with handy properties, useful to him, and what it might be useful for. Of course, the brief introduction to neural networks contained in this paper will not be enough for readers who are convinced by the arguments presented here and want to use neural networks. They will still need a considerable portion of detailed knowledge before they can independently create and build such networks and use them in practice. However, an interested reader who decides to try out the capabilities of neural networks will also find here links to references that will allow him to start exploring neural networks quickly, and then work with this handy tool efficiently. This will be easy, because there are currently quite a few readily available computer programs that allow their user to quickly and effortlessly create artificial neural networks, run them, train them and use them in practice. The key issue is the question of how to use these networks in mining sciences. The fact that this is possible and desirable is shown by the convincing examples included in the second part of this study. From the very rich literature on the various applications of neural networks, we have selected several works that show how and which neural networks are used in the mining industry, and what has been achieved thanks to their use. The review of applications will continue in the next article, already submitted for publication in the journal "Archives of Mining Sciences". Studying these two articles will provide sufficient knowledge for initial guidance in the area of issues considered here.

  16. fMRI evidence for strategic decision-making during resolution of pronoun reference

    PubMed Central

    McMillan, Corey T.; Clark, Robin; Gunawardena, Delani; Ryant, Neville; Grossman, Murray

    2012-01-01

    Pronouns are extraordinarily common in daily language yet little is known about the neural mechanisms that support decisions about pronoun reference. We propose a large-scale neural network for resolving pronoun reference that consists of two components. First, a core language network in peri-Sylvian cortex supports syntactic and semantic resources for interpreting pronoun meaning in sentences. Second, a frontal-parietal network that supports strategic decision-making is recruited to support probabilistic and risk-related components of resolving a pronoun’s referent. In an fMRI study of healthy young adults, we observed activation of left inferior frontal and superior temporal cortex, consistent with a language network. We also observed activation of brain regions not associated with traditional language areas. By manipulating the context of the pronoun, we were able to demonstrate recruitment of dorsolateral prefrontal cortex during probabilistic evaluation of a pronoun’s reference, and orbital frontal activation when a pronoun must adopt a risky referent. Together, these findings are consistent with a two-component model for resolving a pronoun’s reference that includes neuroanatomic regions supporting core linguistic and decision-making mechanisms. PMID:22245014

  17. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    PubMed

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.

  18. Anti-synchronization control of BAM memristive neural networks with multiple proportional delays and stochastic perturbations

    NASA Astrophysics Data System (ADS)

    Wang, Weiping; Yuan, Manman; Luo, Xiong; Liu, Linlin; Zhang, Yao

    2018-01-01

    Proportional delay is a class of unbounded time-varying delay. A class of bidirectional associative memory (BAM) memristive neural networks with multiple proportional delays is considered in this paper. First, we propose the model of BAM memristive neural networks with multiple proportional delays and stochastic perturbations. Furthermore, by choosing suitable nonlinear variable transformations, the BAM memristive neural networks with multiple proportional delays can be transformed into BAM memristive neural networks with constant delays. Based on the drive-response system concept, differential inclusions theory and Lyapunov stability theory, some anti-synchronization criteria are obtained. Finally, the effectiveness of the proposed criteria is demonstrated through numerical examples.

  19. A biologically inspired neural network for dynamic programming.

    PubMed

    Francelin Romero, R A; Kacpryzk, J; Gomide, F

    2001-12-01

    An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural network based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works, including shortest-path and fuzzy decision-making problems.

  20. Blood glucose prediction using neural network

    NASA Astrophysics Data System (ADS)

    Soh, Chit Siang; Zhang, Xiqin; Chen, Jianhong; Raveendran, P.; Soh, Phey Hong; Yeo, Joon Hock

    2008-02-01

    We used a neural network for blood glucose level determination in this study. The data set used in this study was collected using a non-invasive blood glucose monitoring system with six laser diodes, each operating at a distinct near-infrared wavelength between 1500 nm and 1800 nm. The neural network is specifically used to determine the blood glucose level of one individual who participated in an oral glucose tolerance test (OGTT) session. Partial least squares regression is also used for blood glucose level determination for the purpose of comparison with the neural network model. The neural network model performs better in the prediction of blood glucose level as compared with the partial least squares model.
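
    The comparison made above, a neural network versus partial least squares regression on six-wavelength near-infrared readings, can be mocked up as follows with scikit-learn. The simulated absorbance matrix and glucose values are placeholders for the real OGTT measurements.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.normal(size=(120, 6))                  # six NIR channels (illustrative)
      y = 5 + X @ rng.normal(size=6) + 0.3 * rng.normal(size=120)   # "glucose level"

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
      mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                         random_state=0).fit(X_tr, y_tr)

      print("PLS R^2:", round(pls.score(X_te, y_te), 3))
      print("MLP R^2:", round(mlp.score(X_te, y_te), 3))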

  1. Neural network for solving convex quadratic bilevel programming problems.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPP, the model has the least number of state variables and simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like method, the limit equilibrium points sequence of the proposed neural networks can approximately converge to an optimal solution of CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Typing mineral deposits using their associated rocks, grades and tonnages using a probabilistic neural network

    USGS Publications Warehouse

    Singer, D.A.

    2006-01-01

    A probabilistic neural network is employed to classify 1610 mineral deposits into 18 types using tonnage, average Cu, Mo, Ag, Au, Zn, and Pb grades, and six generalized rock types. The purpose is to examine whether neural networks might serve for integrating geoscience information available in large mineral databases to classify sites by deposit type. Successful classifications of 805 deposits not used in training - 87% with grouped porphyry copper deposits - and the nature of misclassifications demonstrate the power of probabilistic neural networks and the value of quantitative mineral-deposit models. The results also suggest that neural networks can classify deposits as well as experienced economic geologists. © International Association for Mathematical Geology 2006.
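
    A probabilistic neural network of the kind used here is essentially a Parzen-window classifier: each training example contributes a Gaussian kernel to its class density, and a new site is assigned to the class with the largest summed kernel response. A minimal sketch with made-up two-dimensional data (not the grade-and-tonnage database) follows.

      import numpy as np

      def pnn_predict(X_train, y_train, X_new, sigma=0.5):
          """Classify each row of X_new with a Gaussian Parzen-window PNN."""
          classes = np.unique(y_train)
          preds = []
          for x in X_new:
              d2 = np.sum((X_train - x) ** 2, axis=1)
              kernel = np.exp(-d2 / (2.0 * sigma ** 2))
              # Average kernel response per class (the "summation layer").
              scores = [kernel[y_train == c].mean() for c in classes]
              preds.append(classes[int(np.argmax(scores))])
          return np.array(preds)

      # Toy usage: two clusters standing in for two deposit types.
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
      y = np.array([0] * 30 + [1] * 30)
      print(pnn_predict(X, y, np.array([[0.2, 0.1], [3.8, 4.2]])))  # -> [0 1]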

  3. Adaptive exponential synchronization of complex-valued Cohen-Grossberg neural networks with known and unknown parameters.

    PubMed

    Hu, Jin; Zeng, Chunna

    2017-02-01

    The complex-valued Cohen-Grossberg neural network is a special kind of complex-valued neural network. In this paper, the synchronization problem of a class of complex-valued Cohen-Grossberg neural networks with known and unknown parameters is investigated. By using Lyapunov functionals and the adaptive control method based on parameter identification, some adaptive feedback schemes are proposed to achieve synchronization exponentially between the drive and response systems. The results obtained in this paper have extended and improved some previous works on adaptive synchronization of Cohen-Grossberg neural networks. Finally, two numerical examples are given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Stability analysis of fractional-order Hopfield neural networks with time delays.

    PubMed

    Wang, Hu; Yu, Yongguang; Wen, Guoguang

    2014-07-01

    This paper investigates the stability for fractional-order Hopfield neural networks with time delays. Firstly, the fractional-order Hopfield neural networks with hub structure and time delays are studied. Some sufficient conditions for stability of the systems are obtained. Next, two fractional-order Hopfield neural networks with different ring structures and time delays are developed. By studying the developed neural networks, the corresponding sufficient conditions for stability of the systems are also derived. It is shown that the stability conditions are independent of time delays. Finally, numerical simulations are given to illustrate the effectiveness of the theoretical results obtained in this paper. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Tracking the Reorganization of Module Structure in Time-Varying Weighted Brain Functional Connectivity Networks.

    PubMed

    Schmidt, Christoph; Piper, Diana; Pester, Britta; Mierau, Andreas; Witte, Herbert

    2018-05-01

    Identification of module structure in brain functional networks is a promising way to obtain novel insights into neural information processing, as modules correspond to delineated brain regions in which interactions are strongly increased. Tracking of network modules in time-varying brain functional networks is not yet commonly considered in neuroscience despite its potential for gaining an understanding of the time evolution of functional interaction patterns and associated changing degrees of functional segregation and integration. We introduce a general computational framework for extracting consensus partitions from defined time windows in sequences of weighted directed edge-complete networks and show how the temporal reorganization of the module structure can be tracked and visualized. Part of the framework is a new approach for computing edge weight thresholds for individual networks based on multiobjective optimization of module structure quality criteria as well as an approach for matching modules across time steps. By testing our framework using synthetic network sequences and applying it to brain functional networks computed from electroencephalographic recordings of healthy subjects that were exposed to a major balance perturbation, we demonstrate the framework's potential for gaining meaningful insights into dynamic brain function in the form of evolving network modules. The precise chronology of the neural processing inferred with our framework and its interpretation helps to improve the currently incomplete understanding of the cortical contribution for the compensation of such balance perturbations.

  6. High variation subarctic topsoil pollutant concentration prediction using neural network residual kriging

    NASA Astrophysics Data System (ADS)

    Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Subbotina, I. E.; Shichkin, A. V.; Sergeeva, M. V.; Lvova, O. A.

    2017-06-01

    The work deals with the application of neural network residual kriging (NNRK) to the spatial prediction of an abnormally distributed soil pollutant (Cr). It is known that the combination of geostatistical interpolation approaches (kriging) and neural networks leads to significantly better prediction accuracy and productivity. Generalized regression neural networks and multilayer perceptrons are classes of neural networks widely used for continuous function mapping. Each network has its own pros and cons; however, both demonstrated fast training and good mapping possibilities. In the work, we examined and compared two combined techniques: generalized regression neural network residual kriging (GRNNRK) and multilayer perceptron residual kriging (MLPRK). The case study is based on real data sets on surface contamination by chromium at a particular location in the subarctic city of Novy Urengoy, Russia, obtained during a previously conducted screening. The proposed models have been built, implemented and validated using the ArcGIS and MATLAB environments. The network structures were chosen during a computer simulation based on minimization of the RMSE. MLPRK showed the best predictive accuracy compared to the geostatistical approach (kriging) and even to GRNNRK.
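
    The residual-kriging idea is a two-stage model: a neural network captures the large-scale trend, and the spatially correlated residuals are then interpolated and added back. The sketch below uses scikit-learn's Gaussian process regressor as a stand-in for the kriging step (Gaussian-process regression is closely related to simple kriging); the random coordinates and concentrations are placeholders for the field data.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      coords = rng.uniform(0, 10, size=(200, 2))       # sampling locations (toy)
      conc = np.sin(coords[:, 0]) + 0.1 * coords[:, 1] + rng.normal(0, 0.1, 200)

      # Stage 1: neural network models the broad spatial trend.
      nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
      nn.fit(coords, conc)
      residuals = conc - nn.predict(coords)

      # Stage 2: interpolate the residuals (kriging stand-in).
      gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01))
      gp.fit(coords, residuals)

      def predict(points):
          """Combined prediction: NN trend plus interpolated residual."""
          return nn.predict(points) + gp.predict(points)

      print(predict(np.array([[5.0, 5.0]])))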

  7. Pelvic artery calcification detection on CT scans using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Lu, Le; Yao, Jianhua; Bagheri, Mohammadhadi; Summers, Ronald M.

    2017-03-01

    Artery calcification is observed commonly in elderly patients, especially in patients with chronic kidney disease, and may affect coronary, carotid and peripheral arteries. Vascular calcification has been associated with many clinical outcomes. Manual identification of calcification in CT scans requires substantial expert interaction, which makes it time-consuming and infeasible for large-scale studies. Many works have been proposed for coronary artery calcification detection in cardiac CT scans. In these works, coronary artery extraction is commonly required for calcification detection. However, there are few works on abdominal or pelvic artery calcification detection. In this work, we present a method for automatic pelvic artery calcification detection on CT scans. This method uses the recent advanced faster region-based convolutional neural network (R-CNN) to directly identify artery calcification without the need for artery extraction, since pelvic artery extraction itself is challenging. Our method first generates category-independent region proposals for each slice of the input CT scan using region proposal networks (RPN). Then, each region proposal is jointly classified and refined by a softmax classifier and a bounding box regressor. We applied the detection method to 500 images from 20 CT scans of patients for evaluation. The detection system achieved a 77.4% average precision and an 85% sensitivity at 1 false positive per image.

  8. Artificial Neural Network Analysis System

    DTIC Science & Technology

    2001-02-27

    [Report documentation form residue; no abstract is available in this record. Recoverable metadata: title "Artificial Neural Network Analysis System"; author Powell, Bruce C; company name beginning "Atlantic" (truncated in the source); contract no. DASG60-00-M-0201; purchase request "Foot in the Door-01"; reporting period 28-10-2000 to 27-02-2001.]

  9. An Evaluation of Artificial Neural Network Modeling for Manpower Analysis

    DTIC Science & Technology

    1993-09-01

    [Title-page and report documentation residue; no abstract is available in this record. Recoverable metadata: Naval Postgraduate School, Monterey, California, thesis, September 1993; title "An Evaluation of Artificial Neural Network Modeling for Manpower Analysis"; author Brian J. Byrne, Captain, United States Marine Corps; approved for public release, distribution unlimited.]

  10. An Artificial Neural Network Control System for Spacecraft Attitude Stabilization

    DTIC Science & Technology

    1990-06-01

    [Title-page and report documentation residue; no abstract is available in this record. Recoverable metadata: Naval Postgraduate School, Monterey, California, thesis, June 1990; title "An Artificial Neural Network Control System for Spacecraft Attitude Stabilization"; approved for public release, distribution unlimited.]

  11. Training product unit neural networks with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.

    1991-01-01

    The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined: product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.
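
    For orientation, a product unit computes a weighted product of its inputs raised to learned exponents rather than a weighted sum, and a genetic algorithm can search the exponent space directly. The toy sketch below is entirely illustrative (not the authors' CMOS transistor-width model): it evolves a single product unit to fit y = x1 * x2^2.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.uniform(0.5, 2.0, size=(100, 2))
      y = X[:, 0] * X[:, 1] ** 2                   # target: a pure product law

      def product_unit(w, X):
          """One product unit: prod_i x_i ** w_i."""
          return np.prod(X ** w, axis=1)

      def fitness(w):
          return -np.mean((product_unit(w, X) - y) ** 2)   # higher is better

      # Minimal genetic algorithm: truncation selection plus Gaussian mutation.
      pop = rng.normal(0, 1, size=(40, 2))
      for generation in range(100):
          scores = np.array([fitness(w) for w in pop])
          parents = pop[np.argsort(scores)[-10:]]          # keep the best 10
          children = parents[rng.integers(0, 10, size=30)] + rng.normal(0, 0.1, (30, 2))
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(w) for w in pop])]
      print("evolved exponents:", np.round(best, 2))       # typically close to [1, 2]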

  12. Embedding speech into virtual realities

    NASA Technical Reports Server (NTRS)

    Bohn, Christian-Arved; Krueger, Wolfgang

    1993-01-01

    In this work a speaker-independent speech recognition system is presented which is suitable for implementation in Virtual Reality applications. The use of an artificial neural network in connection with a special compression of the acoustic input leads to a system which is robust, fast, easy to use and needs no additional hardware besides common VR equipment.

  13. Diameter Distributions of Longleaf Pine Plantations-A Neural Network Approach

    Treesearch

    Daniel J. Leduc; Thomas G. Matney; V. Clark Baldwin

    1999-01-01

    The distribution of trees into diameter classes in longleaf pine (Pinus palustris Mill.) plantations does not tend to produce the smooth distributions common to other southern pines. While these distributions are sometimes unimodal, they are frequently bi- or even tri-modal and for this reason may not be easily modeled with traditional diameter...

  14. Intelligent Tutoring Systems: Formalization as Automata and Interface Design Using Neural Networks

    ERIC Educational Resources Information Center

    Curilem, S. Gloria; Barbosa, Andrea R.; de Azevedo, Fernando M.

    2007-01-01

    This article proposes a mathematical model of Intelligent Tutoring Systems (ITS), based on observations of the behaviour of these systems. One of the most important problems of pedagogical software is to establish a common language between the knowledge areas involved in their development, basically pedagogical, computing and domain areas. A…

  15. Multiscale Modeling of Gene-Behavior Associations in an Artificial Neural Network Model of Cognitive Development

    ERIC Educational Resources Information Center

    Thomas, Michael S. C.; Forrester, Neil A.; Ronald, Angelica

    2016-01-01

    In the multidisciplinary field of developmental cognitive neuroscience, statistical associations between levels of description play an increasingly important role. One example of such associations is the observation of correlations between relatively common gene variants and individual differences in behavior. It is perhaps surprising that such…

  16. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    PubMed

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by neural network approximation of the function over this space, provides a more precise approximation of the function than approximation with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
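
    The procedure argued for above (project the data onto a lower-dimensional space estimated from a sparse sample, then train the approximator there) can be sketched with PCA as the projection. The synthetic function, dimensions and sample sizes below are arbitrary illustrative choices.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # High-dimensional data that actually lives on a 5-dimensional manifold.
      latent = rng.normal(size=(2000, 5))
      X = latent @ rng.normal(size=(5, 100))             # embed into 100 dimensions
      y = np.sin(latent[:, 0]) + latent[:, 1] ** 2        # function of the manifold

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # Projection estimated from a sparse subsample, as suggested in the abstract.
      pca = PCA(n_components=5).fit(X_tr[:200])
      mlp_low = MLPRegressor(hidden_layer_sizes=(50,), max_iter=3000, random_state=0)
      mlp_low.fit(pca.transform(X_tr), y_tr)

      mlp_high = MLPRegressor(hidden_layer_sizes=(50,), max_iter=3000, random_state=0)
      mlp_high.fit(X_tr, y_tr)

      print("R^2 after projection:", round(mlp_low.score(pca.transform(X_te), y_te), 3))
      print("R^2 in original space:", round(mlp_high.score(X_te, y_te), 3))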

  17. Flight control with adaptive critic neural network

    NASA Astrophysics Data System (ADS)

    Han, Dongchen

    2001-10-01

    In this dissertation, the adaptive critic neural network technique is applied to solve complex nonlinear system control problems. Based on dynamic programming, the adaptive critic neural network can embed the optimal solution into a neural network. Though trained off-line, the neural network forms a real-time feedback controller. Because of its general interpolation properties, the neurocontroller has inherent robustness. The problems solved here are agile missile control for the U.S. Air Force and a midcourse guidance law for the U.S. Navy. In the first three papers, the neural network was used to control an air-to-air agile missile to implement a minimum-time heading reversal in a vertical plane under the following conditions: a system without constraints, a system with a control inequality constraint, and a system with a state inequality constraint. While the agile missile is a one-dimensional problem, the midcourse guidance law is the first test-bed for a multi-dimensional problem. In the fourth paper, the neurocontroller is synthesized to guide a surface-to-air missile to a fixed final condition, and to a flexible final condition from a variable initial condition. In order to evaluate the adaptive critic neural network approach, the numerical solutions for these cases are also obtained by solving the two-point boundary value problem with a shooting method. All of the results showed that the adaptive critic neural network could solve complex nonlinear system control problems.

  18. Performance of an artificial neural network for vertical root fracture detection: an ex vivo study.

    PubMed

    Kositbowornchai, Suwadee; Plermkamon, Supattra; Tangkosol, Tawan

    2013-04-01

    To develop an artificial neural network for vertical root fracture detection. A probabilistic neural network design was used to clarify whether a tooth root was sound or had a vertical root fracture. Two hundred images (50 sound and 150 with vertical root fractures) derived from digital radiography--used to train and test the artificial neural network--were divided into three groups according to the number of training and test data sets: 80/120, 105/95 and 130/70, respectively. Both the training and the test data were evaluated using grey-scale data per line passing through the root. These data were normalized to reduce the grey-scale variance and fed as input data to the neural network. The variance of function in the recognition data was calculated between 0 and 1 to select the best performance of the neural network. The performance of the neural network was evaluated using a diagnostic test. After testing data under several variances of function, we found the highest sensitivity (98%), specificity (90.5%) and accuracy (95.7%) occurred in Group three, for which the variance of function in the recognition data was between 0.025 and 0.005. The neural network designed in this study has sufficient sensitivity, specificity and accuracy to be a model for vertical root fracture detection. © 2012 John Wiley & Sons A/S.
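
    The diagnostic-test figures quoted above follow from a confusion matrix in the usual way. The helper below illustrates the computation with hypothetical counts chosen only to be consistent with the reported percentages; they are not the study's actual confusion matrix.

      def diagnostic_metrics(tp, fn, tn, fp):
          """Sensitivity, specificity and accuracy from confusion-matrix counts."""
          sensitivity = tp / (tp + fn)          # fraction of fractured roots detected
          specificity = tn / (tn + fp)          # fraction of sound roots recognized
          accuracy = (tp + tn) / (tp + fn + tn + fp)
          return sensitivity, specificity, accuracy

      # Hypothetical counts for a 70-image test set (assumed split, for illustration).
      sens, spec, acc = diagnostic_metrics(tp=48, fn=1, tn=19, fp=2)
      print(f"sensitivity={sens:.3f}, specificity={spec:.3f}, accuracy={acc:.3f}")
      # -> sensitivity=0.980, specificity=0.905, accuracy=0.957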

  19. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
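
    The core idea of PSO-BP, using particle swarm optimization to find good network weights that then seed back-propagation fine-tuning, can be sketched at a small scale (without the MapReduce layer) as follows. The tiny 4-3-1 network, the toy labels and the swarm settings are placeholders, not the paper's configuration.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 4))
      y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(float)     # toy binary labels

      def forward(w, X):
          """Tiny 4-3-1 network; w is a flat parameter vector of length 19."""
          W1, b1 = w[:12].reshape(4, 3), w[12:15]
          W2, b2 = w[15:18], w[18]
          h = np.tanh(X @ W1 + b1)
          return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

      def loss(w):
          return np.mean((forward(w, X) - y) ** 2)

      # Basic particle swarm over the 19 network parameters.
      n_particles, dim = 30, 19
      pos = rng.normal(0, 1, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      for it in range(100):
          r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = pos + vel
          vals = np.array([loss(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("swarm-optimized loss:", round(float(loss(gbest)), 4))
      # In the PSO-BP scheme, gbest would now seed back-propagation fine-tuning.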

  20. A common network of functional areas for attention and eye movements

    NASA Technical Reports Server (NTRS)

    Corbetta, M.; Akbudak, E.; Conturo, T. E.; Snyder, A. Z.; Ollinger, J. M.; Drury, H. A.; Linenweber, M. R.; Petersen, S. E.; Raichle, M. E.; Van Essen, D. C.; hide

    1998-01-01

    Functional magnetic resonance imaging (fMRI) and surface-based representations of brain activity were used to compare the functional anatomy of two tasks, one involving covert shifts of attention to peripheral visual stimuli, the other involving both attentional and saccadic shifts to the same stimuli. Overlapping regional networks in parietal, frontal, and temporal lobes were active in both tasks. This anatomical overlap is consistent with the hypothesis that attentional and oculomotor processes are tightly integrated at the neural level.
