Science.gov

Sample records for multilayer neural networks

  1. Target detection using multilayer feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Scherf, Alan V.; Scott, Peter A.

    1991-08-01

    Multilayer feedforward neural networks have been integrated with conventional image processing techniques to form a hybrid target detection algorithm for use in the F/A-18 FLIR pod advanced air-to-air track-while-scan mode. The network has been trained to detect and localize small targets in infrared imagery. The comparative performance of this target detection technique is evaluated.

  2. Membership generation using multilayer neural network

    NASA Technical Reports Server (NTRS)

    Kim, Jaeseok

    1992-01-01

    There has been intensive research in neural network applications to pattern recognition problems. Particularly, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
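
    As a rough illustration of the idea described above, the following Python sketch (a toy example of mine, not the report's code) trains a small sigmoid-output network with plain backpropagation and then reads its trained output activations as per-class membership values; the data, architecture, and learning rate are all assumptions.

      # Illustrative sketch (not the report's code): a sigmoid-output MLP whose
      # trained output activations are read as fuzzy membership values per class.
      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      # Toy 2-class data: two overlapping 1-D feature clusters.
      X = np.concatenate([rng.normal(-1.0, 0.7, 200), rng.normal(1.0, 0.7, 200)])[:, None]
      T = np.concatenate([np.tile([1.0, 0.0], (200, 1)), np.tile([0.0, 1.0], (200, 1))])

      # One hidden layer, trained with plain backpropagation on squared error.
      W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
      W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)
      lr = 0.1
      for _ in range(2000):
          H = sigmoid(X @ W1 + b1)          # hidden activations
          Y = sigmoid(H @ W2 + b2)          # output activations = candidate memberships
          dY = (Y - T) * Y * (1 - Y)        # output delta (squared error, sigmoid)
          dH = (dY @ W2.T) * H * (1 - H)    # hidden delta
          W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(0)
          W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(0)

      # After convergence, the sigmoid outputs act as class-membership functions.
      x_new = np.array([[0.2]])
      membership = sigmoid(sigmoid(x_new @ W1 + b1) @ W2 + b2)
      print("membership in [class 0, class 1]:", membership.ravel())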

  3. Extrapolation limitations of multilayer feedforward neural networks

    NASA Technical Reports Server (NTRS)

    Haley, Pamela J.; Soloway, Donald

    1992-01-01

    The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.

  4. Blur identification by multilayer neural network based on multivalued neurons.

    PubMed

    Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T

    2008-05-01

    A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinctive features. Its backpropagation learning algorithm is derivative-free. The functionality of MLMVN is superior to that of the traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones. PMID:18467216
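
    For readers unfamiliar with multi-valued neurons, the Python sketch below illustrates the discrete activation on which MLMVN is built: inputs, weights, and outputs lie on the unit circle in the complex plane, and the activation maps the weighted sum onto one of k sectors, which is why no derivative is needed. The value of k, the weights, and the inputs are arbitrary assumptions.

      # Illustrative sketch (assumed example values, not the authors' code): the
      # discrete activation of a k-valued multi-valued neuron (MVN).
      import numpy as np

      def mvn_activation(z, k):
          """Map a complex weighted sum z to the k-th root of unity of its sector."""
          ang = np.angle(z) % (2 * np.pi)            # argument in [0, 2*pi)
          j = np.floor(k * ang / (2 * np.pi))        # sector index 0..k-1
          return np.exp(1j * 2 * np.pi * j / k)

      k = 8                                          # 8-valued logic
      rng = np.random.default_rng(1)
      inputs = np.exp(1j * 2 * np.pi * rng.integers(0, k, size=5) / k)
      weights = rng.normal(size=5) + 1j * rng.normal(size=5)
      bias = 0.1 + 0.1j
      z = bias + np.dot(weights, inputs)
      print("weighted sum:", z, "-> output:", mvn_activation(z, k))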

  5. Incremental communication for multilayer neural networks: error analysis.

    PubMed

    Ghorbani, A A; Bhavsar, V C

    1998-01-01

    Artificial neural networks (ANNs) involve a large amount of internode communications. To reduce the communication cost as well as the time of learning process in ANNs, we earlier proposed (1995) an incremental internode communication method. In the incremental communication method, instead of communicating the full magnitude of the output value of a node, only the increment or decrement to its previous value is sent to a communication link. In this paper, the effects of the limited precision incremental communication method on the convergence behavior and performance of multilayer neural networks are investigated. The nonlinear aspects of representing the incremental values with reduced (limited) precision for the commonly used error backpropagation training algorithm are analyzed. It is shown that the nonlinear effect of small perturbations in the input(s)/output of a node does not cause instability. The analysis is supported by simulation studies of two problems. The simulation results demonstrate that the limited precision errors are bounded and do not seriously affect the convergence of multilayer neural networks. PMID:18252431
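
    The following Python sketch (an assumption about the general scheme, not the authors' implementation) illustrates incremental communication with limited precision: a node transmits only the quantized increment of its output, the receiver accumulates the increments, and the reconstruction error stays bounded by the quantization step.

      # Illustrative sketch (assumed scheme): transmit quantized increments of a
      # node's output instead of its full-magnitude value.
      import numpy as np

      def quantize(x, step):
          return step * np.round(x / step)

      rng = np.random.default_rng(2)
      true_outputs = np.cumsum(rng.normal(0, 0.05, 500)) + 0.5   # a drifting node output
      step = 2 ** -8                                             # limited-precision increments

      sent_prev = 0.0        # sender's record of the last transmitted value
      recv_value = 0.0       # receiver's reconstruction
      errors = []
      for y in true_outputs:
          increment = quantize(y - sent_prev, step)   # transmit only this small word
          sent_prev += increment
          recv_value += increment
          errors.append(abs(y - recv_value))

      print("max reconstruction error:", max(errors), "<= step:", step)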

  6. Learning with regularizers in multilayer neural networks

    NASA Astrophysics Data System (ADS)

    Saad, David; Rattray, Magnus

    1998-02-01

    We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labeled by a two-layer teacher network with an arbitrary number of hidden units that may be corrupted by Gaussian output noise. We examine the effect of weight decay regularization on the dynamical evolution of the order parameters and generalization error in various phases of the learning process, in both noiseless and noisy scenarios.
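
    A minimal toy version of this on-line learning scenario is sketched below in Python; it uses tanh hidden units (the analysis typically uses the error function), a Gaussian-noise teacher, and an explicit weight-decay term in each gradient step. The dimensions, learning rate, and decay strength are assumptions chosen only for illustration.

      # Illustrative sketch (a toy, not the paper's calculation): on-line gradient
      # descent for a two-layer "soft committee" student learning a noisy teacher,
      # with a weight-decay term added to each update.
      import numpy as np

      rng = np.random.default_rng(3)
      N, K, M = 100, 3, 3                    # input dim, student and teacher hidden units
      g = np.tanh                            # hidden activation (erf in the analysis)

      B = rng.normal(size=(M, N))            # teacher weights
      W = rng.normal(scale=0.1, size=(K, N)) # student weights
      eta, lam, noise = 0.05 / N, 1e-4, 0.1

      for t in range(20000):
          x = rng.normal(size=N)
          y_teacher = g(B @ x).sum() + noise * rng.normal()
          h = W @ x
          delta = g(h).sum() - y_teacher
          grad = delta * (1 - np.tanh(h) ** 2)[:, None] * x[None, :]
          W -= eta * (grad + lam * W)        # gradient step plus weight decay

      print("generalization proxy:",
            np.mean([(g(W @ z).sum() - g(B @ z).sum()) ** 2
                     for z in rng.normal(size=(200, N))]) / 2)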

  7. Multilayer neural networks with extensively many hidden units.

    PubMed

    Rosen-Zvi, M; Engel, A; Kanter, I

    2001-08-13

    The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behavior is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones. PMID:11497920

  8. Multi-Layer and Recursive Neural Networks for Metagenomic Classification.

    PubMed

    Ditzler, Gregory; Polikar, Robi; Rosen, Gail

    2015-09-01

    Recent advances in machine learning, specifically in deep learning with neural networks, have made a profound impact on fields such as natural language processing, image classification, and language modeling; however, the feasibility and potential benefits of these approaches for metagenomic data analysis have been largely under-explored. Deep learning exploits many layers of learning nonlinear feature representations, typically in an unsupervised fashion, and recent results have shown outstanding generalization performance on previously unseen data. Furthermore, some deep learning methods can also represent the structure in a data set. Consequently, deep learning and neural networks may prove to be an appropriate approach for metagenomic data. To determine whether such approaches are indeed appropriate for metagenomics, we experiment with two deep learning methods: i) a deep belief network, and ii) a recursive neural network, the latter of which provides a tree representing the structure of the data. We compare these approaches to the standard multi-layer perceptron, which has been well-established in the machine learning community as a powerful prediction algorithm, though its presence is largely missing in the metagenomics literature. We find that traditional neural networks can be quite powerful classifiers on metagenomic data compared to baseline methods, such as random forests. On the other hand, while the deep learning approaches did not result in improvements to the classification accuracy, they do provide the ability to learn hierarchical representations of a data set that standard classification methods do not allow. Our goal in this effort is not to determine the best algorithm in terms of accuracy, as that depends on the specific application, but rather to highlight the benefits and drawbacks of each of the approaches we discuss and provide insight into how they can be improved for predictive metagenomic analysis. PMID:26316190

  9. Inversion of Self Potential Anomalies with Multilayer Perceptron Neural Networks

    NASA Astrophysics Data System (ADS)

    Kaftan, Ilknur; Sındırgı, Petek; Akdemir, Özer

    2014-08-01

    This study investigates the inverse solution for a buried polarized sphere-shaped body using the self-potential method via multilayer perceptron neural networks (MLPNN). The polarization angle (α), depth to the centre of the sphere (h), electrical dipole moment (K) and the zero distance from the origin (x0) were estimated. For testing the success of the MLPNN for the sphere model, the parameters were also estimated by the traditional Damped Least Squares (Levenberg-Marquardt) inversion technique (DLS). The MLPNN was first tested on a synthetic example. The performance of the method was also tested for two S/N ratios (5% and 10%) by adding noise to the same synthetic data; the model parameters estimated with MLPNN and the DLS method are satisfactory. The MLPNN was also applied to a field data example from the Urla district of İzmir, Turkey, with two cross-sections evaluated by both MLPNN and DLS, and the two methods showed good agreement.
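
    To make the setting concrete, the sketch below generates synthetic self-potential profiles for a buried polarized sphere using the commonly quoted sphere formula (shape factor 1.5); such noisy profiles, paired with their parameters, are the kind of training data an MLPNN could be taught to invert for (alpha, h, K, x0). The parameter ranges and noise level are assumptions, not the authors' values.

      # Illustrative sketch (hedged: not the authors' exact forward model or ranges):
      # synthetic SP profiles of a polarized sphere as MLPNN training data.
      import numpy as np

      def sp_sphere(x, alpha, h, K, x0, q=1.5):
          """Commonly used self-potential anomaly of a buried polarized sphere."""
          dx = x - x0
          return K * (dx * np.cos(alpha) + h * np.sin(alpha)) / (dx**2 + h**2) ** q

      rng = np.random.default_rng(4)
      x = np.linspace(-100.0, 100.0, 101)          # profile positions (m)
      samples, labels = [], []
      for _ in range(1000):                        # training set for the inverse mapping
          alpha = rng.uniform(0.1, np.pi / 2)      # polarization angle (rad)
          h = rng.uniform(5.0, 40.0)               # depth to centre (m)
          K = rng.uniform(-5000.0, -500.0)         # electric dipole moment term
          x0 = rng.uniform(-20.0, 20.0)            # origin offset (m)
          v = sp_sphere(x, alpha, h, K, x0)
          v = v + 0.05 * np.abs(v).max() * rng.normal(size=x.size)   # ~5% noise
          samples.append(v); labels.append([alpha, h, K, x0])

      X_train, y_train = np.array(samples), np.array(labels)
      print(X_train.shape, y_train.shape)          # inputs and targets for the MLPNN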

  10. Optical proximity correction using a multilayer perceptron neural network

    NASA Astrophysics Data System (ADS)

    Luo, Rui

    2013-07-01

    Optical proximity correction (OPC) is one of the resolution enhancement techniques (RETs) in optical lithography, where the mask pattern is modified to improve the output pattern fidelity. Algorithms are needed to generate the modified mask pattern automatically and efficiently. In this paper, a multilayer perceptron (MLP) neural network (NN) is used to synthesize the mask pattern. We employ the pixel-based approach in this work. The MLP takes the pixel values of the desired output wafer pattern as input, and outputs the optimal mask pixel values. The MLP is trained with the backpropagation algorithm, with a training set retrieved from the desired output pattern, and the optimal mask pattern obtained by the model-based method. After training, the MLP is able to generate the optimal mask pattern non-iteratively with good pattern fidelity.

  11. Robust local stability of multilayer recurrent neural networks.

    PubMed

    Suykens, J K; De Moor, B; Vandewalle, J

    2000-01-01

    In this paper we derive a condition for robust local stability of multilayer recurrent neural networks with two hidden layers. The stability condition follows from linking theory about linearization, robustness analysis of linear systems under nonlinear perturbation, and matrix inequalities. A characterization of the basin of attraction of the origin is given in terms of the level set of a quadratic Lyapunov function. As in NL theory, local stability is imposed around the origin and the apparent basin of attraction is made large by applying the criterion, while the proven basin of attraction is relatively small due to the conservatism of the criterion. Modifying dynamic backpropagation using the new stability condition is discussed and illustrated by simulation examples. PMID:18249754

  12. Parallel multilayer perceptron neural network used for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Garcia-Salgado, Beatriz P.; Ponomaryov, Volodymyr I.; Robles-Gonzalez, Marco A.

    2016-04-01

    This study focuses on time optimization for the classification problem, presenting a comparison of five Artificial Neural Network Multilayer Perceptron (ANN-MLP) architectures. We use the Artificial Neural Network (ANN) because it allows patterns in the data to be recognized in less time. Time and classification accuracy are taken into account together for the comparison. For the time comparison, two computational paradigms are analysed for each ANN-MLP architecture using three schemes. Firstly, sequential programming is applied using a single CPU core. Secondly, parallel programming is employed over a multi-core CPU architecture. Finally, a programming model running on a GPU architecture is implemented. Furthermore, the classification accuracy is compared between the proposed five ANN-MLP architectures and a state-of-the-art Support Vector Machine (SVM) with three classification frames: 50%, 60% and 70% of the data set's observations are randomly selected to train the classifiers. Also, a visual comparison of the classified results is presented. The Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) criteria are also calculated to characterise visual perception. The images employed were acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the Reflective Optics System Imaging Spectrometer (ROSIS) and the Hyperion sensor.

  13. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks

    PubMed Central

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  14. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    PubMed

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  15. Unsupervised classification of neural spikes with a hybrid multilayer artificial neural network.

    PubMed

    García, P; Suárez, C P; Rodríguez, J; Rodríguez, M

    1998-07-01

    The understanding of brain structure and function and its computational style is one of the biggest challenges in both Neuroscience and Neural Computation. In order to reach this, and to test the predictions of neural network modeling, it is necessary to observe the activity of neural populations. In this paper we propose a hybrid modular computational system for the spike classification of multiunit recordings. It works with no knowledge about the waveform, and it consists of two modules: a Preprocessing (Segmentation) module, which performs the detection and centering of spike vectors using programmed computation; and a Processing (Classification) module, which implements the general approach of neural classification: feature extraction, clustering and discrimination, by means of a hybrid unsupervised multilayer artificial neural network (HUMANN). The operations of this artificial neural network on the spike vectors are: (i) compression with a Sanger layer from a 70-point vector to a five-principal-component vector; (ii) analysis of the waveforms by a Kohonen layer; (iii) rejection of electrical noise and overlapping spikes by a previously unreported artificial neural network named the Tolerance layer; and (iv) finally, labeling of the spikes into spike classes by a Labeling layer. Each layer of the system has a specific unsupervised learning rule that progressively modifies itself until the performance of the layer has been automatically optimized. The procedure showed high sensitivity and specificity, even when working with signals containing four spike types. PMID:10223516
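
    The sketch below mirrors the shape of this pipeline with ordinary stand-ins: PCA in place of the Sanger layer, k-means in place of the Kohonen layer, and a simple distance threshold in place of the Tolerance layer. It is only an illustration of the processing stages on fake waveforms, not HUMANN itself.

      # Illustrative sketch (stand-ins, not HUMANN): compress 70-point spike
      # waveforms to 5 components, cluster them, and reject outliers as noise.
      import numpy as np

      rng = np.random.default_rng(5)

      # Fake detected, centred spike waveforms: 70 samples each, two shapes + noise.
      t = np.linspace(0, 1, 70)
      proto = np.stack([np.exp(-((t - 0.3) / 0.05) ** 2), -np.exp(-((t - 0.5) / 0.08) ** 2)])
      spikes = proto[rng.integers(0, 2, 300)] + 0.1 * rng.normal(size=(300, 70))

      # (i) compression: 70-point waveform -> 5 principal components (PCA via SVD).
      Xc = spikes - spikes.mean(0)
      _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
      feats = Xc @ Vt[:5].T

      # (ii)+(iv) clustering and labelling: plain k-means on the 5-D features.
      centres = feats[[feats[:, 0].argmin(), feats[:, 0].argmax()]]
      for _ in range(20):
          labels = np.argmin(((feats[:, None] - centres) ** 2).sum(-1), axis=1)
          centres = np.stack([feats[labels == k].mean(0) if (labels == k).any() else centres[k]
                              for k in range(2)])

      # (iii) tolerance: reject waveforms far from every cluster centre (noise/overlaps).
      dist = np.min(np.linalg.norm(feats[:, None] - centres, axis=-1), axis=1)
      accepted = dist < 2.0 * dist.mean()
      print("accepted spikes:", accepted.sum(), "rejected:", (~accepted).sum())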

  16. Classification of fuels using multilayer perceptron neural networks

    NASA Astrophysics Data System (ADS)

    Ozaki, Sérgio T. R.; Wiziack, Nadja K. L.; Paterno, Leonardo G.; Fonseca, Fernando J.

    2009-05-01

    Electrical impedance data obtained with an array of conducting polymer chemical sensors was used by a neural network (ANN) to classify fuel adulteration. Real samples were classified with accuracy greater than 90% in two groups: approved and adulterated.

  17. Classification of fuels using multilayer perceptron neural networks

    SciTech Connect

    Ozaki, Sergio T. R.; Wiziack, Nadja K. L.; Paterno, Leonardo G.; Fonseca, Fernando J.

    2009-05-23

    Electrical impedance data obtained with an array of conducting polymer chemical sensors was used by a neural network (ANN) to classify fuel adulteration. Real samples were classified with accuracy greater than 90% in two groups: approved and adulterated.

  18. When are two multi-layer cellular neural networks the same?

    PubMed

    Ban, Jung-Chao; Chang, Chih-Hung

    2016-07-01

    This paper aims to characterize whether a multi-layer cellular neural network is of deep architecture; namely, when can an n-layer cellular neural network be replaced by an m-layer cellular neural network for m < n? A characterization of when two multi-layer cellular neural networks are, in this sense, the same network is revealed. PMID:27085113

  19. On the capacity of multilayer neural networks trained with backpropagation.

    PubMed

    Miranda, E N

    2000-08-01

    The capacity of a layered neural network for learning hetero-associations is studied numerically as a function of the number M of hidden neurons. We find that there is a sharp change in the learning ability of the network as the number of hetero-associations increases. This fact allows us to define a maximum capacity C for a given architecture. It is found that C grows logarithmically with M. PMID:11052415

  20. Neural networks and chaos: Construction, evaluation of chaotic networks, and prediction of chaos with multilayer feedforward networks

    NASA Astrophysics Data System (ADS)

    Bahi, Jacques M.; Couchot, Jean-François; Guyeux, Christophe; Salomon, Michel

    2012-03-01

    Many research works deal with chaotic neural networks for various fields of application. Unfortunately, up to now, these networks are usually claimed to be chaotic without any mathematical proof. The purpose of this paper is to establish, based on a rigorous theoretical framework, an equivalence between chaotic iterations according to Devaney and a particular class of neural networks. On the one hand, we show how to build such a network; on the other hand, we provide a method to check whether a neural network is chaotic. Finally, the ability of classical feedforward multilayer perceptrons to learn sets of data obtained from a dynamical system is considered. Various Boolean functions are iterated on finite states, and the iterations of some of them are proven to be chaotic as defined by Devaney. In that context, important differences appear in the training process, establishing with various neural networks that chaotic behaviors are far more difficult to learn.

  1. Weight-decay induced phase transitions in multilayer neural networks

    NASA Astrophysics Data System (ADS)

    Ahr, M.; Biehl, M.; Schlösser, E.

    1999-07-01

    We investigate layered neural networks with differentiable activation function and student vectors without normalization constraint by means of equilibrium statistical physics. We consider the learning of perfectly realizable rules and find that the length of student vectors becomes infinite, unless a proper weight decay term is added to the energy. Then, the system undergoes a first-order phase transition between states with very long student vectors and states where the lengths are comparable to those of the teacher vectors. Additionally, in both configurations there is a phase transition between a specialized and an unspecialized phase. An anti-specialized phase with long student vectors exists in networks with a small number of hidden units.

  2. Geomagnetic storms prediction from InterMagnetic Observatories data using the Multilayer Perceptron neural network

    NASA Astrophysics Data System (ADS)

    Ouadfeul, S.; Aliouane, L.; Tourtchine, V.

    2013-09-01

    In this paper, an attempt at geomagnetic storm prediction is implemented by analyzing International Real-Time Magnetic Observatory Network data using an Artificial Neural Network (ANN). The implemented method is based on the prediction of the future horizontal geomagnetic field component using a Multilayer Perceptron (MLP) neural network model. The input is time and the outputs are the X and Y magnetic field components. Application to geomagnetic data of May 2002 shows that the implemented ANN model can greatly help geomagnetic storm prediction.

  3. Incorporation of liquid-crystal light valve nonlinearities in optical multilayer neural networks.

    PubMed

    Moerland, P D; Fiesler, E; Saxena, I

    1996-09-10

    Sigmoidlike activation functions, as available in analog hardware, differ in various ways from the standard sigmoidal function because they are usually asymmetric, truncated, and have a nonstandard gain. We present an adaptation of the backpropagation learning rule to compensate for these nonstandard sigmoids. This method is applied to multilayer neural networks with all-optical forward propagation and liquid-crystal light valves (LCLV) as optical thresholding devices. The results of simulations of a backpropagation neural network with five different LCLV response curves as activation functions are presented. Although LCLV's perform poorly with the standard backpropagation algorithm, it is shown that our adapted learning rule performs well with these LCLV curves. PMID:21127522
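
    The sketch below shows one plausible parameterization of such a nonstandard sigmoid (amplitude A, offset B, gain g, truncated input range) together with the matching derivative that an adapted backpropagation rule would use in place of the usual y(1 - y) term; the parameter values are assumptions, not measured LCLV response curves.

      # Illustrative sketch (assumed parameterization, not the authors' LCLV curves):
      # an asymmetric, truncated, non-unit-gain sigmoid and its derivative.
      import numpy as np

      def lclv_sigmoid(x, A=0.8, B=0.1, g=2.5, x_max=3.0):
          """Hardware-like response: output spans [B, A + B], gain g, clipped input."""
          x = np.clip(x, -x_max, x_max)                 # truncation of the input range
          return A / (1.0 + np.exp(-g * x)) + B

      def lclv_sigmoid_deriv(y, A=0.8, B=0.1, g=2.5):
          """Derivative expressed through the output y, for use in the delta rule
          (the clipped region, where the true derivative is zero, is ignored here)."""
          s = (y - B) / A                               # recover the unit sigmoid value
          return g * A * s * (1.0 - s)

      x = np.linspace(-4, 4, 9)
      y = lclv_sigmoid(x)
      print(np.round(y, 3))
      print(np.round(lclv_sigmoid_deriv(y), 3))         # replaces y*(1-y) in backprop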

  4. Multi-layer neural networks for robot control

    NASA Technical Reports Server (NTRS)

    Pourboghrat, Farzad

    1989-01-01

    Two neural learning controller designs for manipulators are considered. The first design is based on a neural inverse-dynamics system. The second is the combination of the first one with a neural adaptive state feedback system. Both types of controllers enable the manipulator to perform any given task very well after a period of training and to do other untrained tasks satisfactorily. The second design also enables the manipulator to compensate for unpredictable perturbations.

  5. Optimal Parameter for the Training of Multilayer Perceptron Neural Networks by Using Hierarchical Genetic Algorithm

    SciTech Connect

    Orozco-Monteagudo, Maykel; Taboada-Crispi, Alberto; Gutierrez-Hernandez, Liliana

    2008-11-06

    This paper deals with the controversial topic of the selection of the parameters of a genetic algorithm, in this case hierarchical, used for training of multilayer perceptron neural networks for the binary classification. The parameters to select are the crossover and mutation probabilities of the control and parametric genes and the permanency percent. The results can be considered as a guide for using this kind of algorithm.

  6. Existence and stability of traveling wave solutions for multilayer cellular neural networks

    NASA Astrophysics Data System (ADS)

    Hsu, Cheng-Hsiung; Lin, Jian-Jhong; Yang, Tzi-Sheng

    2015-08-01

    The purpose of this article is to investigate the existence and stability of traveling wave solutions for one-dimensional multilayer cellular neural networks. We first establish the existence of traveling wave solutions using the truncated technique. Then we study the asymptotic behaviors of solutions for the Cauchy problem of the neural model. Applying two kinds of comparison principles and the weighted energy method, we show that all solutions of the Cauchy problem converge exponentially to the traveling wave solutions provided that the initial data belong to a suitable weighted space.

  7. Application of Multilayer Feedforward Neural Networks to Precipitation Cell-Top Altitude Estimation

    NASA Technical Reports Server (NTRS)

    Spina, Michelle S.; Schwartz, Michael J.; Staelin, David H.; Gasiewski, Albin J.

    1998-01-01

    The use of passive 118-GHz O2 observations of rain cells for precipitation cell-top altitude estimation is demonstrated by using a multilayer feedforward neural network retrieval system. Rain cell observations at 118 GHz were compared with estimates of the cell-top altitude obtained by optical stereoscopy. The observations were made with 2.4-km horizontal spatial resolution by using the Millimeter-wave Temperature Sounder (MTS) scanning spectrometer aboard the NASA ER-2 research aircraft during the Genesis of Atlantic Lows Experiment (GALE) and the COoperative Huntsville Meteorological EXperiment (COHMEX) in 1986. The neural network estimator applied to MTS spectral differences between clouds and nearby clear air yielded an rms discrepancy of 1.76 km for a combined cumulus, mature, and dissipating cell set and 1.44 km for the cumulus-only set. An improvement in rms discrepancy to 1.36 km was achieved by including additional MTS information on the absolute atmospheric temperature profile. An incremental method for training neural networks was developed that yielded robust results, despite the use of as few as 56 training spectra. Comparison of these results with a nonlinear statistical estimator shows that superior results can be obtained with a neural network retrieval system. Imagery of estimated cell-top altitudes was created from 118-GHz spectral imagery gathered during CAMEX, September through October 1993, and from cyclone Oliver, February 7, 1993.

  8. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN. PMID:18263301
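
    The core stochastic-computing trick can be illustrated in a few lines: a value in [0, 1] is carried as a pseudorandom pulse stream, a single AND gate multiplies two such values, and the product is read back as the average pulse occurrence rate. The sketch below is a generic illustration of that idea, not the DMNN hardware model.

      # Illustrative sketch (standard stochastic-computing idea, not the DMNN netlist):
      # multiply two values in [0,1] with one AND gate over pseudorandom pulse streams.
      import numpy as np

      rng = np.random.default_rng(6)

      def pulse_stream(p, n_bits):
          """Bernoulli pulse sequence whose occurrence rate encodes the value p."""
          return rng.random(n_bits) < p

      weight, activation, n_bits = 0.6, 0.75, 4096
      product_stream = pulse_stream(weight, n_bits) & pulse_stream(activation, n_bits)

      estimate = product_stream.mean()                 # average pulse occurrence rate
      print("exact:", weight * activation, "stochastic estimate:", estimate)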

  9. Classification of normal and abnormal electrogastrograms using multilayer feedforward neural networks.

    PubMed

    Lin, Z; Maris, J; Hermans, L; Vandewalle, J; Chen, J D

    1997-05-01

    A neural network approach is proposed for the automated classification of the normal and abnormal EGG. Two learning algorithms, the quasi-Newton and the scaled conjugate gradient method for the multilayer feedforward neural networks (MFNN), are introduced and compared with the error backpropagation algorithm. The configurations of the MFNN are determined by experiment. The raw EGG data, its power spectral data, and its autoregressive moving average (ARMA) modelling parameters are used as the input to the MFNN and compared with each other. Three indexes (the percent correct, sum-squared error and complexity per iteration) are used to evaluate the performance of each learning algorithm. The results show that the scaled conjugate gradient algorithm performs best, in that it is robust and provides a super-linear convergence rate. The power spectral representation and the ARMA modelling parameters of the EGG are found to be better types of the input to the network for this specific application, both yielding a percent correctness of 95% on the test set. Although the results are focused on the classification of the EGG, this paper should provide useful information for the classification of other biomedical signals. PMID:9246852

  10. Analysis of (7)Be behaviour in the air by using a multilayer perceptron neural network.

    PubMed

    Samolov, A; Dragović, S; Daković, M; Bačić, G

    2014-11-01

    A multilayer perceptron artificial neural network (ANN) model for the prediction of the (7)Be behaviour in the air as the function of meteorological parameters was developed. The model was optimized and tested using (7)Be activity concentrations obtained by standard gamma-ray spectrometric analysis of air samples collected in Belgrade (Serbia) during 2009-2011 and meteorological data for the same period. Good correlation (r = 0.91) between experimental values of (7)Be activity concentrations and those predicted by ANN was obtained. The good performance of the model in prediction of (7)Be activity concentrations could provide basis for construction of models which would forecast behaviour of other airborne radionuclides. PMID:25106024

  11. Near-infrared spectroscopic measurements of blood analytes using multi-layer perceptron neural networks.

    PubMed

    Kalamatianos, Dimitrios; Liatsis, Panos; Wellstead, Peter E

    2006-01-01

    Near-infrared (NIR) spectroscopy is being applied to the solution of problems in many areas of biomedical and pharmaceutical research. In this paper we investigate the use of NIR spectroscopy as an analytical tool to quantify concentrations of urea, creatinine, glucose and oxyhemoglobin (HbO2). Measurements have been made in vitro with a portable spectrometer developed in our labs that consists of a two-beam interferometer operating in the range of 800-2300 nm. For the data analysis a pattern recognition philosophy was used, with a preprocessing stage and a multi-layer perceptron (MLP) neural network for the measurement stage. Results show that the interferogram signatures of the above compounds are sufficiently strong in that spectral range. Measurements of three different concentrations were possible with a mean squared error (MSE) of the order of 10^(-6). PMID:17947035

  12. Intelligent detection of impulse noise using multilayer neural network with multi-valued neurons

    NASA Astrophysics Data System (ADS)

    Aizenberg, Igor; Wallace, Glen

    2012-03-01

    In this paper, we solve the impulse noise detection problem using an intelligent approach. We use a multilayer neural network based on multi-valued neurons (MLMVN) as an intelligent impulse noise detector. MLMVN was already used for point spread function identification and intelligent edge enhancement. So it is very attractive to apply it for solving another image processing problem. The main result, which is presented in the paper, is the proven ability of MLMVN to detect impulse noise on different images after a learning session with the data taken just from a single noisy image. Hence MLMVN can be used as a robust impulse detector. It is especially efficient for salt and pepper noise detection and outperforms all competitive techniques. It also shows comparable results in detection of random impulse noise. Moreover, for random impulse noise detection, MLMVN with the output neuron with a periodic activation function is used for the first time.

  13. A selective learning method to improve the generalization of multilayer feedforward neural networks.

    PubMed

    Galván, I M; Isasi, P; Aler, R; Valls, J M

    2001-04-01

    Multilayer feedforward neural networks with backpropagation algorithm have been used successfully in many applications. However, the level of generalization is heavily dependent on the quality of the training data. That is, some of the training patterns can be redundant or irrelevant. It has been shown that with careful dynamic selection of training patterns, better generalization performance may be obtained. Nevertheless, generalization is carried out independently of the novel patterns to be approximated. In this paper, we present a learning method that automatically selects the training patterns more appropriate to the new sample to be predicted. This training method follows a lazy learning strategy, in the sense that it builds approximations centered around the novel sample. The proposed method has been applied to three different domains: two artificial approximation problems and a real time series prediction problem. Results have been compared to standard backpropagation using the complete training data set and the new method shows better generalization abilities. PMID:14632169
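
    The sketch below illustrates the lazy-learning idea with a generic stand-in for the authors' selection rule: for each novel input, only the k nearest training patterns are kept and a local model is fitted to them. The data, the value of k, and the local linear model are assumptions.

      # Illustrative sketch (generic lazy learning, not the authors' exact method):
      # build an approximation centred around each novel sample from its neighbours.
      import numpy as np

      rng = np.random.default_rng(7)
      X = rng.uniform(-3, 3, (500, 1))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)      # training data

      def lazy_predict(x_new, k=30):
          d = np.abs(X[:, 0] - x_new)
          idx = np.argsort(d)[:k]                           # patterns nearest the query
          A = np.column_stack([np.ones(k), X[idx, 0]])      # local linear model
          coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
          return coef[0] + coef[1] * x_new

      for q in (-2.0, 0.0, 1.5):
          print(q, "->", round(lazy_predict(q), 3), "(true", round(np.sin(q), 3), ")")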

  14. A design philosophy for multi-layer neural networks with applications to robot control

    NASA Technical Reports Server (NTRS)

    Vadiee, Nader; Jamshidi, MO

    1989-01-01

    A system is proposed which receives input information from many sensors that may have diverse scaling, dimension, and data representations. The proposed system tolerates sensory information with faults. The proposed self-adaptive processing technique has great promise in integrating the techniques of artificial intelligence and neural networks in an attempt to build a more intelligent computing environment. The proposed architecture can provide a detailed decision tree based on the input information, information stored in a long-term memory, and the adapted rule-based knowledge. A mathematical model for analysis will be obtained to validate the cited hypotheses. An extensive software program will be developed to simulate a typical example of pattern recognition problem. It is shown that the proposed model displays attention, expectation, spatio-temporal, and predictory behavior which are specific to the human brain. The anticipated results of this research project are: (1) creation of a new dynamic neural network structure, and (2) applications to and comparison with conventional multi-layer neural network structures. The anticipated benefits from this research are vast. The model can be used in a neuro-computer architecture as a building block which can perform complicated, nonlinear, time-varying mapping from a multitude of input excitory classes to an output or decision environment. It can be used for coordinating different sensory inputs and past experience of a dynamic system and actuating signals. The commercial applications of this project can be the creation of a special-purpose neuro-computer hardware which can be used in spatio-temporal pattern recognitions in such areas as air defense systems, e.g., target tracking, and recognition. Potential robotics-related applications are trajectory planning, inverse dynamics computations, hierarchical control, task-oriented control, and collision avoidance.

  15. A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks.

    PubMed

    Cavalieri, Salvatore; Mirabella, Orazio

    1999-01-01

    The paper deals with the problem of fault tolerance in a multilayer perceptron network. Although such a network already possesses a reasonable fault tolerance capability, it may be insufficient in particularly critical applications. Studies carried out by the authors have shown that the traditional backpropagation learning algorithm may entail the presence of a certain number of weights with a much higher absolute value than the others. Further studies have shown that faults in these weights are the main cause of deterioration in the performance of the neural network. In other words, the main cause of incorrect network functioning on the occurrence of a fault is the non-uniform distribution of the absolute values of weights in each layer. The paper proposes a learning algorithm which updates the weights, distributing their absolute values as uniformly as possible in each layer. Tests performed on benchmark test sets have shown the considerable increase in fault tolerance obtainable with the proposed approach as compared with the traditional backpropagation algorithm and with some of the most efficient fault tolerance approaches to be found in the literature. PMID:12662719
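
    One possible way to encode this idea in a learning rule is sketched below: a penalty on the spread of absolute weight values within a layer, whose gradient would be added to the ordinary backpropagation gradient. This is my illustration of the principle only, not the authors' algorithm.

      # Illustrative sketch (an invented stand-in for the idea, not the paper's rule):
      # penalize the spread of |w| in a layer so no single weight is fault-critical.
      import numpy as np

      def uniformity_penalty_grad(W, lam=0.01):
          """Gradient of lam * sum((|w| - mean|w|)^2) over a layer, w.r.t. W."""
          absW = np.abs(W)
          return 2.0 * lam * (absW - absW.mean()) * np.sign(W)

      rng = np.random.default_rng(8)
      W = rng.normal(0, 1, (16, 8))
      W[0, 0] = 6.0                                  # one dominant, fault-critical weight
      ratio_before = np.abs(W).max() / np.abs(W).mean()
      for _ in range(500):
          W -= 0.5 * uniformity_penalty_grad(W)      # backprop gradient would be added here
      print("max|w|/mean|w| before:", round(ratio_before, 2),
            "after:", round(np.abs(W).max() / np.abs(W).mean(), 2))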

  16. Portraying emotions at their unfolding: a multilayered approach for probing dynamics of neural networks.

    PubMed

    Raz, Gal; Winetraub, Yonatan; Jacob, Yael; Kinreich, Sivan; Maron-Katz, Adi; Shaham, Galit; Podlipsky, Ilana; Gilam, Gadi; Soreq, Eyal; Hendler, Talma

    2012-04-01

    Dynamic functional integration of distinct neural systems plays a pivotal role in emotional experience. We introduce a novel approach for studying emotion-related changes in the interactions within and between networks using fMRI. It is based on continuous computation of a network cohesion index (NCI), which is sensitive to both the strength and the variability of signal correlations between pre-defined regions. The regions encompass three clusters (namely limbic, medial prefrontal cortex (mPFC) and cognitive), each of which was previously shown to be involved in emotional processing. Two sadness-inducing film excerpts were viewed passively, and comparisons between viewers' rated sadness, a parasympathetic index, and inter- and intra-NCI were obtained. Limbic intra-NCI was associated with reported sadness in both movies. However, the correlation between the parasympathetic index, the rated sadness and the limbic NCI occurred in only one movie, possibly related to a "deactivated" pattern of sadness. In this film, rated sadness intensity also correlated with the mPFC intra-NCI, possibly reflecting temporal correspondence between sadness and sympathy. Further, only for this movie, we found an association between the sadness rating and the mPFC-limbic inter-NCI time courses. By contrast, in the other film, in which sadness was reported to commingle with horror and anger, dramatic events coincided with disintegration of these networks. Together, this may point to a difference between the cinematic experiences with regard to inter-network dynamics related to emotional regulation. These findings demonstrate the advantage of a multi-layered dynamic analysis for elucidating the uniqueness of emotional experiences with regard to an unguided processing of continuous and complex stimulation. PMID:22285693

  17. Planes coordinates transformation between PSAD56 to SIRGAS using a Multilayer Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Tierra, Alfonso; Romero, Ricardo

    2014-12-01

    Prior to the development of satellite technologies, the geodetic networks of a country were realized from a topocentric datum, and the respective cartography was produced accordingly. With the availability of Global Navigation Satellite Systems (GNSS), cartography needs to be updated and referenced to a geocentric datum to be compatible with this technology. Cartography in Ecuador has been produced using the PSAD56 (Provisional South American Datum 1956) system; nevertheless, it is necessary to have it in the SIRGAS (Sistema de Referencia Geocéntrico para las Américas) system. The transformation between PSAD56 and SIRGAS uses seven transformation parameters calculated with the Helmert method. In the case of Ecuador, these parameters are compatible with scales of 1:25 000 or smaller, which does not satisfy the requirements of applications at larger scales. In this study, the technique of neural networks is demonstrated as an alternative for improving the transformation of UTM plane coordinates E, N (East, North) from PSAD56 to SIRGAS. From the E, N coordinates of the two systems, four transformation parameters (two translations, one rotation, and one scale difference) were calculated using a bidimensional transformation. Additionally, the same coordinates were used to train a Multilayer Artificial Neural Network (MANN), in which the inputs are the E, N coordinates in PSAD56 and the outputs are the E, N coordinates in SIRGAS. Control points were used to determine the differences between the two methods. The results imply that the coordinate transformation obtained with the trained multilayer artificial neural network improves on the results of the bidimensional transformation and is compatible with scales of 1:5000.
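
    For reference, the four-parameter bidimensional (similarity) transformation that the MANN is compared against can be estimated by least squares as sketched below; the common-point coordinates used in the sketch are hypothetical.

      # Illustrative sketch (generic 2-D similarity transformation, hypothetical
      # coordinates): estimate a, b, c, d from common points by least squares.
      import numpy as np

      # Hypothetical common points: columns are E, N in PSAD56 and in SIRGAS.
      psad56 = np.array([[500100.0, 9800200.0],
                         [501250.0, 9801100.0],
                         [502300.0, 9799800.0],
                         [499800.0, 9800900.0]])
      sirgas = psad56 + np.array([-250.4, 370.2]) + 0.00001 * psad56[:, ::-1] * [1, -1]

      # Model: E' = a*E - b*N + c,  N' = b*E + a*N + d   (a = s*cos(t), b = s*sin(t))
      n = len(psad56)
      A = np.zeros((2 * n, 4)); L = np.zeros(2 * n)
      A[0::2] = np.column_stack([psad56[:, 0], -psad56[:, 1], np.ones(n), np.zeros(n)])
      A[1::2] = np.column_stack([psad56[:, 1],  psad56[:, 0], np.zeros(n), np.ones(n)])
      L[0::2], L[1::2] = sirgas[:, 0], sirgas[:, 1]

      (a, b, c, d), *_ = np.linalg.lstsq(A, L, rcond=None)
      scale, theta = np.hypot(a, b), np.arctan2(b, a)
      print("scale:", scale, "rotation (rad):", theta, "translation:", (c, d))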

  18. The No-Prop algorithm: a new learning algorithm for multilayer neural networks.

    PubMed

    Widrow, Bernard; Greenblatt, Aaron; Kim, Youngsik; Park, Dookun

    2013-01-01

    A new learning algorithm for multilayer neural networks that we have named No-Propagation (No-Prop) is hereby introduced. With this algorithm, the weights of the hidden-layer neurons are set and fixed with random values. Only the weights of the output-layer neurons are trained, using steepest descent to minimize mean square error, with the LMS algorithm of Widrow and Hoff. The purpose of introducing nonlinearity with the hidden layers is examined from the point of view of Least Mean Square Error Capacity (LMS Capacity), which is defined as the maximum number of distinct patterns that can be trained into the network with zero error. This is shown to be equal to the number of weights of each of the output-layer neurons. The No-Prop algorithm and the Back-Prop algorithm are compared. Our experience with No-Prop is limited, but from the several examples presented here, it seems that the performance regarding training and generalization of both algorithms is essentially the same when the number of training patterns is less than or equal to LMS Capacity. When the number of training patterns exceeds Capacity, Back-Prop is generally the better performer. But equivalent performance can be obtained with No-Prop by increasing the network Capacity by increasing the number of neurons in the hidden layer that drives the output layer. The No-Prop algorithm is much simpler and easier to implement than Back-Prop. Also, it converges much faster. It is too early to definitively say where to use one or the other of these algorithms. This is still a work in progress. PMID:23140797
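
    A minimal reading of No-Prop is sketched below: the hidden-layer weights are drawn at random and never trained, and only the output weights are adapted with the LMS (Widrow-Hoff) rule. The task, network size, and step size are assumptions chosen for illustration.

      # Illustrative sketch (a minimal reading of the idea, not the authors' code):
      # fixed random hidden layer, output layer trained with the LMS rule.
      import numpy as np

      rng = np.random.default_rng(9)
      X = rng.uniform(-1, 1, (400, 2))
      targets = np.sign(X[:, 0] * X[:, 1])                   # a nonlinearly separable task

      n_hidden = 50
      W_hidden = rng.normal(size=(2, n_hidden))              # set once, never trained
      b_hidden = rng.normal(size=n_hidden)
      H = np.tanh(X @ W_hidden + b_hidden)                   # fixed nonlinear expansion

      w_out = np.zeros(n_hidden)                             # only these weights learn
      mu = 0.01
      for epoch in range(200):
          for h, t in zip(H, targets):                       # LMS / Widrow-Hoff updates
              err = t - h @ w_out
              w_out += mu * err * h

      pred = np.sign(H @ w_out)
      print("training accuracy:", (pred == targets).mean())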

  19. Memristor-based multilayer neural networks with online gradient descent training.

    PubMed

    Soudry, Daniel; Di Castro, Dotan; Gal, Asaf; Kolodny, Avinoam; Kvatinsky, Shahar

    2015-10-01

    Learning in multilayer neural networks (MNNs) relies on continuous updating of large matrices of synaptic weights by local rules. Such locality can be exploited for massive parallelism when implementing MNNs in hardware. However, these update rules require a multiply and accumulate operation for each synaptic weight, which is challenging to implement compactly using CMOS. In this paper, a method for performing these update operations simultaneously (incremental outer products) using memristor-based arrays is proposed. The method is based on the fact that, approximately, given a voltage pulse, the conductivity of a memristor will increment proportionally to the pulse duration multiplied by the pulse magnitude if the increment is sufficiently small. The proposed method uses a synaptic circuit composed of a small number of components per synapse: one memristor and two CMOS transistors. This circuit is expected to consume between 2% and 8% of the area and static power of previous CMOS-only hardware alternatives. Such a circuit can compactly implement hardware MNNs trainable by scalable algorithms based on online gradient descent (e.g., backpropagation). The utility and robustness of the proposed memristor-based circuit are demonstrated on standard supervised learning tasks. PMID:25594981
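
    The incremental outer-product update itself is easy to state in code. The sketch below uses an idealized device law in which each conductance change equals pulse duration times pulse magnitude, so encoding the layer input in the durations and the backpropagated error in the magnitudes updates the whole array in one step; all numbers are assumed, not taken from the paper.

      # Illustrative sketch (idealized device law, assumed parameters): the
      # incremental outer-product weight update on a conductance array.
      import numpy as np

      rng = np.random.default_rng(10)
      n_in, n_out = 4, 3
      G = rng.uniform(0.4, 0.6, (n_in, n_out))     # memristor conductances ~ weights

      x = rng.uniform(0, 1, n_in)                  # layer input
      delta = rng.uniform(-0.2, 0.2, n_out)        # backpropagated error at this layer
      eta = 0.1

      durations = eta * x                          # pulse widths encode the inputs
      magnitudes = delta                           # pulse amplitudes encode the errors

      # Small-increment device law: dG ~ duration * magnitude (applied in parallel).
      dG = np.outer(durations, magnitudes)
      G_new = G + dG

      print(np.allclose(G_new - G, eta * np.outer(x, delta)))   # equals eta * outer(x, delta)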

  20. Prediction for energy content of Taiwan municipal solid waste using multilayer perceptron neural networks.

    PubMed

    Shu, Hung-Yee; Lu, Hsin-Chung; Fan, Huan-Jung; Chang, Ming-Chin; Chen, Jyh-Cherng

    2006-06-01

    In the past decade, the amount of municipal solid waste (MSW) treated by incineration has increased significantly in Taiwan. By 2008, approximately 70% of the total MSW generated will be incinerated. The energy content (usually expressed as the lower heating value [LHV]) of MSW is an important parameter for the selection of incinerator capacity. In this work, wastes from 55 sampling sites, including villages, towns, cities, and remote islands in the Taiwan area, were sampled and analyzed once a season from April 2002 to March 2003 to determine the waste characteristics. The LHV of MSW in Taiwan was predicted by the multilayer perceptron (MLP) neural network model using input parameters from elemental analysis and dry- or wet-base physical compositions. Although all three models predicted LHV values rather accurately, the elemental analysis model provided the most accurate prediction of LHV values. Additionally, the wet-base physical composition model was the easiest and most economical. Therefore, waste treatment operators can choose the most appropriate analysis method for their own circumstances, such as time, equipment, technology, and cost. PMID:16805410

  1. Multilayer perceptron neural network for downscaling rainfall in arid region: A case study of Baluchistan, Pakistan

    NASA Astrophysics Data System (ADS)

    Ahmed, Kamal; Shahid, Shamsuddin; Haroon, Sobri Bin; Xiao-jun, Wang

    2015-08-01

    Downscaling rainfall in an arid region is much more challenging than in a wet region due to the erratic and infrequent behaviour of rainfall in arid regions. The complexity is further aggravated by the scarcity of data in such regions. A multilayer perceptron (MLP) neural network has been proposed in the present study for the downscaling of rainfall in the data-scarce arid region of the Baluchistan province of Pakistan, which is considered one of the areas of Pakistan most vulnerable to climate change. The National Center for Environmental Prediction (NCEP) reanalysis datasets from 20 grid points surrounding the study area were used to select the predictors using principal component analysis. Monthly rainfall data for the time periods 1961-1990 and 1991-2001 were used for the calibration and validation of the MLP model, respectively. The performance of the model was assessed using various statistics including mean, variance, quartiles, root mean square error (RMSE), mean bias error (MBE), coefficient of determination (R²) and Nash-Sutcliffe efficiency (NSE). Comparisons of mean monthly time series of observed and downscaled rainfall showed good agreement during both calibration and validation periods, while the downscaling model was found to underpredict rainfall variance in both periods. Other statistical parameters also revealed good agreement between observed and downscaled rainfall during both calibration and validation periods at most of the stations.
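
    For completeness, the two skill scores quoted above (RMSE and Nash-Sutcliffe efficiency) are defined as in the short Python sketch below, applied here to hypothetical observed and downscaled monthly series rather than the study's data.

      # Illustrative sketch (standard metric definitions, hypothetical arrays).
      import numpy as np

      def rmse(obs, sim):
          return np.sqrt(np.mean((obs - sim) ** 2))

      def nse(obs, sim):
          """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the mean."""
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      rng = np.random.default_rng(11)
      observed = rng.gamma(shape=1.2, scale=20.0, size=120)        # 10 years of monthly rain
      downscaled = 0.9 * observed + rng.normal(0, 5.0, size=120)   # a stand-in MLP output

      print("RMSE:", round(rmse(observed, downscaled), 2),
            "NSE:", round(nse(observed, downscaled), 3))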

  2. Multi-layer holographic bifurcative neural network system for real-time adaptive EOS data analysis

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Huang, K.; Diep, J.

    1992-01-01

    Optical data processing techniques have the inherent advantages of high data throughput, low weight and low power requirements. These features are particularly desirable for onboard spacecraft in-situ real-time data analysis and data compression applications. The proposed multi-layer optical holographic neural net pattern recognition technique will utilize nonlinear photorefractive devices for real-time adaptive learning to classify input data content and recognize unexpected features. Information can be stored either in analog or digital form in a nonlinear photorefractive device. The recording can be accomplished in time scales ranging from milliseconds to microseconds. When a system consisting of these devices is organized in a multi-layer structure, a feedforward neural net with bifurcating data classification capability is formed. The interdisciplinary research will involve collaboration with top digital computer architecture experts at the University of Southern California.

  3. Multi-layer holographic bifurcative neural network system for real-time adaptive EOS data analysis

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Huang, K. S.; Diep, J.

    1993-01-01

    Optical data processing techniques have the inherent advantages of high data throughput, low weight and low power requirements. These features are particularly desirable for onboard spacecraft in-situ real-time data analysis and data compression applications. The proposed multi-layer optical holographic neural net pattern recognition technique will utilize nonlinear photorefractive devices for real-time adaptive learning to classify input data content and recognize unexpected features. Information can be stored either in analog or digital form in a nonlinear photorefractive device. The recording can be accomplished in time scales ranging from milliseconds to microseconds. When a system consisting of these devices is organized in a multi-layer structure, a feedforward neural net with bifurcating data classification capability is formed. The interdisciplinary research will involve collaboration with top digital computer architecture experts at the University of Southern California.

  4. Adaptive Weibull Multiplicative Model and Multilayer Perceptron Neural Networks for Dark-Spot Detection from SAR Imagery

    PubMed Central

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach from the combination of adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with the results of a model combining non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the non-adaptive WMM filter setting parameters by developing an adaptive WMM model which is a step ahead towards a full automatic dark spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376

  5. Adaptive Weibull Multiplicative Model and Multilayer Perceptron neural networks for dark-spot detection from SAR imagery.

    PubMed

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach from the combination of adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with the results of a model combining non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the non-adaptive WMM filter setting parameters by developing an adaptive WMM model which is a step ahead towards a full automatic dark spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376

  6. Multilayer cellular neural network and fuzzy C-mean classifiers: comparison and performance analysis

    NASA Astrophysics Data System (ADS)

    Trujillo San-Martin, Maite; Hlebarov, Vejen; Sadki, Mustapha

    2004-11-01

    Neural networks and fuzzy systems are two of the most important artificial intelligence techniques providing classification capabilities, obtained through different learning schemes that capture knowledge and process it according to particular rule-based algorithms. These methods are especially suited to exploit the tolerance for uncertainty and vagueness in cognitive reasoning. By applying these methods with relevant knowledge-based rules extracted using different data analysis tools, it is possible to obtain robust classification performance for a wide range of applications. This paper focuses on non-destructive testing quality control systems, in particular the classification of metallic structures according to corrosion time using a novel cellular neural network architecture, which is explained in detail. Additionally, we compare these results with those obtained using the fuzzy C-means clustering algorithm and analyse both classifiers according to their classification capabilities.

  7. Control of Multilayer Networks

    PubMed Central

    Menichetti, Giulia; Dall’Asta, Luca; Bianconi, Ginestra

    2016-01-01

    The controllability of a network is a theoretical problem of relevance in a variety of contexts ranging from financial markets to the brain. Until now, network controllability has been characterized only on isolated networks, while the vast majority of complex systems are formed by multilayer networks. Here we build a theoretical framework for the linear controllability of multilayer networks by mapping the problem into a combinatorial matching problem. We find that correlating the external signals in the different layers can significantly reduce the multiplex network robustness to node removal, as can be seen in conjunction with a hybrid phase transition occurring in interacting Poisson networks. Moreover, we observe that multilayer networks can stabilize the fully controllable multiplex network configuration, which can be stable even when full controllability of the single network is not. PMID:26869210
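    The mapping from linear controllability to a matching problem can be illustrated on a single directed layer: by the standard structural-controllability result, the minimum number of driver nodes equals N minus the size of a maximum matching of the bipartite graph built from the directed links. The sketch below computes this for a made-up toy network; the multilayer bookkeeping developed in the paper is not reproduced.

```python
# Toy illustration (single layer only): minimum driver nodes via maximum
# bipartite matching, following the standard structural-controllability
# mapping. The example edge list is made up.
def max_bipartite_matching(n, edges):
    """Simple augmenting-path maximum matching on the bipartite graph whose
    left copies (tails) connect to right copies (heads) of the directed edges."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    match_right = {}          # right node -> matched left node

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(n))

# Hypothetical directed network: 0 -> 1 -> 2, 0 -> 3, node 4 isolated.
n = 5
edges = [(0, 1), (1, 2), (0, 3)]
matching = max_bipartite_matching(n, edges)
print("minimum number of driver nodes:", max(n - matching, 1))
```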

  8. Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography

    NASA Astrophysics Data System (ADS)

    Ghaffari Razin, Mir Reza; Voosoghi, Behzad

    2016-08-01

    Tomography is a very cost-effective method to study physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For the numerical experiments, observations collected at 37 GPS stations of the Iranian permanent GPS network (IPGN) are used. A smoothed TEC approach was used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the international reference ionosphere 2012 (IRI-2012) are used as the objective function in training the neural network. Ionosonde observations are used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. A root mean square error (RMSE) of 0.17 × 10¹¹ electrons/m³ is computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computational speed than other ionosphere reconstruction methods.

  9. Multilayer neural networks for solving a class of partial differential equations.

    PubMed

    He, S; Reif, K; Unbehauen, R

    2000-04-01

    In this paper, training the derivative of a feedforward neural network with the extended backpropagation algorithm is presented. The method is used to solve a class of first-order partial differential equations for input-to-state linearizable or approximately linearizable systems. The solution of the differential equation, together with the Lie derivatives, yields a change of coordinates. A feedback control law is then designed to keep the system in a desired behavior. Simulations demonstrate the advantages of the proposed method, which include easily and quickly finding approximate solutions to complicated first-order partial differential equations. Therefore, the work presented here can benefit the design of the class of nonlinear control systems where the nontrivial solutions of the partial differential equations are difficult to find. PMID:10937971

  10. Neural Networks

    SciTech Connect

    Smith, Patrick I.

    2003-09-23

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Ideally, there are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. It is also important to note that "artificial" in that definition refers to computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  11. Propagation of firing rate by synchronization and coherence of firing pattern in a feed-forward multilayer neural network

    NASA Astrophysics Data System (ADS)

    Yi, Ming; Yang, Lijian

    2010-06-01

    When neurons in layer 1 fire irregularly under stochastic noise, it is found that synchronous firing can develop gradually in later layers within a feed-forward multilayer neural network, which is consistent with experimental findings. The underlying mechanism of firing-rate propagation is explored, and the rate encoding realized by synchronization is clarified. Furthermore, the effects of the connection probability between nearest layers, stochastic noise, and the ratio of inhibitory connections to total connections on (i) propagation of firing rate by synchronization and (ii) coherence of the firing pattern are investigated, respectively. It is observed that (i) there is a threshold for the connection probability, beyond which the firing rate of each layer can propagate successfully through the whole network by synchronization. The dependence of firing rate on layer index is very different for different connection probabilities. In addition, the larger the connection probability, the more rapidly synchrony is built up. (ii) Increasing the intensity of stochastic noise enhances the firing rate in the output layer. Stochastic noise plays a constructive role in improving synchrony by establishing synchronization more quickly. (iii) Inhibitory connections offset excitatory input and therefore reduce firing rate and synchrony. As the layer index increases, the coherence measure goes through a peak, i.e., the coherence of the firing pattern is worst at a certain layer. With an increasing ratio of inhibitory connections, the variability of the firing train is enhanced, exhibiting the destructive role of inhibitory connections on the coherence of the firing pattern.

  12. Aitken-based acceleration methods for assessing convergence of multilayer neural networks.

    PubMed

    Pilla, R S; Kamarthi, S V; Lindsay, B G

    2001-01-01

    This paper first develops the ideas of the Aitken Δ² method to accelerate the rate of convergence of an error sequence (the value of the objective function at each step) obtained by training a neural network with a sigmoidal activation function via the backpropagation algorithm. The Aitken method is exact when the error sequence is exactly geometric. However, theoretical and empirical evidence suggests that the best possible rate of convergence obtainable for such an error sequence is log-geometric. This paper develops a new invariant extended-Aitken acceleration method for accelerating log-geometric sequences. The resulting accelerated sequence enables one to predict the final value of the error function. These predictions can in turn be used to assess the distance between the current and final solutions and thereby provide a stopping criterion for a desired accuracy. Each of the techniques described is applicable to a wide range of problems. The invariant extended-Aitken acceleration approach shows improved acceleration as well as outstanding prediction of the final error in the practical problems considered. PMID:18249928
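    For reference, the classical transform the paper builds on maps an error sequence e_k to ê_k = e_k − (e_{k+1} − e_k)² / (e_{k+2} − 2e_{k+1} + e_k), which is exact when the error decays geometrically toward its limit. A minimal sketch of this building block (the invariant extended-Aitken method for log-geometric sequences is not reproduced here):

```python
# Aitken delta-squared acceleration of a (nearly) geometric error sequence.
# The sequence below is synthetic; in the paper's setting e_k would be the
# training-error values recorded during backpropagation.
def aitken(seq):
    accelerated = []
    for e0, e1, e2 in zip(seq, seq[1:], seq[2:]):
        denom = e2 - 2 * e1 + e0
        if abs(denom) < 1e-15:        # sequence already (numerically) converged
            accelerated.append(e2)
        else:
            accelerated.append(e0 - (e1 - e0) ** 2 / denom)
    return accelerated

# Example: e_k = L + c * r**k converges geometrically to L = 0.05.
errors = [0.05 + 0.4 * 0.8 ** k for k in range(10)]
print(aitken(errors)[:3])   # each entry is already ~0.05, the predicted final error
```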

  13. Multilayered perceptron neural networks to compute energy losses in magnetic cores

    NASA Astrophysics Data System (ADS)

    Kucuk, Ilker

    2006-12-01

    This paper presents a new approach based on multilayered perceptrons (MLPs) to compute the specific energy losses of toroidal wound cores built from 3% SiFe 0.27 mm thick M4, 0.1 and 0.08 mm thin gauge electrical steel strips. The MLP has been trained by a back-propagation and extended delta-bar-delta learning algorithm. The results obtained by using the MLP model were compared with a commonly used conventional method. The comparison has shown that the proposed model improved loss estimation with respect to the conventional method.

  14. Support vector machine based training of multilayer feedforward neural networks as optimized by particle swarm algorithm: application in QSAR studies of bioactivity of organic compounds.

    PubMed

    Lin, Wei-Qi; Jiang, Jian-Hui; Zhou, Yan-Ping; Wu, Hai-Long; Shen, Guo-Li; Yu, Ru-Qin

    2007-01-30

    Multilayer feedforward neural networks (MLFNNs) are important modeling techniques widely used in QSAR studies for their ability to represent nonlinear relationships between descriptors and activity. However, the problems of overfitting and premature convergence to local optima still pose great challenges in the practice of MLFNNs. To circumvent these problems, a support vector machine (SVM) based training algorithm for MLFNNs has been developed with the incorporation of particle swarm optimization (PSO). The introduction of the SVM based training mechanism imparts the developed algorithm with an inherent capacity for combating the overfitting problem. Moreover, with the implementation of PSO for searching the optimal network weights, the SVM based learning algorithm shows relatively high efficiency in converging to the optima. The proposed algorithm has been evaluated using the Hansch data set. Application to QSAR studies of the activity of COX-2 inhibitors is also demonstrated. The results reveal that this technique provides performance superior to backpropagation (BP)- and PSO-trained neural networks. PMID:17186488
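    The particle-swarm portion of such a scheme can be sketched on its own: below, a standard global-best PSO searches the weight space of a tiny one-hidden-layer network to minimize mean squared error on a toy regression task. The swarm size, inertia and acceleration constants, and the objective are illustrative choices, not the SVM-based criterion of the paper.

```python
# Sketch of PSO used to train the weights of a small feedforward network
# (toy objective; not the SVM-based criterion of the paper).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (100, 3))                 # 100 samples, 3 "descriptors"
y = np.sin(X.sum(axis=1))                        # toy "activity" to fit

n_hidden = 5
dim = 3 * n_hidden + n_hidden + n_hidden + 1     # W1, b1, W2, b2 flattened

def mse(w):
    W1 = w[:15].reshape(3, n_hidden)
    b1 = w[15:20]
    W2 = w[20:25]
    b2 = w[25]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

# Standard global-best PSO over the flattened weight vector.
n_particles, iters = 30, 300
pos = rng.normal(0, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w_inertia, c1, c2 = 0.7, 1.5, 1.5
for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best MSE found by the swarm:", pbest_val.min())
```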

  15. Meteorological Factors Related to Emergency Admission of Elderly Stroke Patients in Shanghai: Analysis with a Multilayer Perceptron Neural Network

    PubMed Central

    Meng, Guilin; Tan, Yan; Fang, Min; Yang, Hongyan; Liu, Xueyuan; Zhao, Yanxin

    2015-01-01

    Background The aim of this study was to predict the emergency admission of elderly stroke patients in Shanghai by using a multilayer perceptron (MLP) neural network. Material/Methods Patients (>60 years) with first-ever stroke registered in the Emergency Center of Neurology Department, Shanghai Tenth People’s Hospital, from January 2012 to June 2014 were enrolled into the present study. Daily climate records were obtained from the National Meteorological Office. MLP was used to model the daily emergency admission into the neurology department with meteorological factors such as wind level, weather type, daily maximum temperature, lowest temperature, average temperature, and absolute temperature difference. The relationships of meteorological factors with the emergency admission due to stroke were analyzed in an MLP model. Results In 886 days, 2180 first-onset elderly stroke patients were enrolled, and the average number of stroke patients was 2.46 per day. MLP was used to establish a model for the prediction of dates with low stroke admission (≤4) and those with high stroke admission (≥5). For the days with low stroke admission, the absolute temperature difference accounted for 40.7% of admissions, while for the days with high stroke admission, the weather types accounted for 73.3%. Conclusions Outdoor temperature and related meteorological parameters are associated with stroke attack. The absolute temperature difference and the weather types have adverse effects on stroke. Further study is needed to determine if other meteorological factors such as pollutants also play important roles in stroke attack. PMID:26590182

  16. Artificial neural network analysis of RBS data with roughness: Application to Ti 0.4Al 0.6N/Mo multilayers

    NASA Astrophysics Data System (ADS)

    Öhl, G.; Matias, V.; Vieira, A.; Barradas, N. P.

    2003-10-01

    In multilayered Ti 0.4Al 0.6N/Mo coatings, a strengthening effect can be obtained by using alternate layers of materials with high and low elastic constants. This behaviour requires a multilayer periodicity below a certain value in order to reduce dislocation motion across layer interface. Below this critical period, in most cases the hardness decreases as the period decreases. The multiple interfaces have an important role on this behaviour, working as stress relaxation areas and preventing crack propagation, influencing the mechanical properties of the system. Understanding the origin of these effects requires knowledge of the interface structure, where the interfacial roughness is of prime importance. We used Rutherford backscattering to study roughness in a quantitative way, and developed an artificial neural network algorithm dedicated to the analysis of the data. The results compare very well with previous TEM and AFM data.

  17. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the characteristics…

  18. Object reconstruction in multilayer neural network based profilometry using grating structure comprising two regions with different spatial periods

    NASA Astrophysics Data System (ADS)

    Ganotra, Dinesh; Joseph, Joby; Singh, Kehar

    2004-08-01

    A feed-forward backpropagation neural network has been used in fringe projection profilometry for reconstruction of a three-dimensional (3D) object. A grating structure comprising two regions of different spatial periods is projected on the reference surface over which the object is placed. The shorter spatial period part of the grating is projected over the object, whereas the longer spatial period part is projected on the reference surface only. The 3D object shape is reconstructed with the help of neural networks using images of the projected grating. During the training phase of the network, the shorter spatial period grating along with the longer spatial period grating is used. Experimental results are presented for a diffuse object, showing that the 3D shape of the object is recovered using the above-mentioned method. In contrast, phase wrapping takes place in Fourier transform profilometry when only one grating of shorter spatial period is used.

  19. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  20. Nested Neural Networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1992-01-01

    Report presents analysis of nested neural networks, consisting of interconnected subnetworks. Analysis based on simplified mathematical models more appropriate for artificial electronic neural networks, partly applicable to biological neural networks. Nested structure allows for retrieval of individual subpatterns. Requires fewer wires and connection devices than fully connected networks, and allows for local reconstruction of damaged subnetworks without rewiring entire network.

  1. Optimization of metformin HCl 500 mg sustained release matrix tablets using Artificial Neural Network (ANN) based on Multilayer Perceptrons (MLP) model.

    PubMed

    Mandal, Uttam; Gowda, Veeran; Ghosh, Animesh; Bose, Anirbandeep; Bhaumik, Uttam; Chatterjee, Bappaditya; Pal, Tapan Kumar

    2008-02-01

    The aim of the present study was to apply a simultaneous optimization method incorporating an Artificial Neural Network (ANN) using the Multi-layer Perceptron (MLP) model to the development of metformin HCl 500 mg sustained release matrix tablets with an optimized in vitro release profile. The amounts of HPMC K15M and PVP K30, each at three levels (-1, 0, +1), were selected as causal factors. In vitro dissolution time profiles at four different sampling times (1 h, 2 h, 4 h and 8 h) were chosen as output variables. 13 kinds of metformin matrix tablets were prepared according to a 2³ factorial design (central composite) with five extra center points, and their dissolution tests were performed. Commercially available STATISTICA Neural Network software (StatSoft, Inc., Tulsa, OK, U.S.A.) was used throughout the study. The training process of the MLP was continued until a satisfactory root mean square (RMS) value for the test data was obtained using the feed-forward backpropagation method. The root mean square value for the trained network was 0.000097, which indicated that the optimal MLP model was reached. The optimal tablet formulation based on some predetermined release criteria predicted by MLP was 336 mg of HPMC K15M and 130 mg of PVP K30. The calculated difference (f1 2.19) and similarity (f2 89.79) factors indicated that there was no difference between predicted and experimentally observed drug release profiles for the optimal formulation. This work illustrates the potential of an artificial neural network with MLP to assist in the development of sustained release dosage forms. PMID:18239298

  2. Modeling of gamma ray energy-absorption buildup factors for thermoluminescent dosimetric materials using multilayer perceptron neural network: A comparative study

    NASA Astrophysics Data System (ADS)

    Kucuk, Nil; Manohara, S. R.; Hanagodimath, S. M.; Gerward, L.

    2013-05-01

    In this work, multilayered perceptron neural networks (MLPNNs) were presented for the computation of the gamma-ray energy absorption buildup factors (BA) of seven thermoluminescent dosimetric (TLD) materials [LiF, BeO, Na2B4O7, CaSO4, Li2B4O7, KMgF3, Ca3(PO4)2] in the energy region 0.015-15 MeV, and for penetration depths up to 10 mfp (mean free path). The MLPNNs have been trained by a Levenberg-Marquardt learning algorithm. The developed model is in 99% agreement with the ANSI/ANS-6.4.3 standard data set. Furthermore, the model is fast and does not require tremendous computational effort. The estimated BA data for the TLD materials are given as functions of penetration depth and incident photon energy and compared with the results of the interpolation method using the Geometrical Progression (G-P) fitting formula.

  3. Morphological neural networks

    SciTech Connect

    Ritter, G.X.; Sussner, P.

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer of neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
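    A minimal numerical illustration of the idea, with made-up values: the usual weighted sum is replaced by a maximum of sums, so the computation is nonlinear even before thresholding.

```python
# Morphological neuron sketch: the usual dot product sum_j(w_j * x_j) is
# replaced by max_j(w_j + x_j), making the computation nonlinear before
# any thresholding is applied. Values below are arbitrary.
import numpy as np

def classical_neuron(x, w, theta=0.0):
    return float(np.dot(w, x) >= theta)

def morphological_neuron(x, w, theta=0.0):
    return float(np.max(w + x) >= theta)   # max-of-sums instead of sum-of-products

x = np.array([0.2, -1.0, 0.7])
w = np.array([0.5, 0.3, -0.4])
print(classical_neuron(x, w), morphological_neuron(x, w))
```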

  4. Application of design of experiments and multilayer perceptrons neural network in the optimization of diclofenac sodium extended release tablets with Carbopol 71G.

    PubMed

    Ivić, Branka; Ibrić, Svetlana; Cvetković, Nebojsa; Petrović, Aleksandra; Trajković, Svetlana; Djurić, Zorica

    2010-07-01

    The purpose of the study was to screen the effects of formulation factors on the in vitro release profile of diclofenac sodium from matrix tablets using design of experiments (DOE). Formulations of diclofenac sodium tablets, with Carbopol 71G as the matrix substance, were optimized by an artificial neural network. According to a central composite design, 10 formulations of diclofenac sodium matrix tablets were prepared. The concentrations of Carbopol 71G and Kollidon K-25 were selected as network inputs. In vitro dissolution time profiles at 5 different sampling times were chosen as responses. The independent variables and the release parameters were processed by a multilayer perceptron neural network (MLP). Results of the drug release studies indicate that drug release rates vary between formulations, ranging from 1 h to more than 8 h to complete dissolution. For two tested formulations there was no difference between the experimental and MLP-predicted in vitro profiles. The MLP model was optimized. The root mean square value for the trained network was 0.07%, which indicated that the optimal MLP model was reached. The optimal tablet formulation predicted by MLP was 23% Carbopol 71G and 0.8% Kollidon K-25. The calculated difference factor (f1 7.37) and similarity factor (f2 70.79) indicate that there is no difference between the predicted and experimentally observed drug release profiles for the optimal formulation. The satisfactory prediction of drug release for the optimal formulation by the MLP has shown the applicability of this optimization method in modeling extended release tablet formulations. PMID:20606343

  5. Structural reducibility of multilayer networks

    NASA Astrophysics Data System (ADS)

    de Domenico, Manlio; Nicosia, Vincenzo; Arenas, Alexandre; Latora, Vito

    2015-04-01

    Many complex systems can be represented as networks consisting of distinct types of interactions, which can be categorized as links belonging to different layers. For example, a good description of the full protein-protein interactome requires, for some organisms, up to seven distinct network layers, accounting for different genetic and physical interactions, each containing thousands of protein-protein relationships. A fundamental open question is then how many layers are indeed necessary to accurately represent the structure of a multilayered complex system. Here we introduce a method based on quantum theory to reduce the number of layers to a minimum while maximizing the distinguishability between the multilayer network and the corresponding aggregated graph. We validate our approach on synthetic benchmarks and we show that the number of informative layers in some real multilayer networks of protein-genetic interactions, social, economical and transportation systems can be reduced by up to 75%.

  6. [Multi-layer perceptron neural network based algorithm for simultaneous retrieving temperature and emissivity from hyperspectral FTIR data].

    PubMed

    Cheng, Jie; Xiao, Qing; Li, Xiao-Wen; Liu, Qin-Huo; Du, Yong-Ming

    2008-04-01

    The present paper first points out a defect of typical temperature and emissivity separation algorithms when dealing with hyperspectral FTIR data: the conventional algorithms cannot reproduce correct emissivity values when the difference between the ground-leaving radiance and the object's blackbody radiation at its true temperature is on the same order as the instrument's random noise, and this phenomenon is very prone to occur near 714 and 1250 cm⁻¹ in field measurements. In order to address this defect, a three-layer perceptron neural network has been introduced into the simultaneous inversion of temperature and emissivity from hyperspectral FTIR data. Soil emissivity spectra from the ASTER spectral library were used to produce the training data, soil emissivity spectra from the MODIS spectral library were used to produce the test data, and the results of the network test show that the MLP is robust. Meanwhile, the ISSTES algorithm was used to retrieve the temperature and emissivity from the test data. By comparing the results of the MLP and ISSTES, we found that the MLP can overcome the disadvantage of typical temperature and emissivity separation, although the RMSE of the emissivity derived using the MLP is lower than that of ISSTES as a whole. Hence, the MLP can be regarded as a beneficial complement to typical temperature and emissivity separation. PMID:18619297

  7. Mathematical Formulation of Multilayer Networks

    NASA Astrophysics Data System (ADS)

    De Domenico, Manlio; Solé-Ribalta, Albert; Cozzo, Emanuele; Kivelä, Mikko; Moreno, Yamir; Porter, Mason A.; Gómez, Sergio; Arenas, Alex

    2013-10-01

    A network representation is useful for describing the structure of a large variety of complex systems. However, most real and engineered systems have multiple subsystems and layers of connectivity, and the data produced by such systems are very rich. Achieving a deep understanding of such systems necessitates generalizing “traditional” network theory, and the newfound deluge of data now makes it possible to test increasingly general frameworks for the study of networks. In particular, although adjacency matrices are useful to describe traditional single-layer networks, such a representation is insufficient for the analysis and description of multiplex and time-dependent networks. One must therefore develop a more general mathematical framework to cope with the challenges posed by multilayer complex systems. In this paper, we introduce a tensorial framework to study multilayer networks, and we discuss the generalization of several important network descriptors and dynamical processes—including degree centrality, clustering coefficients, eigenvector centrality, modularity, von Neumann entropy, and diffusion—for this framework. We examine the impact of different choices in constructing these generalizations, and we illustrate how to obtain known results for the special cases of single-layer and multiplex networks. Our tensorial approach will be helpful for tackling pressing problems in multilayer complex systems, such as inferring who is influencing whom (and by which media) in multichannel social networks and developing routing techniques for multimodal transportation systems.
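    As a small numerical example of this bookkeeping (not taken from the paper), a two-layer multiplex network on three nodes can be stored as a rank-4 adjacency tensor M[i, a, j, b]; summing over the layer indices recovers the aggregated adjacency matrix, and partial sums give layer-resolved degrees.

```python
# Toy rank-4 adjacency tensor for a 3-node, 2-layer multiplex network.
# M[i, a, j, b] = 1 if node i in layer a links to node j in layer b.
import numpy as np

N, L = 3, 2
M = np.zeros((N, L, N, L))

# Intra-layer edges (undirected): layer 0 has 0-1, layer 1 has 1-2.
for i, j, a in [(0, 1, 0), (1, 2, 1)]:
    M[i, a, j, a] = M[j, a, i, a] = 1

# Inter-layer couplings: each node connected to its own replica in the other layer.
for i in range(N):
    M[i, 0, i, 1] = M[i, 1, i, 0] = 1

aggregated = M.sum(axis=(1, 3))      # collapse both layer indices -> N x N matrix
layer_degrees = M.sum(axis=(2, 3))   # degree of node i in layer a (incl. couplings)

print("aggregated adjacency:\n", aggregated)
print("per-layer degrees:\n", layer_degrees)
```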

  8. A consensual neural network

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.

    1991-01-01

    A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. CNN is based on the statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are applied as if they were independent inputs. The independent inputs are classified using stage neural networks and outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote-sensing data and geographic data are given.

  9. Nonlinear PLS modeling using neural networks

    SciTech Connect

    Qin, S.J.; McAvoy, T.J.

    1994-12-31

    This paper discusses the embedding of neural networks into the framework of the PLS (partial least squares) modeling method, resulting in a neural net PLS modeling approach. By using the universal approximation property of neural networks, the PLS modeling method is generalized to a nonlinear framework. The resulting model uses neural networks to capture the nonlinearity and keeps the PLS projection to attain a robust generalization property. In this paper, the standard PLS modeling method is briefly reviewed. Then a neural net PLS (NNPLS) modeling approach is proposed which incorporates feedforward networks into PLS modeling. A multi-input-multi-output nonlinear modeling task is decomposed into linear outer relations and simple nonlinear inner relations, which are performed by a number of single-input-single-output networks. Since only a small network is trained at one time, the over-parametrization problem of the direct neural network approach is circumvented even when the training data are very sparse. A conjugate gradient learning method is employed to train the network. It is shown, by analyzing the NNPLS algorithm, that the global NNPLS model is equivalent to a multilayer feedforward network. Finally, applications of the proposed NNPLS method are presented with comparison to the standard linear PLS method and the direct neural network approach. The proposed neural net PLS method gives better prediction results than the PLS modeling method and the direct neural network approach.

  10. Exploring neural network technology

    SciTech Connect

    Naser, J.; Maulbetsch, J.

    1992-12-01

    EPRI is funding several projects to explore neural network technology, a form of artificial intelligence that some believe may mimic the way the human brain processes information. This research seeks to provide a better understanding of fundamental neural network characteristics and to identify promising utility industry applications. Results to date indicate that the unique attributes of neural networks could lead to improved monitoring, diagnostic, and control capabilities for a variety of complex utility operations. 2 figs.

  11. Advances in Artificial Neural Networks - Methodological Development and Application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  12. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Traditional neural networks like multi-layered perceptrons (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations the input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical backpropagation learning algorithm for interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding a set of solutions to the function approximation problem.
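    The central complication of interval-valued data can be illustrated with a single neuron and a monotone activation: for each weight, the lower output bound takes the lower interval end if the weight is positive and the upper end otherwise, and vice versa for the upper bound. A minimal sketch under that assumption (not the paper's modified backpropagation):

```python
# Interval forward pass through one neuron with a monotone (sigmoid) activation.
# Each input is an interval [lo, hi]; the output is the exact interval of
# attainable activations, exploiting monotonicity of z -> sigmoid(w.x + b).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def interval_neuron(lo, hi, w, b):
    lo, hi, w = map(np.asarray, (lo, hi, w))
    z_lo = np.where(w >= 0, w * lo, w * hi).sum() + b   # smallest attainable pre-activation
    z_hi = np.where(w >= 0, w * hi, w * lo).sum() + b   # largest attainable pre-activation
    return sigmoid(z_lo), sigmoid(z_hi)

# A precise number is just a degenerate interval [x, x].
print(interval_neuron(lo=[0.1, -0.5, 2.0], hi=[0.3, 0.5, 2.0],
                      w=[1.0, -2.0, 0.5], b=0.1))
```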

  13. Fuzzy neural network with fast backpropagation learning

    NASA Astrophysics Data System (ADS)

    Wang, Zhiling; De Sario, Marco; Guerriero, Andrea; Mugnuolo, Raffaele

    1995-03-01

    Neural filters based on a multilayer backpropagation network have been proved able to realize almost any linear or non-linear filter. Because of the slowness of the network's convergence, however, the applicable fields have been limited. In this paper, fuzzy logic is introduced to adjust the learning rate and momentum parameter depending upon the output errors and training times. This greatly improves the convergence of the network. Test curves are shown to demonstrate the fast filter's performance.
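    The abstract does not give the rule base, so the sketch below uses crude crisp rules as a stand-in for the fuzzy adjustment: speed up while the error keeps falling, slow down when it rises. The thresholds and factors are invented for illustration only.

```python
# Crude stand-in for fuzzy learning-rate/momentum adaptation: the paper uses
# fuzzy membership functions over the output error and training time; here
# simple crisp rules mimic the qualitative behavior.
def adapt(lr, momentum, prev_error, error):
    if error < prev_error:            # "error decreasing" -> speed up
        lr = min(lr * 1.05, 1.0)
        momentum = min(momentum + 0.01, 0.95)
    else:                             # "error increasing" -> slow down
        lr = max(lr * 0.7, 1e-4)
        momentum = max(momentum - 0.05, 0.0)
    return lr, momentum

lr, mom, prev = 0.1, 0.5, float("inf")
for err in [1.0, 0.8, 0.65, 0.7, 0.5]:      # a made-up error trace
    lr, mom = adapt(lr, mom, prev, err)
    prev = err
    print(f"error={err:.2f}  lr={lr:.4f}  momentum={mom:.2f}")
```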

  14. Time series prediction using a rational fraction neural network

    SciTech Connect

    Lee, K.; Lee, Y.C.; Barnes, C.; Aldrich, C.H.; Kindel, J.

    1988-01-01

    An efficient neural network based on a rational fraction representation has been trained to perform time series prediction. The network is a generalization of the Volterra-Wiener network while still retaining the computational efficiency of the latter. Because of the second order convergent nature of the learning algorithm, the rational net is computationally far more efficient than multilayer networks. The rational fractional representation is, however, more restrictive than the multilayer networks.

  15. Multilayer weighted social network model

    NASA Astrophysics Data System (ADS)

    Murase, Yohsuke; Török, János; Jo, Hang-Hyun; Kaski, Kimmo; Kertész, János

    2014-11-01

    Recent empirical studies using large-scale data sets have validated the Granovetter hypothesis on the structure of the society in that there are strongly wired communities connected by weak ties. However, as interaction between individuals takes place in diverse contexts, these communities turn out to be overlapping. This implies that the society has a multilayered structure, where the layers represent the different contexts. To model this structure we begin with a single-layer weighted social network (WSN) model showing the Granovetterian structure. We find that when merging such WSN models, a sufficient amount of interlayer correlation is needed to maintain the relationship between topology and link weights, while these correlations destroy the enhancement in the community overlap due to multiple layers. To resolve this, we devise a geographic multilayer WSN model, where the indirect interlayer correlations due to the geographic constraints of individuals enhance the overlaps between the communities and, at the same time, the Granovetterian structure is preserved.

  16. Evaluation of Süleymanköy (Diyarbakir, Eastern Turkey) and Seferihisar (Izmir, Western Turkey) Self Potential Anomalies with Multilayer Perceptron Neural Networks

    NASA Astrophysics Data System (ADS)

    Kaftan, Ilknur; Sindirgi, Petek

    2013-04-01

    Self-potential (SP) is one of the oldest geophysical methods and provides important information about near-surface structures. Several methods have been developed to interpret SP data using simple geometries. This study investigated the inverse solution of a buried, polarized sphere-shaped self-potential (SP) anomaly via Multilayer Perceptron Neural Networks (MLPNN). The polarization angle (α) and depth to the centre of the sphere (h) were estimated. The MLPNN was applied to synthetic and field SP data. In order to assess the capability of the method in detecting the number of sources, the MLPNN was applied to different spherical models at different depths and locations. Additionally, the performance of the MLPNN was tested by adding random noise to the same synthetic test data. The sphere model parameters were successfully recovered under different S/N ratios. The MLPNN method was then applied to two field examples. The first is a cross section taken from the SP anomaly map of the Ergani-Süleymanköy (Turkey) copper mine. The MLPNN was also applied to SP data from the Seferihisar, Izmir (Western Turkey) geothermal field. The MLPNN results showed good agreement with the original synthetic data set, and the technique gave satisfactory results following the addition of 5% and 10% Gaussian noise. The MLPNN results were compared to other SP interpretation techniques, such as the Normalized Full Gradient (NFG), inverse solution and nomogram methods. All of the techniques showed strong similarity. Consequently, the synthetic and field applications of this study show that MLPNN provides a reliable evaluation of self-potential data modelled by the sphere model.

  17. Classification of radar clutter using neural networks.

    PubMed

    Haykin, S; Deng, C

    1991-01-01

    A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented. PMID:18282874

  18. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  19. Critical Branching Neural Networks

    ERIC Educational Resources Information Center

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  20. Neural network applications

    NASA Technical Reports Server (NTRS)

    Padgett, Mary L.; Desai, Utpal; Roppel, T.A.; White, Charles R.

    1993-01-01

    A design procedure is suggested for neural networks which accommodates the inclusion of such knowledge-based systems techniques as fuzzy logic and pairwise comparisons. The use of these procedures in the design of applications combines qualitative and quantitative factors with empirical data to yield a model with justifiable design and parameter selection procedures. The procedure is especially relevant to areas of back-propagation neural network design which are highly responsive to the use of precisely recorded expert knowledge.

  1. Science of artificial neural networks; Proceedings of the Meeting, Orlando, FL, Apr. 21-24, 1992

    SciTech Connect

    Ruck, D.W.

    1992-01-01

    The present conference discusses high-order neural networks with adaptive architecture, a parallel cascaded one-step learning machine, stretch and hammer neural networks, visual grammars for neural networks, the net pruning of a multilayer perceptron, neural correlates of the sensorial and cognitive control of behavior, neural nets for massively parallel optimization, parametric and additive perturbations for global optimization, design rules for multilayer perceptrons, the negative transfer problem in neural networks, and a vision-based neural multimap pattern recognition architecture. Also discussed are function prediction with recurrent neural networks, fuzzy neural computing systems, edge detection via fuzzy neural networks, modeling confusion for autonomous systems, self-organization by fuzzy clustering, neural nets in information retrieval, neighborhoods and trajectories in Kohonen maps, the random structure of error surfaces, and conceptual recognition by neural networks.

  2. Coronary Artery Diagnosis Aided by Neural Network

    NASA Astrophysics Data System (ADS)

    Stefko, Kamil

    2007-01-01

    Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. Application of an optimised feed-forward multi-layer back-propagation neural network (MLBP) for detection of narrowing in coronary artery vessels is presented in this paper. The research was performed using 580 data records from traditional ECG exercise tests confirmed by coronary arteriography results. Each record of the training database included a description of the state of a patient, providing the input data for the neural network. The level and slope of the ST segment of a 12-lead ECG signal recorded at rest and after effort (48 floating-point values) were the main components of the input data for the neural network. Coronary arteriography results (verifying the existence or absence of more than 50% stenosis of the particular coronary vessels) were used as the correct neural network training output pattern. More than 96% of cases were correctly recognised by the specially optimised and thoroughly verified neural network. The leave-one-out method was used for neural network verification, so all 580 data records could be used for training as well as for verification of the neural network.

  3. Multilayer Kohonen network and its separability analysis

    NASA Astrophysics Data System (ADS)

    Liu, Chao-yuan; Li, Jie-Gu

    1995-04-01

    This paper presents a model of a multilayer Kohonen network. Because it obeys the winner-take-all learning rule and projects high-dimensional patterns into a one- or two-dimensional space, the conventional Kohonen network has many limitations in its applications, such as limited pattern separability and the lack of an open-ended structure. Taking advantage of an innovative learning method and its multilayer structure, the multilayer Kohonen network is capable of nonlinear pattern partitioning. Since pattern clusters only need to be labeled with appropriate category names or numbers, the network is an open-ended system, and so it is far more powerful than the conventional Kohonen network. The mechanism of the multilayer Kohonen network is explained in detail, and its nonlinear pattern separability is analyzed theoretically. As the result of an experiment with a two-layer Kohonen network, a set of human head contour figures assigned to diverse categories is shown.

  4. Failure behavior identification for a space antenna via neural networks

    NASA Technical Reports Server (NTRS)

    Sartori, Michael A.; Antsaklis, Panos J.

    1992-01-01

    By using neural networks, a method for the failure behavior identification of a space antenna model is investigated. The proposed method uses three stages. If a fault is suspected by the first stage of fault detection, a diagnostic test is performed on the antenna. The diagnostic test results are used by the second and third stages to identify which fault occurred and to diagnose the extent of the fault, respectively. The first stage uses a multilayer perceptron, the second stage uses a multilayer perceptron and neural networks trained with the quadratic optimization algorithm, a novel training procedure, and the third stage uses backpropagation trained neural networks.

  5. Neural network tomography: network replication from output surface geometry.

    PubMed

    Minnett, Rupert C J; Smith, Andrew T; Lennon, William C; Hecht-Nielsen, Robert

    2011-06-01

    Multilayer perceptron networks whose outputs consist of affine combinations of hidden units using the tanh activation function are universal function approximators and are used for regression, typically by reducing the MSE with backpropagation. We present a neural network weight learning algorithm that directly positions the hidden units within input space by numerically analyzing the curvature of the output surface. Our results show that under some sampling requirements, this method can reliably recover the parameters of a neural network used to generate a data set. PMID:21377326

  6. Hyperbolic Hopfield neural networks.

    PubMed

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states. PMID:24808287

  7. Nested neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    Nested neural networks, consisting of small interconnected subnetworks, allow for the storage and retrieval of neural state patterns of different sizes. The subnetworks are naturally categorized by layers corresponding to spatial frequencies in the pattern field. The storage capacity and the error correction capability of the subnetworks generally increase with the degree of connectivity between layers (the nesting degree). Storage of only a few subpatterns in each subnetwork results in a vast storage capacity for patterns and subpatterns in the nested network, while maintaining high stability and error correction capability.

  8. Optical-Correlator Neural Network Based On Neocognitron

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  9. Neural networks: a biased overview

    SciTech Connect

    Domany, E.

    1988-06-01

    An overview of recent activity in the field of neural networks is presented. The long-range aim of this research is to understand how the brain works. First some of the problems are stated and terminology defined; then an attempt is made to explain why physicists are drawn to the field, and their main potential contribution. In particular, in recent years some interesting models have been introduced by physicists. A small subset of these models is described, with particular emphasis on those that are analytically soluble. Finally a brief review of the history and recent developments of single- and multilayer perceptrons is given, bringing the situation up to date regarding the central immediate problem of the field: search for a learning algorithm that has an associated convergence theorem.

  10. Neural Networks and Micromechanics

    NASA Astrophysics Data System (ADS)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  11. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  12. Program PSNN (Plasma Spectroscopy Neural Network)

    SciTech Connect

    Morgan, W.L.; Larsen, J.T.

    1993-08-01

    This program uses the standard "delta rule" back-propagation supervised training algorithm for multi-layer neural networks. The inputs are line intensities in arbitrary units, which are then normalized within the program. The outputs are Te (eV), Ne (cm⁻³), and a fractional ionization, which in our testing using H- and He-like spectra was N(He)/[N(H) + N(He)].
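    For reference, the delta rule for a single linear output unit updates the weights by the prediction error times the input, Δw = η (t − y) x; back-propagation extends the same error term through hidden layers. A toy sketch with synthetic data (the actual PSNN training set and spectroscopic mapping are not reproduced):

```python
# Delta-rule sketch: a single linear unit trained to map normalized line
# intensities to a scalar target (toy data; not the PSNN training set).
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((50, 4))
X /= X.sum(axis=1, keepdims=True)        # normalize intensities, as PSNN does internally
true_w = np.array([3.0, -1.0, 0.5, 2.0])
t = X @ true_w                           # synthetic targets (e.g., a scaled Te)

w = np.zeros(4)
eta = 0.5
for epoch in range(200):
    for x_i, t_i in zip(X, t):
        y_i = w @ x_i
        w += eta * (t_i - y_i) * x_i     # delta rule: error times input

print("learned weights:", np.round(w, 2), " target:", true_w)
```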

  13. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application- specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
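    A conventional autoassociative network of the kind described above can be sketched in a few lines: bipolar state vectors, Hebbian outer-product weights, and repeated updates that pull a corrupted pattern back toward a stored one. The nexus-specific bit-weight arithmetic is not reproduced here; this is only the classical baseline.

```python
# Conventional autoassociative network sketch (Hopfield-style), illustrating
# the fully connected structure and pattern retrieval described in the text.
import numpy as np

rng = np.random.default_rng(3)
patterns = np.sign(rng.standard_normal((3, 32)))   # three stored +/-1 patterns

# Hebbian outer-product weights, no self-connections.
W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1                       # break ties deterministically
    return state

# Corrupt a stored pattern by flipping a few bits, then let the network settle.
noisy = patterns[0].copy()
noisy[:5] *= -1
restored = recall(noisy)
print("bits matching the stored pattern:", int((restored == patterns[0]).sum()), "of 32")
```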

  14. Web traffic prediction with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gluszek, Adam; Kekez, Michal; Rudzinski, Filip

    2005-02-01

    The main aim of the paper is to present an application of artificial neural networks to web traffic prediction. First, the general problem of time series modelling and forecasting is briefly described. Next, the details of building dynamic process models with neural networks are discussed. At this point, determination of the model structure in terms of its inputs and outputs is the most important question, because this structure is a rough approximation of the dynamics of the modelled process. The following section of the paper presents the results obtained by applying an artificial neural network (a classical multilayer perceptron trained with the backpropagation algorithm) to real-world web traffic prediction. Finally, we discuss the results, describe the weak points of the presented method and propose some alternative approaches.

  15. Neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  16. Neural network construction via back-propagation

    SciTech Connect

    Burwick, T.T.

    1994-06-01

    A method is presented that combines back-propagation with multi-layer neural network construction. Back-propagation is used not only to adjust the weights but also the signal functions. Going from one network to an equivalent one that has additional linear units, the non-linearity of these units, and thus their effective presence, is then introduced via back-propagation (weight-splitting). The back-propagated error causes the network to include new units in order to minimize the error function. We also show how this formalism allows the network to escape local minima.

  17. Parallel processing neural networks

    SciTech Connect

    Zargham, M.

    1988-09-01

    A model for Neural Networks which is based on a particular kind of Petri Net has been introduced. The model has been implemented in C and runs on the Sequent Balance 8000 multiprocessor; however, it can be directly ported to different multiprocessor environments. The potential advantages of using Petri Nets include: (1) the overall system is often easier to understand due to the graphical and precise nature of the representation scheme, (2) the behavior of the system can be analyzed using Petri Net theory. Although the Petri Net is an obvious choice as a basis for the model, the basic Petri Net definition is not adequate to represent the neuronal system. To eliminate certain inadequacies, more information has been added to the Petri Net model. In the model, a token represents either a processor or a post-synaptic potential. Progress through a particular Neural Network is thus graphically depicted in the movement of the processor tokens through the Petri Net.

  18. An introduction to neural networks: A tutorial

    SciTech Connect

    Walker, J.L.; Hill, E.V.K.

    1994-12-31

    Neural networks are a powerful set of mathematical techniques used for solving linear and nonlinear classification and prediction (function approximation) problems. Inspired by studies of the brain, these series and parallel combinations of simple functional units called artificial neurons have the ability to learn or be trained to solve very complex problems. Fundamental aspects of artificial neurons are discussed, including their activation functions, their combination into multilayer feedforward networks with hidden layers, and the use of bias neurons to reduce training time. The back propagation (of errors) paradigm for supervised training of feedforward networks is explained. Then, the architecture and mathematics of a Kohonen self organizing map for unsupervised learning are discussed. Two example problems are given. The first is for the application of a back propagation neural network to learn the correct response to an input vector using supervised training. The second is a classification problem using a self organizing map and unsupervised training.
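
    A minimal NumPy sketch of the tutorial's core ingredients, a one-hidden-layer feedforward network with sigmoid activations and bias terms trained by backpropagation of errors, is given below; the toy data, layer sizes, and learning rate are illustrative choices, not taken from the tutorial.

        import numpy as np

        # Minimal one-hidden-layer feedforward network trained by backpropagation
        # (gradient descent on squared error); sigmoid activations, bias terms included.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (200, 2))
        y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy nonlinear target

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
        W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
        lr = 0.5

        for epoch in range(2000):
            h = sigmoid(X @ W1 + b1)                 # hidden-layer activations
            out = sigmoid(h @ W2 + b2)               # network output
            d_out = (out - y) * out * (1 - out)      # backpropagated output error
            d_h = (d_out @ W2.T) * h * (1 - h)       # error propagated to hidden layer
            W2 -= lr * h.T @ d_out / len(X)
            b2 -= lr * d_out.mean(0)
            W1 -= lr * X.T @ d_h / len(X)
            b1 -= lr * d_h.mean(0)

        print("training accuracy:", ((out > 0.5) == y).mean())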

  19. Neural networks for triggering

    SciTech Connect

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  20. Uniformly sparse neural networks

    NASA Astrophysics Data System (ADS)

    Haghighi, Siamack

    1992-07-01

    Application of neural networks to problems with a large number of sensory inputs is severely limited when the processing elements (PEs) need to be fully connected. This paper presents a new network model in which a trade-off between the number of connections to a node and the number of processing layers can be made. This trade-off is an important issue in the VLSI implementation of neural networks. The performance and capability of a hierarchical pyramidal network architecture of limited fan-in PE layers is analyzed. Analysis of this architecture requires the development of a new learning rule, since each PE has access to limited information about the entire network input. A spatially local unsupervised training rule is developed in which each PE optimizes the fraction of its output variance contributed by input correlations, resulting in PEs behaving as adaptive local correlation detectors. It is also shown that the output of a PE optimally represents the mutual information among the inputs to that PE. Applications of the developed model in image compression and motion detection are presented.
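
    The following sketch illustrates the general idea of spatially local, unsupervised learning in limited fan-in units, using Oja's Hebbian rule as a stand-in so that each unit becomes a local correlation (principal-component) detector; the paper's specific variance-fraction rule and pyramidal architecture are not reproduced, and the patch layout and data are assumptions.

        import numpy as np

        # Sketch of a spatially local, unsupervised update: each limited-fan-in unit sees
        # only a small patch of the input and adapts with Oja's Hebbian rule, so it comes
        # to act as a local correlation (principal-component) detector. This is a stand-in
        # for the paper's variance-fraction rule, which is not reproduced here.
        rng = np.random.default_rng(0)
        n_inputs, fan_in, n_units = 64, 8, 8             # 64 sensors, 8 units of fan-in 8
        patches = np.arange(n_inputs).reshape(n_units, fan_in)

        # correlated toy input: neighbouring sensors share a common signal
        common = rng.normal(size=(5000, n_units, 1))
        x = (common + 0.3 * rng.normal(size=(5000, n_units, fan_in))).reshape(5000, n_inputs)

        W = rng.normal(0, 0.1, (n_units, fan_in))        # one weight vector per local unit
        eta = 0.01
        for sample in x:
            for u in range(n_units):
                v = sample[patches[u]]                   # the unit's local receptive field
                y = W[u] @ v                             # unit output
                W[u] += eta * y * (v - y * W[u])         # Oja's rule: Hebbian term + normalization

        print("unit 0 weights (roughly uniform direction):", np.round(W[0], 2))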

  1. High-performance neural networks. [Neural computers]

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  2. Training Feedforward Neural Networks: An Algorithm Giving Improved Generalization.

    PubMed

    Lee, Charles W.

    1997-01-01

    An algorithm is derived for supervised training in multilayer feedforward neural networks. Relative to the gradient descent backpropagation algorithm it appears to give both faster convergence and improved generalization, whilst preserving the system of backpropagating errors through the network. Copyright 1996 Elsevier Science Ltd. PMID:12662887

  3. Landslide susceptibility assessment in the Uttarakhand area (India) using GIS: a comparison study of prediction capability of naïve bayes, multilayer perceptron neural networks, and functional trees methods

    NASA Astrophysics Data System (ADS)

    Pham, Binh Thai; Tien Bui, Dieu; Pourghasemi, Hamid Reza; Indra, Prakash; Dholakia, M. B.

    2015-12-01

    The objective of this study is to compare the prediction performance of three techniques, Functional Trees (FT), Multilayer Perceptron Neural Networks (MLP Neural Nets), and Naïve Bayes (NB), for landslide susceptibility assessment in the Uttarakhand Area (India). Firstly, a landslide inventory map with 430 landslide locations in the study area was constructed from various sources. Landslide locations were then randomly split into two parts: (i) 70 % of landslide locations used for training the models and (ii) 30 % employed for the validation process. Secondly, a total of eleven landslide conditioning factors including slope angle, slope aspect, elevation, curvature, lithology, soil, land cover, distance to roads, distance to lineaments, distance to rivers, and rainfall were used in the analysis to elucidate the spatial relationship between these factors and landslide occurrences. Feature selection with the Linear Support Vector Machine (LSVM) algorithm was employed to assess the prediction capability of these conditioning factors in the landslide models. Subsequently, the NB, MLP Neural Nets, and FT models were constructed using the training dataset. Finally, success-rate and predictive-rate curves were employed to validate and compare the predictive capability of the three models. Overall, all three models performed very well for landslide susceptibility assessment. Of these, the MLP Neural Nets and the FT models had almost the same predictive capability, with the MLP Neural Nets (AUC = 0.850) slightly better than the FT model (AUC = 0.849). The NB model (AUC = 0.838) had the lowest predictive capability. Landslide susceptibility maps were finally developed using these three models. These maps should be helpful to planners and engineers for development activities and land-use planning.
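
    A rough scikit-learn sketch of this kind of model comparison is shown below on synthetic conditioning-factor data; a decision tree is used as a stand-in for Functional Trees (not available in scikit-learn), and the feature set, split, and hyperparameters are assumptions rather than the study's actual configuration.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import roc_auc_score

        # Sketch of the model comparison on synthetic "conditioning factor" data.
        # A decision tree stands in for Functional Trees, which scikit-learn does not implement.
        X, y = make_classification(n_samples=430, n_features=11, n_informative=8, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)  # 70/30 split

        models = {
            "Naive Bayes": GaussianNB(),
            "MLP Neural Net": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
            "Tree (FT stand-in)": DecisionTreeClassifier(max_depth=5, random_state=0),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(f"{name}: AUC = {auc:.3f}")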

  4. Neural networks for self-learning control systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Derrick H.; Widrow, Bernard

    1990-01-01

    It is shown how a neural network can learn of its own accord to control a nonlinear dynamic system. An emulator, a multilayered neural network, learns to identify the system's dynamic characteristics. The controller, another multilayered neural network, next learns to control the emulator. The self-trained controller is then used to control the actual dynamic system. The learning process continues as the emulator and controller improve and track the physical process. An example is given to illustrate these ideas. The 'truck backer-upper,' a neural network controller that steers a trailer truck while the truck is backing up to a loading dock, is demonstrated. The controller is able to guide the truck to the dock from almost any initial position. The technique explored should be applicable to a wide variety of nonlinear control problems.

  5. Forecasting PM10 in Algiers: efficacy of multilayer perceptron networks.

    PubMed

    Abderrahim, Hamza; Chellali, Mohammed Reda; Hamou, Ahmed

    2016-01-01

    Air quality forecasting has acquired high importance due to the negative impacts of atmospheric pollution on the environment and human health. The artificial neural network is one of the most common soft computing methods that can be applied to such complex problems. In this paper, we used a multilayer perceptron neural network to forecast the daily averaged concentration of respirable suspended particulates with an aerodynamic diameter of not more than 10 μm (PM10) in Algiers, Algeria. The data for training and testing the network are based on data sampled from 2002 to 2006, collected by the SAMASAFIA network center at the El Hamma station. The meteorological data, air temperature, relative humidity, and wind speed, are used as network input parameters in the formation of the model. The training patterns used correspond to 41 days of data. The performance of the developed models was evaluated on the basis of the index of agreement and other statistical parameters. It was seen that the overall performance of the model with 15 neurons is better than that of the models with 5 and 10 neurons. The results of the multilayer network with as few as one hidden layer and 15 neurons were considerably more reasonable than those with 5 and 10 neurons. Finally, an error of around 9% has been reached. PMID:26381787

  6. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  7. Neural network method for characterizing video cameras

    NASA Astrophysics Data System (ADS)

    Zhou, Shuangquan; Zhao, Dazun

    1998-08-01

    This paper presents a neural network method for characterizing a color video camera. A multilayer feedforward network, trained with the error back-propagation learning rule, is used as a nonlinear transformer to model a camera, realizing a mapping from the CIELAB color space to RGB color space. With a SONY video camera, D65 illuminant, Pritchard Spectroradiometer, 410 JIS color charts as training data and 36 charts as testing data, results show that the mean error of the training data is 2.9 and that of the testing data is 4.0 in a 256^3 RGB space.

  8. AUTOMATED DEFECT CLASSIFICATION USING AN ARTIFICIAL NEURAL NETWORK

    SciTech Connect

    Chady, T.; Caryk, M.; Piekarczyk, B.

    2009-03-03

    An automated defect classification algorithm based on an artificial neural network with a multilayer backpropagation structure was utilized. Selected features of flaws were used as input data. In order to train the neural network it is necessary to prepare learning data, which is a representative database of defects. Database preparation requires the following steps: image acquisition and pre-processing, image enhancement, defect detection and feature extraction. Real digital radiographs of welded parts of a ship were used for this purpose.

  9. Classification of Magneto-Optic Images using Neural Networks

    NASA Technical Reports Server (NTRS)

    Nath, Shridhar; Wincheski, Buzz; Fulton, Jim; Namkung, Min

    1994-01-01

    A real-time imaging system with a neural network classifier has been incorporated on a Macintosh computer in conjunction with an MOI system. This system images rivets on aircraft aluminium structures using eddy currents and magnetic imaging. Moment invariant functions from the image of a rivet are used to train a multilayer perceptron neural network to classify the rivets as good or bad (rivets with cracks).

  10. Automated Defect Classification Using AN Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Chady, T.; Caryk, M.; Piekarczyk, B.

    2009-03-01

    An automated defect classification algorithm based on an artificial neural network with a multilayer backpropagation structure was utilized. Selected features of flaws were used as input data. In order to train the neural network it is necessary to prepare learning data, which is a representative database of defects. Database preparation requires the following steps: image acquisition and pre-processing, image enhancement, defect detection and feature extraction. Real digital radiographs of welded parts of a ship were used for this purpose.

  11. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.

  12. Application of adaptive boosting to EP-derived multilayer feed-forward neural networks (MLFN) to improve benign/malignant breast cancer classification

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Masters, Timothy D.; Lo, Joseph Y.; McKee, Dan

    2001-07-01

    A new neural network technology was developed for improving the benign/malignant diagnosis of breast cancer using mammogram findings. A new paradigm, Adaptive Boosting (AB), uses a markedly different theory in solving Computational Intelligence (CI) problems. AB, a new machine learning paradigm, focuses on finding weak learning algorithm(s) that initially need to provide only slightly better than random performance (i.e., approximately 55%) when processing a mammogram training set. Then, by successive development of additional architectures (using the mammogram training set), the adaptive boosting process improves the performance of the basic Evolutionary Programming derived neural network architectures. The results of these several EP-derived hybrid architectures are then intelligently combined and tested using a similar validation mammogram data set. Optimization focused on improving specificity and positive predictive value at very high sensitivities, where an analysis of the performance of the hybrid would be most meaningful. Using the DUKE mammogram database of 500 biopsy-proven samples, on average this hybrid was able to achieve (under statistical 5-fold cross-validation) a specificity of 48.3% and a positive predictive value (PPV) of 51.8% while maintaining 100% sensitivity. At 97% sensitivity, a specificity of 56.6% and a PPV of 55.8% were obtained.
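
    The following sketch shows a generic adaptive-boosting loop wrapped around small neural networks; weighted resampling emulates sample weights (scikit-learn's MLPClassifier does not accept them), the data are synthetic, and the Evolutionary Programming derived architectures of the paper are not reproduced.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.neural_network import MLPClassifier

        # Sketch of adaptive boosting with small neural networks as the weak learners.
        # Weighted resampling emulates sample weights; synthetic data only.
        X, y = make_classification(n_samples=500, n_features=10, random_state=0)
        y_pm = 2 * y - 1                                 # labels in {-1, +1} for the boosting math
        rng = np.random.default_rng(0)

        w = np.full(len(y), 1.0 / len(y))                # sample weights
        learners, alphas = [], []
        for t in range(10):
            idx = rng.choice(len(y), size=len(y), p=w)   # resample according to weights
            clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=500, random_state=t)
            clf.fit(X[idx], y[idx])
            pred = 2 * clf.predict(X) - 1
            err = np.clip(w[pred != y_pm].sum(), 1e-10, 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)        # learner weight (AdaBoost-style)
            w *= np.exp(-alpha * y_pm * pred)
            w /= w.sum()
            learners.append(clf)
            alphas.append(alpha)

        score = sum(a * (2 * c.predict(X) - 1) for a, c in zip(alphas, learners))
        print("ensemble training accuracy:", (np.sign(score) == y_pm).mean())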

  13. Optimization of a multilayer neural network by using minimal redundancy maximal relevance-partial mutual information clustering with least square regression.

    PubMed

    Chen, Chao; Yan, Xuefeng

    2015-06-01

    In this paper, an optimized multilayer feed-forward network (MLFN) is developed to construct a soft sensor for controlling naphtha dry point. To overcome the two main flaws in the structure and weight of MLFNs, which are trained by a back-propagation learning algorithm, minimal redundancy maximal relevance-partial mutual information clustering (mPMIc) integrated with least square regression (LSR) is proposed to optimize the MLFN. The mPMIc can determine the location of hidden layer nodes using information in the hidden and output layers, as well as remove redundant hidden layer nodes. These selected nodes are highly related to output data, but are minimally correlated with other hidden layer nodes. The weights between the selected hidden layer nodes and output layer are then updated through LSR. When the redundant nodes from the hidden layer are removed, the ideal MLFN structure can be obtained according to the test error results. In actual applications, the naphtha dry point must be controlled accurately because it strongly affects the production yield and the stability of subsequent operational processes. The mPMIc-LSR MLFN with a simple network size performs better than other improved MLFN variants and existing efficient models. PMID:25055386

  14. A Heterosynaptic Learning Rule for Neural Networks

    NASA Astrophysics Data System (ADS)

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects further remote synapses of the pre- and postsynaptic neuron. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.

  15. Evolutionary games on multilayer networks: a colloquium

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Wang, Lin; Szolnoki, Attila; Perc, Matjaž

    2015-05-01

    Networks form the backbone of many complex systems, ranging from the Internet to human societies. Accordingly, not only is the range of our interactions limited and thus best described and modeled by networks, it is also a fact that the networks that are an integral part of such models are often interdependent or even interconnected. Networks of networks or multilayer networks are therefore a more apt description of social systems. This colloquium is devoted to evolutionary games on multilayer networks, and in particular to the evolution of cooperation as one of the main pillars of modern human societies. We first give an overview of the most significant conceptual differences between single-layer and multilayer networks, and we provide basic definitions and a classification of the most commonly used terms. Subsequently, we review fascinating and counterintuitive evolutionary outcomes that emerge due to different types of interdependencies between otherwise independent populations. The focus is on coupling through the utilities of players, through the flow of information, as well as through the popularity of different strategies on different network layers. The colloquium highlights the importance of pattern formation and collective behavior for the promotion of cooperation under adverse conditions, as well as the synergies between network science and evolutionary game theory.

  16. Model neural networks

    SciTech Connect

    Kepler, T.B.

    1989-01-01

    After a brief introduction to the techniques and philosophy of neural network modeling by spin-glass-inspired systems, the author investigates several properties of these discrete models for autoassociative memory. Memories are represented as patterns of neural activity; their traces are stored in a distributed manner in the matrix of synaptic coupling strengths. Recall is dynamic: an initial state containing partial information about one of the memories evolves toward that memory. Activity in each neuron creates fields at every other neuron, the sum total of which determines its activity. By averaging over the space of interaction matrices, with memory constraints enforced by the choice of measure, he shows that there exist universality classes defined by families of field distributions and the associated network capacities. He demonstrates the dominant role played by the field distribution in determining the size of the domains of attraction and presents, in two independent ways, an expression for this size. He presents a class of convergent learning algorithms which improve upon known algorithms for producing such interaction matrices. He demonstrates that spurious states, or unexperienced memories, may be practically suppressed by the inducement of n-cycles and chaos. He investigates aspects of chaos in these systems, and then leaves discrete modeling to implement the analysis of chaotic behavior on a continuous-valued network realized in electronic hardware. In each section he combines analytical calculations and computer simulations.

  17. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.

  18. A multi-layer network approach to MEG connectivity analysis

    PubMed Central

    Brookes, Matthew J.; Tewarie, Prejaas K.; Hunt, Benjamin A.E.; Robson, Sian E.; Gascoyne, Lauren E.; Liddle, Elizabeth B.; Liddle, Peter F.; Morris, Peter G.

    2016-01-01

    Recent years have shown the critical importance of inter-regional neural network connectivity in supporting healthy brain function. Such connectivity is measurable using neuroimaging techniques such as MEG, however the richness of the electrophysiological signal makes gaining a complete picture challenging. Specifically, connectivity can be calculated as statistical interdependencies between neural oscillations within a large range of different frequency bands. Further, connectivity can be computed between frequency bands. This pan-spectral network hierarchy likely helps to mediate simultaneous formation of multiple brain networks, which support ongoing task demand. However, to date it has been largely overlooked, with many electrophysiological functional connectivity studies treating individual frequency bands in isolation. Here, we combine oscillatory envelope based functional connectivity metrics with a multi-layer network framework in order to derive a more complete picture of connectivity within and between frequencies. We test this methodology using MEG data recorded during a visuomotor task, highlighting simultaneous and transient formation of motor networks in the beta band, visual networks in the gamma band and a beta to gamma interaction. Having tested our method, we use it to demonstrate differences in occipital alpha band connectivity in patients with schizophrenia compared to healthy controls. We further show that these connectivity differences are predictive of the severity of persistent symptoms of the disease, highlighting their clinical relevance. Our findings demonstrate the unique potential of MEG to characterise neural network formation and dissolution. Further, we add weight to the argument that dysconnectivity is a core feature of the neuropathology underlying schizophrenia. PMID:26908313

  19. A multi-layer network approach to MEG connectivity analysis.

    PubMed

    Brookes, Matthew J; Tewarie, Prejaas K; Hunt, Benjamin A E; Robson, Sian E; Gascoyne, Lauren E; Liddle, Elizabeth B; Liddle, Peter F; Morris, Peter G

    2016-05-15

    Recent years have shown the critical importance of inter-regional neural network connectivity in supporting healthy brain function. Such connectivity is measurable using neuroimaging techniques such as MEG, however the richness of the electrophysiological signal makes gaining a complete picture challenging. Specifically, connectivity can be calculated as statistical interdependencies between neural oscillations within a large range of different frequency bands. Further, connectivity can be computed between frequency bands. This pan-spectral network hierarchy likely helps to mediate simultaneous formation of multiple brain networks, which support ongoing task demand. However, to date it has been largely overlooked, with many electrophysiological functional connectivity studies treating individual frequency bands in isolation. Here, we combine oscillatory envelope based functional connectivity metrics with a multi-layer network framework in order to derive a more complete picture of connectivity within and between frequencies. We test this methodology using MEG data recorded during a visuomotor task, highlighting simultaneous and transient formation of motor networks in the beta band, visual networks in the gamma band and a beta to gamma interaction. Having tested our method, we use it to demonstrate differences in occipital alpha band connectivity in patients with schizophrenia compared to healthy controls. We further show that these connectivity differences are predictive of the severity of persistent symptoms of the disease, highlighting their clinical relevance. Our findings demonstrate the unique potential of MEG to characterise neural network formation and dissolution. Further, we add weight to the argument that dysconnectivity is a core feature of the neuropathology underlying schizophrenia. PMID:26908313

  20. Inverse kinematics problem in robotics using neural networks

    NASA Technical Reports Server (NTRS)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
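
    A simplified sketch of the approach is given below for a planar 2-DOF arm rather than the paper's 3-DOF spatial manipulator; the link lengths, joint-angle ranges, and network configuration are assumptions chosen so that the inverse mapping is single-valued.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Sketch of learning inverse kinematics: a planar 2-DOF arm stands in for the
        # paper's 3-DOF spatial manipulator. The network maps end-effector (x, y) to
        # joint angles; accuracy is checked by pushing predictions back through the
        # forward kinematics.
        L1, L2 = 1.0, 0.7
        def forward(q):                                  # forward kinematics
            x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
            y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
            return np.column_stack([x, y])

        rng = np.random.default_rng(0)
        q = np.column_stack([rng.uniform(0, np.pi / 2, 5000),   # restrict angles to one branch so
                             rng.uniform(0, np.pi, 5000)])      # the inverse mapping is single valued
        p = forward(q)

        net = MLPRegressor(hidden_layer_sizes=(40, 40), max_iter=3000, random_state=0)
        net.fit(p, q)                                    # position -> joint angles

        p_test = forward(np.column_stack([rng.uniform(0, np.pi / 2, 200),
                                          rng.uniform(0, np.pi, 200)]))
        err = np.linalg.norm(forward(net.predict(p_test)) - p_test, axis=1)
        print("mean end-effector error:", err.mean())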

  1. The structure and dynamics of multilayer networks

    NASA Astrophysics Data System (ADS)

    Boccaletti, S.; Bianconi, G.; Criado, R.; del Genio, C. I.; Gómez-Gardeñes, J.; Romance, M.; Sendiña-Nadal, I.; Wang, Z.; Zanin, M.

    2014-11-01

    In the past years, network theory has successfully characterized the interaction among the constituents of a variety of complex systems, ranging from biological to technological and social systems. However, until recently, attention was almost exclusively given to networks in which all components were treated on an equivalent footing, while neglecting all the extra information about the temporal or context-related properties of the interactions under study. Only in recent years, taking advantage of the enhanced resolution in real data sets, have network scientists directed their interest to the multiplex character of real-world systems, and explicitly considered the time-varying and multilayer nature of networks. We offer here a comprehensive review of both the structural and dynamical organization of graphs made of diverse relationships (layers) between their constituents, and cover several relevant issues, from a full redefinition of the basic structural measures, to understanding how the multilayer nature of the network affects processes and dynamics.

  2. Interacting neural networks.

    PubMed

    Metzler, R; Kinzel, W; Kanter, I

    2000-08-01

    Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron which is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as a decision-making algorithm in a model of a closed market (the El Farol Bar problem, or Minority Game, in which a set of agents each has to make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random. PMID:11088736

  3. Dynamic interactions in neural networks

    SciTech Connect

    Arbib, M.A.; Amari, S.

    1989-01-01

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  4. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  5. Neural Networks for the Beginner.

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  6. Privacy-preserving backpropagation neural network learning.

    PubMed

    Chen, Tingting; Zhong, Sheng

    2009-10-01

    With the development of distributed computing environments, many learning problems now have to deal with distributed input data. To enhance cooperation in learning, it is important to address the privacy concern of each data holder by extending the privacy preservation notion to original learning algorithms. In this paper, we focus on preserving the privacy in an important learning model, multilayer neural networks. We present a privacy-preserving two-party distributed algorithm of backpropagation which allows a neural network to be trained without requiring either party to reveal her data to the other. We provide complete correctness and security analysis of our algorithms. The effectiveness of our algorithms is verified by experiments on various real world data sets. PMID:19709975

  7. Random walk centrality in interconnected multilayer networks

    NASA Astrophysics Data System (ADS)

    Solé-Ribalta, Albert; De Domenico, Manlio; Gómez, Sergio; Arenas, Alex

    2016-06-01

    Real-world complex systems exhibit multiple levels of relationships. In many cases they must be modeled as interconnected multilayer networks, characterizing interactions of several types simultaneously. It is of crucial importance in many fields, from economics to biology and from urban planning to social sciences, to identify the most (or the least) influential nodes in a network using centrality measures. However, defining the centrality of actors in interconnected complex networks is not trivial. In this paper, we rely on the tensorial formalism recently proposed to characterize and investigate this kind of complex topology, and extend two well-known random walk centrality measures, the random walk betweenness and closeness centrality, to interconnected multilayer networks. For each of the measures we provide analytical expressions that completely agree with numerical results.
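
    As a simplified illustration of random walks on an interconnected multilayer structure, the sketch below computes an occupation (stationary-distribution) centrality from the supra-adjacency matrix of a two-layer toy network; it does not implement the paper's betweenness or closeness measures, and the coupling scheme is an assumption.

        import numpy as np

        # Sketch: random-walk occupation centrality on an interconnected two-layer network,
        # built from a supra-adjacency matrix. Walks move both within and across layers.
        rng = np.random.default_rng(0)
        n = 6
        A1 = (rng.random((n, n)) < 0.4).astype(float); A1 = np.triu(A1, 1); A1 += A1.T   # layer 1
        A2 = (rng.random((n, n)) < 0.4).astype(float); A2 = np.triu(A2, 1); A2 += A2.T   # layer 2
        C = np.eye(n)                                    # inter-layer coupling of node replicas

        supra = np.block([[A1, C], [C, A2]])             # 2n x 2n supra-adjacency matrix
        P = supra / supra.sum(axis=1, keepdims=True)     # row-stochastic transition matrix

        pi = np.full(2 * n, 1.0 / (2 * n))
        for _ in range(1000):                            # power iteration for the
            pi = pi @ P                                  # stationary distribution

        node_centrality = pi[:n] + pi[n:]                # aggregate the replicas of each node
        print("occupation centrality per node:", np.round(node_centrality, 3))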

  8. Neural Network Development Tool (NETS)

    NASA Technical Reports Server (NTRS)

    Baffes, Paul T.

    1990-01-01

    Artificial neural networks formed from hundreds or thousands of simulated neurons, connected in manner similar to that in human brain. Such network models learning behavior. Using NETS involves translating problem to be solved into input/output pairs, designing network configuration, and training network. Written in C.

  9. Color control of printers by neural networks

    NASA Astrophysics Data System (ADS)

    Tominaga, Shoji

    1998-07-01

    A method is proposed for solving the mapping problem from the 3D color space to the 4D CMYK space of printer ink signals by means of a neural network. The CIE-L*a*b* color system is used as the device-independent color space. The color reproduction problem is considered as the problem of controlling an unknown static system with four inputs and three outputs. A controller determines the CMYK signals necessary to produce the desired L*a*b* values with a given printer. Our solution method for this control problem is based on a two-phase procedure which eliminates the need for UCR and GCR. The first phase determines a neural network as a model of the given printer, and the second phase determines the combined neural network system by combining the printer model and the controller in such a way that it represents an identity mapping in the L*a*b* color space. Then the network of the controller part realizes the mapping from the L*a*b* space to the CMYK space. Practical algorithms are presented in the form of multilayer feedforward networks. The feasibility of the proposed method is shown in experiments using a dye sublimation printer and an ink jet printer.

  10. Neural network classifier of attacks in IP telephony

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and hardening of the network. This analysis is typically based on statistical methods, and the article brings a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is a multilayer perceptron neural network. The article describes the inner structure of the neural network used and gives information about its implementation. The learning set for this neural network is based on real attack data collected from an IP telephony honeypot called Dionaea. We prepare the learning set from real attack data after collecting, cleaning, and aggregating this information. After proper training, the neural network is capable of classifying 6 types of the most commonly used VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks, which are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precautionary steps against attacks.

  11. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for using flow visualization data.

  12. Deinterlacing using modular neural network

    NASA Astrophysics Data System (ADS)

    Woo, Dong H.; Eom, Il K.; Kim, Yoo S.

    2004-05-01

    Deinterlacing is the conversion process from interlaced scan to progressive scan. While many previous algorithms that are based on a weighted sum cause blurring in edge regions, deinterlacing using a neural network can reduce the blurring through recovery of high-frequency components by a learning process, and is found to be robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is then assigned to each region. Through this process, each neural network learns only patterns that are similar, which makes learning more effective and estimation more accurate. But even within each region there are various patterns, such as long edges and texture in the edge region. To solve this problem, a modular neural network is proposed. In the proposed modular neural network, two modules are combined at the output node. One is for the low-frequency features of the local area of the input image, and the other is for the high-frequency features. With this structure, each modular neural network can learn different patterns while compensating for the drawbacks of its counterpart. Therefore it can adapt effectively to various patterns within each region. In simulation, the proposed algorithm shows better performance compared with conventional deinterlacing methods and the single-neural-network method.

  13. Computational capabilities of recurrent NARX neural networks.

    PubMed

    Siegelmann, H T; Horne, B G; Giles, C L

    1997-01-01

    Recently, fully connected recurrent neural networks have been proven to be computationally rich, at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t) = Psi(u(t-n_u), ..., u(t-1), u(t), y(t-n_y), ..., y(t-1)), where u(t) and y(t) represent the input and output of the network at time t, n_u and n_y are the input and output orders, and the function Psi is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use NARX models, rather than conventional recurrent networks, without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power. PMID:18255858
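
    A rough sketch of a NARX-style predictor in the sense of the formula above is shown below, with a scikit-learn MLP playing the role of Psi; the synthetic plant, lag orders, and closed-loop evaluation scheme are illustrative assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Sketch of a NARX-style model y(t) = Psi(u(t-n_u..t), y(t-n_y..t-1)) with an MLP
        # as Psi. The plant below is synthetic; lag orders and layer size are illustrative.
        rng = np.random.default_rng(0)
        T, nu, ny = 2000, 2, 2
        u = rng.uniform(-1, 1, T)
        y = np.zeros(T)
        for t in range(2, T):                            # a simple nonlinear plant to identify
            y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + np.tanh(u[t - 1]) + 0.3 * u[t - 2]

        def features(u, y, t):                           # [u(t), u(t-1), u(t-2), y(t-1), y(t-2)]
            return np.concatenate([u[t - nu:t + 1][::-1], y[t - ny:t][::-1]])

        start = max(nu, ny)
        X = np.array([features(u, y, t) for t in range(start, T)])
        target = y[start:]

        narx = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
        narx.fit(X[:1500], target[:1500])                # one-step-ahead (series-parallel) training

        y_sim = y.copy()                                 # closed-loop (parallel) simulation:
        for t in range(1500 + start, T):                 # feed the model its own past outputs
            y_sim[t] = narx.predict(features(u, y_sim, t).reshape(1, -1))[0]
        print("closed-loop RMSE:", np.sqrt(np.mean((y_sim[1500 + start:] - y[1500 + start:]) ** 2)))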

  14. Auto-clustering of mugshots using multilayer Kohonen network

    NASA Astrophysics Data System (ADS)

    Liu, Chao-yuan; Li, Jie-Gu

    1995-03-01

    This paper proposes a multi-layer neural network system to classify police mugshots according to the contours of the heads. In order to efficiently acquire enough information from the mugshots, an interactive algorithm performing image pre-processing, including segmentation and curve fitting, is presented, by which the contours of the human heads are extracted. From the contours obtained, a set of feature vectors consisting of 16 normalized measures is gathered. Since the feature vectors are not linearly separable in Hilbert space, a two-layer Kohonen network is implemented to cluster these vectors. It is demonstrated that the multi-layer Kohonen network can perform a non-linear partition, so it has more powerful pattern separability than the conventional Kohonen network. It is also shown that a two-layer Kohonen network is sufficient for the present non-linear partition problem. About 100 samples of mugshots are involved in the research, and the results are given.
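
    For reference, a minimal single-layer Kohonen self-organizing map on synthetic 16-dimensional feature vectors is sketched below; the paper's two-layer variant, the interactive contour extraction, and the mugshot features themselves are not reproduced.

        import numpy as np

        # Minimal single-layer Kohonen self-organizing map on synthetic 16-dimensional
        # feature vectors (the paper's two-layer variant is not reproduced here).
        rng = np.random.default_rng(0)
        data = rng.normal(size=(100, 16)) + rng.choice([-2, 0, 2], size=(100, 1))  # rough clusters

        grid = 4                                         # 4x4 map of units
        W = rng.normal(size=(grid * grid, 16))
        coords = np.array([(i, j) for i in range(grid) for j in range(grid)], dtype=float)

        for epoch in range(50):
            lr = 0.5 * (1 - epoch / 50)                  # decaying learning rate
            sigma = 2.0 * (1 - epoch / 50) + 0.5         # decaying neighbourhood width
            for x in data:
                winner = np.argmin(np.linalg.norm(W - x, axis=1))       # best-matching unit
                d2 = np.sum((coords - coords[winner]) ** 2, axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))                      # neighbourhood function
                W += lr * h[:, None] * (x - W)                          # move units toward x

        labels = [np.argmin(np.linalg.norm(W - x, axis=1)) for x in data]
        print("units used as cluster labels:", sorted(set(labels)))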

  15. Correcting wave predictions with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Makarynskyy, O.; Makarynska, D.

    2003-04-01

    Predictions of wind waves with different lead times are necessary for a wide range of coastal and open-ocean activities. Numerical wave models, which usually provide this information, are based on deterministic equations that do not entirely account for the complexity and uncertainty of the wave generation and dissipation processes. An attempt to improve short-term forecasts of wave parameters using artificial neural networks is reported. In recent years, artificial neural networks have been used in a number of coastal engineering applications due to their ability to approximate nonlinear mathematical behavior without a priori knowledge of the interrelations among the elements within a system. Common multilayer feed-forward networks, with nonlinear transfer functions in the hidden layers, were developed and employed to forecast the wave characteristics over one-hour intervals starting from one up to 24 hours, and to correct these predictions. Three non-overlapping data sets of wave characteristics, all from a buoy moored roughly 60 miles west of the Aran Islands, west coast of Ireland, were used to train and validate the neural nets involved. The networks were trained with the error back-propagation algorithm. Time series plots and scatterplots of the wave characteristics, as well as tables with statistics, show an improvement of the results achieved due to the correction procedure employed.

  16. Inversion of surface parameters using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Olvera, J.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A neural network approach to the inversion of surface scattering parameters is presented. Simulated data sets based on a surface scattering model are used so that the data may be viewed as taken from a completely known randomly rough surface. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) are tested on the simulated backscattering data. The RMS error of training the FL network is found to be less than one half the error of the BP network while requiring one to two orders of magnitude less CPU time. When applied to inversion of parameters from a statistically rough surface, the FL method is successful at recovering the surface permittivity, the surface correlation length, and the RMS surface height in less time and with less error than the BP network. Further applications of the FL neural network to the inversion of parameters from backscatter measurements of an inhomogeneous layer above a half space are shown.

  17. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.

  18. Neural Networks for Readability Analysis.

    ERIC Educational Resources Information Center

    McEneaney, John E.

    This paper describes and reports on the performance of six related artificial neural networks that have been developed for the purpose of readability analysis. Two networks employ counts of linguistic variables that simulate a traditional regression-based approach to readability. The remaining networks determine readability from "visual snapshots"…

  19. Estimation of bullet striation similarity using neural networks.

    PubMed

    Banno, Atsuhiko

    2004-05-01

    A new method that searches for similar striation patterns using neural networks is described. Neural networks have been developed based on the human brain, which is good at pattern recognition. Therefore, neural networks would be expected to be effective in identifying striated toolmarks on bullets. The neural networks used in this study deal with binary signals derived from striation images. These signals play a significant role in identification, because they are the key to the individuality of the striations. The neural network searches a database for similar striations by means of these binary signals. The neural network used here is a multilayer network consisting of 96 neurons in the input layer, 15 neurons in the middle, and one neuron in the output layer. Two signals are input into the network and a score is estimated based on the similarity of these signals. For this purpose, the network first undergoes a learning phase. To initially test the validity of the procedure, the network identifies artificial patterns that are randomly produced on a personal computer. The results were acceptable and showed robustness to the deformation of patterns. Moreover, with ten unidentified bullets and ten database bullets, the network was consistently able to select the correct pair. PMID:15171166
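
    The described topology can be sketched roughly as follows, with synthetic binary signals; splitting the 96 inputs into two 48-bit signals and the use of a regression-style similarity score are assumptions made for illustration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Sketch of the described topology: 96 input neurons (here, two 48-bit striation
        # signals), 15 hidden neurons, one output giving a similarity score. The
        # "striations" below are synthetic binary signals.
        rng = np.random.default_rng(0)
        def pair(similar):
            a = rng.integers(0, 2, 48)
            b = a.copy() if similar else rng.integers(0, 2, 48)
            if similar:
                flip = rng.choice(48, 5, replace=False)  # a few bits differ even for matches
                b[flip] = 1 - b[flip]
            return np.concatenate([a, b])

        X = np.array([pair(i % 2 == 0) for i in range(2000)])
        y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(2000)])   # 1 = same source

        net = MLPRegressor(hidden_layer_sizes=(15,), max_iter=2000, random_state=0)
        net.fit(X[:1600], y[:1600])
        scores = net.predict(X[1600:])
        print("mean score, matching pairs:", scores[y[1600:] == 1].mean())
        print("mean score, non-matching pairs:", scores[y[1600:] == 0].mean())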

  20. Multilayer network decoding versatility and trust

    NASA Astrophysics Data System (ADS)

    Sarkar, Camellia; Yadav, Alok; Jalan, Sarika

    2016-01-01

    In recent years, multilayer networks have increasingly been realized as a more realistic framework for understanding emergent physical phenomena in complex real-world systems. We analyze massive time-varying social data drawn from the largest film industry of the world under a multilayer network framework. The framework enables us to evaluate the versatility of actors, which turns out to be an intrinsic property of lead actors. Versatility in dimers suggests that working with different types of nodes is more beneficial than working with similar ones. However, the triangles yield a different relation between the type of co-actor and the success of lead nodes, indicating the importance of higher-order motifs in understanding the properties of the underlying system. Furthermore, despite the degree-degree correlations of the entire networks being neutral, multilayering picks up different values of correlation, indicating positive connotations like trust in the recent years. The analysis of the weak ties of the industry uncovers nodes from a lower-degree regime that are important in linking Bollywood clusters. The framework and the tools used herein may be used for unraveling the complexity of other real-world systems.

  1. Neural Networks Of VLSI Components

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1991-01-01

    Concept for design of electronic neural network calls for assembly of very-large-scale integrated (VLSI) circuits of few standard types. Each VLSI chip, which contains both analog and digital circuitry, used in modular or "building-block" fashion by interconnecting it in any of variety of ways with other chips. Feedforward neural network in typical situation operates under control of host computer and receives inputs from, and sends outputs to, other equipment.

  2. Correlational Neural Networks.

    PubMed

    Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman

    2016-02-01

    Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches. PMID:26654210

  3. Sea ice classification using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error, while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.

  4. Neural-Network-Development Program

    NASA Technical Reports Server (NTRS)

    Phillips, Todd A.

    1993-01-01

    NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.

  5. Financial Time Series Prediction Using Spiking Neural Networks

    PubMed Central

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two “traditional”, rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  6. Financial time series prediction using spiking neural networks.

    PubMed

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  7. Shale Gas reservoirs characterization using neural network

    NASA Astrophysics Data System (ADS)

    Ouadfeul, Sid-Ali; Aliouane, Leila

    2014-05-01

    In this paper, an attempt to enhance shale gas reservoir characterization from well-log data using a neural network is presented. The goal is to predict the Total Organic Carbon (TOC) in boreholes where TOC core rock or TOC well-log measurements do not exist. A multilayer perceptron (MLP) neural network with three layers is established. The MLP input layer is formed of five neurons corresponding to the bulk density, neutron porosity, sonic P-wave slowness and photoelectric absorption coefficient logs. The hidden layer is formed of nine neurons and the output layer of one neuron corresponding to the TOC log. Application to two boreholes located in the Barnett shale formation, where well A is used as a pilot and well B for propagation, clearly shows the efficiency of the neural network method in improving shale gas reservoir characterization. The established formalism plays a highly important role in the economics of shale gas plays and in long-term gas energy production.
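
    A rough sketch of the described network (well-log inputs, one hidden layer of nine neurons, a TOC output) is given below using scikit-learn; the synthetic log values, their relation to TOC, and the train/test split are assumptions standing in for the Barnett wells.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        # Sketch of the described MLP (well-log inputs, one hidden layer of nine neurons,
        # TOC output). The synthetic "logs" below only stand in for real Barnett data.
        rng = np.random.default_rng(0)
        n = 800
        logs = np.column_stack([
            rng.normal(2.5, 0.1, n),     # bulk density
            rng.normal(0.12, 0.03, n),   # neutron porosity
            rng.normal(80, 10, n),       # P-wave slowness
            rng.normal(3.5, 0.5, n),     # photoelectric factor
        ])
        toc = 10 * (2.6 - logs[:, 0]) + 5 * logs[:, 1] + rng.normal(0, 0.2, n)   # synthetic TOC

        X = StandardScaler().fit_transform(logs)
        model = MLPRegressor(hidden_layer_sizes=(9,), max_iter=3000, random_state=0)
        model.fit(X[:600], toc[:600])                    # "pilot well" portion
        print("R^2 on held-out samples:", round(model.score(X[600:], toc[600:]), 3))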

  8. File access prediction using neural networks.

    PubMed

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap between memory and disk access times. To address this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors using neural networks that, with proper tuning, significantly improve the accuracy, success-per-reference, and effective-success-rate-per-reference. In particular, we verified that incorrect predictions were reduced from 53.11% to 43.63% for the proposed neural network prediction method with a standard configuration compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to improve further on the misprediction rate and effective-success-rate-per-reference obtained with the standard configuration. Simulations on distributed file system (DFS) traces reveal that the exact-fit radial basis function (RBF) network gives better predictions in high-end systems, whereas the multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation outperforms it in systems having good computational capability. Probabilistic and competitive predictors are the most suitable for workstations with limited resources, and the former predictor is more efficient than the latter for servers handling the most system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file-prediction success rate than the simple perceptron, last successor, stable successor, and best-k-out-of-m predictors. PMID:20421183

  9. Measure of Node Similarity in Multilayer Networks

    PubMed Central

    Mollgaard, Anders; Zettler, Ingo; Dammeyer, Jesper; Jensen, Mogens H.; Lehmann, Sune; Mathiesen, Joachim

    2016-01-01

    The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data on face-to-face contacts. We find that even strongly connected individuals are not more similar with respect to basic personality traits than randomly chosen pairs of individuals. In contrast, several socio-demographic variables have a significant degree of similarity. We further observe that similarity might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dissimilarity for the nodes connected by the strongest links. We finally analyze the overlap between layers in the network for different levels of acquaintanceship. PMID:27300084
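    The abstract does not spell out the measure itself, so the sketch below is only an illustrative stand-in: it compares how often connected node pairs share an attribute value with a random-pair baseline as a link-weight threshold is raised. All data are synthetic.

    ```python
    # Illustrative similarity-vs-weight analysis (not the authors' exact measure).
    import numpy as np

    rng = np.random.default_rng(1)
    n_nodes = 200
    attribute = rng.integers(0, 2, size=n_nodes)      # e.g. a binary trait such as gender
    # One synthetic weighted layer, e.g. number of calls between pairs (upper triangle).
    weights = np.triu(rng.poisson(2.0, size=(n_nodes, n_nodes)), k=1)

    def share_rate(pairs, attr):
        """Fraction of node pairs sharing the same attribute value."""
        i, j = pairs
        return np.mean(attr[i] == attr[j])

    for threshold in (1, 3, 5):
        linked = np.nonzero(weights >= threshold)     # pairs joined by strong-enough links
        if linked[0].size == 0:
            continue
        random_pairs = (rng.integers(0, n_nodes, linked[0].size),
                        rng.integers(0, n_nodes, linked[0].size))
        print(threshold, share_rate(linked, attribute), share_rate(random_pairs, attribute))
    ```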

  10. Measure of Node Similarity in Multilayer Networks.

    PubMed

    Mollgaard, Anders; Zettler, Ingo; Dammeyer, Jesper; Jensen, Mogens H; Lehmann, Sune; Mathiesen, Joachim

    2016-01-01

    The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data on face-to-face contacts. We find that even strongly connected individuals are not more similar with respect to basic personality traits than randomly chosen pairs of individuals. In contrast, several socio-demographic variables have a significant degree of similarity. We further observe that similarity might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dissimilarity for the nodes connected by the strongest links. We finally analyze the overlap between layers in the network for different levels of acquaintanceship. PMID:27300084

  11. The use of artificial neural networks for residential buildings conceptual cost estimation

    NASA Astrophysics Data System (ADS)

    Juszczyk, Michał

    2013-10-01

    Accurate cost estimation in the early phase of a building's design process is of key importance for a project's success. Both underestimation and overestimation may lead to a project's failure in terms of cost. The paper summarizes some research results on the use of neural networks for conceptual cost estimation of residential buildings. In the course of the research the author focused on regression models binding together construction cost and the basic information about residential buildings that is available in the early stage of design. The application of different neural network types was analysed (multilayer perceptron, multilayer perceptron with data compression based on principal component analysis, and radial basis function networks). According to the research results, multilayer perceptron networks proved to be the best neural network type for this problem. The results indicate that a neural approach may be an interesting alternative to traditional methods of conceptual cost estimation in construction projects.
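    One of the compared model families, a multilayer perceptron fed with PCA-compressed inputs, can be sketched as a scikit-learn pipeline. The building descriptors, dimensions and data below are assumptions, not the author's dataset.

    ```python
    # Hypothetical MLP-with-PCA-compression cost model (illustrative only).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 8))          # assumed early-design descriptors (area, storeys, ...)
    cost = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=300)   # synthetic construction cost

    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=4),               # data compression step
        MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0),
    )
    model.fit(X, cost)
    print(model.predict(X[:3]))            # conceptual cost estimates for three buildings
    ```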

  12. Neural network models for a resource allocation problem.

    PubMed

    Walczak, S

    1998-01-01

    University admissions and business personnel offices use a limited number of resources to process an ever-increasing quantity of student and employment applications. Application systems are further constrained to identify and acquire, in a limited time period, those candidates who are most likely to accept an offer of enrolment or employment. Neural networks are a new methodology in this particular domain. Various neural network architectures and learning algorithms are analyzed comparatively to determine the applicability of supervised-learning neural networks to the domain problem of personnel resource allocation and to identify optimal learning strategies in this domain. This paper focuses on multilayer perceptron backpropagation, radial basis function, counterpropagation, general regression, fuzzy ARTMAP, and linear vector quantization neural networks. Each neural network predicts the probability of enrolment and nonenrolment for individual student applicants. Backpropagation networks produced the best overall performance. Network performance is measured by the reduction in counsellors' student case load and the corresponding increase in student enrolment. The backpropagation neural networks achieve a 56% reduction in counsellor case load. PMID:18255946

  13. Competitive epidemic spreading over arbitrary multilayer networks

    NASA Astrophysics Data System (ADS)

    Darabi Sahneh, Faryad; Scoglio, Caterina

    2014-06-01

    This study extends the Susceptible-Infected-Susceptible (SIS) epidemic model for single-virus propagation over an arbitrary graph to a Susceptible-Infected by virus 1-Susceptible-Infected by virus 2-Susceptible (SI1SI2S) epidemic model of two exclusive, competitive viruses over a two-layer network with generic structure, where the network layers represent the distinct transmission routes of the viruses. We find analytical expressions determining extinction, coexistence, and absolute dominance of the viruses after introducing the concepts of survival threshold and absolute-dominance threshold. The main outcome of our analysis is the discovery and proof of a region of long-term coexistence of competitive viruses in nontrivial multilayer networks. We show that coexistence is impossible if the network layers are identical, yet possible if the network layers are distinct. Not only do we rigorously prove a region of coexistence, but we can also quantify it via the interrelation of central nodes across the network layers. Little to no overlap of the layers' central nodes is the key determinant of coexistence. For example, we show both analytically and numerically that positive correlation of network layers makes it difficult for a virus to survive, while in a network with negatively correlated layers survival is easier, but total removal of the other virus is more difficult.
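    A toy discrete-time simulation of two mutually exclusive SIS-type viruses, each spreading on its own layer, conveys the setting; it is a numerical illustration under assumed parameters, not the paper's analytical threshold results.

    ```python
    # Two competing SIS viruses on a two-layer random network (toy simulation).
    import numpy as np

    rng = np.random.default_rng(3)
    n = 300
    adj1 = np.triu((rng.random((n, n)) < 0.02).astype(float), 1)   # layer for virus 1
    adj2 = np.triu((rng.random((n, n)) < 0.02).astype(float), 1)   # layer for virus 2
    adj1 = adj1 + adj1.T                                           # symmetrize both layers
    adj2 = adj2 + adj2.T

    beta1, beta2, delta = 0.06, 0.05, 0.10     # infection and recovery probabilities
    state = np.zeros(n, dtype=int)             # 0 susceptible, 1 or 2 infected
    state[rng.choice(n, size=10, replace=False)] = 1
    state[rng.choice(np.flatnonzero(state == 0), size=10, replace=False)] = 2

    for _ in range(200):
        pressure1 = adj1 @ (state == 1).astype(float)   # infectious neighbours, layer 1
        pressure2 = adj2 @ (state == 2).astype(float)   # infectious neighbours, layer 2
        p1 = 1 - (1 - beta1) ** pressure1
        p2 = 1 - (1 - beta2) ** pressure2
        draw = rng.random(n)
        susceptible = state == 0
        new1 = susceptible & (draw < p1)
        new2 = susceptible & ~new1 & (draw < p1 + p2)   # exclusivity: one virus per node
        state[(state > 0) & (rng.random(n) < delta)] = 0
        state[new1] = 1
        state[new2] = 2

    print("virus 1:", np.mean(state == 1), "virus 2:", np.mean(state == 2))
    ```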

  14. Multilayer Network Analysis of Nuclear Reactions.

    PubMed

    Zhu, Liang; Ma, Yu-Gang; Chen, Qu; Han, Ding-Ding

    2016-01-01

    The nuclear reaction network is usually studied via precise calculation of differential equation sets, and much research interest has been focused on the characteristics of nuclides, such as half-life and size limit. In this paper, however, we adopt the methods from both multilayer and reaction networks, and obtain a distinctive view by mapping all the nuclear reactions in JINA REACLIB database into a directed network with 4 layers: neutron, proton, 4He and the remainder. The layer names correspond to reaction types decided by the currency particles consumed. This combined approach reveals that, in the remainder layer, the β-stability has high correlation with node degree difference and overlapping coefficient. Moreover, when reaction rates are considered as node strength, we find that, at lower temperatures, nuclide half-life scales reciprocally with its out-strength. The connection between physical properties and topological characteristics may help to explore the boundary of the nuclide chart. PMID:27558995

  15. Multilayer Network Analysis of Nuclear Reactions

    PubMed Central

    Zhu, Liang; Ma, Yu-Gang; Chen, Qu; Han, Ding-Ding

    2016-01-01

    The nuclear reaction network is usually studied via precise calculation of differential equation sets, and much research interest has been focused on the characteristics of nuclides, such as half-life and size limit. In this paper, however, we adopt the methods from both multilayer and reaction networks, and obtain a distinctive view by mapping all the nuclear reactions in JINA REACLIB database into a directed network with 4 layers: neutron, proton, 4He and the remainder. The layer names correspond to reaction types decided by the currency particles consumed. This combined approach reveals that, in the remainder layer, the β-stability has high correlation with node degree difference and overlapping coefficient. Moreover, when reaction rates are considered as node strength, we find that, at lower temperatures, nuclide half-life scales reciprocally with its out-strength. The connection between physical properties and topological characteristics may help to explore the boundary of the nuclide chart. PMID:27558995

  16. The Effect of Network Parameters on Pi-Sigma Neural Network for Temperature Forecasting

    NASA Astrophysics Data System (ADS)

    Husaini, Noor Aida; Ghazali, Rozaida; Nawi, Nazri Mohd; Ismail, Lokman Hakim

    In this paper, we present the effect of network parameters on forecasting the temperature of a suburban area in Batu Pahat, Johor. Neural networks have commonly been applied to predict temperature and most other meteorological parameters. However, researchers frequently neglect the network parameters, which might affect a neural network's performance. Therefore, this study explores the effect of network parameters using a Pi-Sigma Neural Network (PSNN) with the backpropagation algorithm. The network's performance is evaluated using the historical temperature dataset of Batu Pahat for one-step-ahead forecasting and benchmarked against a Multilayer Perceptron (MLP) for comparison. We found that network parameters significantly affect the performance of the PSNN for temperature forecasting. Towards the end of this paper, we conclude with the best forecasting model for predicting the temperature, based on the comparisons made in our study.
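    For reference, the forward pass of a Pi-Sigma network is a layer of linear summing ("sigma") units whose outputs are multiplied ("pi") and squashed. The sketch below uses random weights and assumed dimensions; the study trains such weights with backpropagation.

    ```python
    # Minimal Pi-Sigma forward pass with illustrative dimensions and weights.
    import numpy as np

    def pi_sigma_forward(x, W, b):
        """x: (n_features,), W: (k_units, n_features), b: (k_units,)."""
        sigma_units = W @ x + b                   # linear summing units
        product = np.prod(sigma_units)            # single product (pi) unit
        return 1.0 / (1.0 + np.exp(-product))     # sigmoid output, e.g. scaled temperature

    rng = np.random.default_rng(4)
    n_features, k_units = 4, 2                    # e.g. lagged temperatures, 2nd-order PSNN
    W = rng.normal(scale=0.1, size=(k_units, n_features))
    b = rng.normal(scale=0.1, size=k_units)
    print(pi_sigma_forward(rng.normal(size=n_features), W, b))
    ```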

  17. Automatic Analysis of Radio Meteor Events Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Roman, Victor Ştefan; Buiu, Cătălin

    2015-12-01

    Meteor Scanning Algorithms (MESCAL) is a software application for automatic meteor detection from radio recordings, which uses self-organizing maps and feedforward multi-layered perceptrons. This paper aims to present the theoretical concepts behind this application and the main features of MESCAL, showcasing how radio recordings are handled, prepared for analysis, and used to train the aforementioned neural networks. The neural networks trained using MESCAL allow for valuable detection results, such as high correct detection rates and low false-positive rates, and at the same time offer new possibilities for improving the results.

  18. Automatic Analysis of Radio Meteor Events Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Roman, Victor Ştefan; Buiu, Cătălin

    2015-07-01

    Meteor Scanning Algorithms (MESCAL) is a software application for automatic meteor detection from radio recordings, which uses self-organizing maps and feedforward multi-layered perceptrons. This paper aims to present the theoretical concepts behind this application and the main features of MESCAL, showcasing how radio recordings are handled, prepared for analysis, and used to train the aforementioned neural networks. The neural networks trained using MESCAL allow for valuable detection results, such as high correct detection rates and low false-positive rates, and at the same time offer new possibilities for improving the results.

  19. Multiprocessor Neural Network in Healthcare.

    PubMed

    Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes

    2015-01-01

    A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and large number of I/O ports make them well suited for creating such a system. During our research, we wanted to see whether it is possible to efficiently create interaction between the artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D transformation, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG, EEG, etc. PMID:26152990

  20. Neural network ultrasound image analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Alexander C.; Brown, David G.; Pastel, Mary S.

    1993-09-01

    Neural network based analysis of ultrasound image data was carried out on liver scans of normal subjects and those diagnosed with diffuse liver disease. In a previous study, ultrasound images from a group of normal volunteers, Gaucher's disease patients, and hepatitis patients were obtained by Garra et al., who used classical statistical methods to distinguish among these three classes. In the present work, neural network classifiers were employed with the same image features found useful in the previous study for this task. Both standard backpropagation neural networks and a recently developed biologically inspired network called Dystal were used. Classification performance, as measured by the area under a receiver operating characteristic curve, was generally excellent for the backpropagation networks and was roughly comparable to that of the classical statistical discriminators tested on the same data set and documented in the earlier study. Performance of the Dystal network was significantly inferior; however, this may be due to the choice of network parameters. Potential methods for enhancing network performance were identified.

  1. Long-term multilayer adherent network (MAN) expansion, maintenance, and characterization, chemical and genetic manipulation, and transplantation of human fetal forebrain neural stem cells.

    PubMed

    Wakeman, Dustin R; Hofmann, Martin R; Redmond, D Eugene; Teng, Yang D; Snyder, Evan Y

    2009-05-01

    Human neural stem/precursor cells (hNSC/hNPC) have been targeted for application in a variety of research models and as prospective candidates for cell-based therapeutic modalities in central nervous system (CNS) disorders. To this end, the successful derivation, expansion, and sustained maintenance of undifferentiated hNSC/hNPC in vitro, as artificial expandable neurogenic micro-niches, promises a diversity of applications as well as future potential for a variety of experimental paradigms modeling early human neurogenesis, neuronal migration, and neurogenetic disorders, and could also serve as a platform for small-molecule drug screening in the CNS. Furthermore, hNPC transplants provide an alternative substrate for cellular regeneration and restoration of damaged tissue in neurodegenerative disorders such as Parkinson's disease and Alzheimer's disease. Human somatic neural stem/progenitor cells (NSC/NPC) have been derived from a variety of cadaveric sources and proven engraftable in a cytoarchitecturally appropriate manner into the developing and adult rodent and monkey brain while maintaining both functional and migratory capabilities in pathological models of disease. In the following unit, we describe a new procedure that we have successfully employed to maintain operationally defined human somatic NSC/NPC from developing fetal, pre-term post-natal, and adult cadaveric forebrain. Specifically, we outline the detailed methodology for in vitro expansion, long-term maintenance, manipulation, and transplantation of these multipotent precursors. PMID:19455542

  2. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  3. Centroid calculation using neural networks

    NASA Astrophysics Data System (ADS)

    Himes, Glenn S.; Inigo, Rafael M.

    1992-01-01

    Centroid calculation provides a means of eliminating translation problems, which is useful for automatic target recognition. A neural network implementation of centroid calculation is described that uses a spatial filter and a Hopfield network to determine the centroid location of an object. Spatial filtering of a segmented window creates a result whose peak value occurs at the centroid of the input data set. A Hopfield network then finds the location of this peak and hence gives the location of the centroid. Hardware implementations of the networks are described and simulation results are provided.

  4. Neural Networks for Flight Control

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  5. Using Hybrid Algorithm to Improve Intrusion Detection in Multi Layer Feed Forward Neural Networks

    ERIC Educational Resources Information Center

    Ray, Loye Lynn

    2014-01-01

    The need for detecting malicious behavior on computer networks continues to be important to maintaining a safe and secure environment. The purpose of this study was to determine the relationship of multilayer feed forward neural network architecture to the ability to detect abnormal behavior in networks. This involved building, training, and…

  6. Neural networks and applications tutorial

    NASA Astrophysics Data System (ADS)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems, and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers) and parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real-world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  7. Artificial neural networks in medicine

    SciTech Connect

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  8. Neural networks for handwriting recognition

    NASA Astrophysics Data System (ADS)

    Kelly, David A.

    1992-09-01

    The market for a product that can read handwritten forms, such as insurance applications, re-order forms, or checks, is enormous. Companies could save millions of dollars each year if they had an effective and efficient way to read handwritten forms into a computer without human intervention. Urged on by the potential gold mine that an adequate solution would yield, a number of companies and researchers have developed, and are developing, neural network-based solutions to this long-standing problem. This paper briefly outlines the current state-of-the-art in neural network-based handwriting recognition research and products. The first section of the paper examines the potential market for this technology. The next section outlines the steps in the recognition process, followed by a number of the basic issues that need to be dealt with to solve the recognition problem in a real-world setting. Next, an overview of current commercial solutions and research projects shows the different ways that neural networks are applied to the problem. This is followed by a breakdown of the current commercial market and the future outlook for neural network-based handwriting recognition technology.

  9. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  10. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  11. Parameter incremental learning algorithm for neural networks.

    PubMed

    Wan, Sheng; Banta, Larry E

    2006-11-01

    In this paper, a novel stochastic (or online) training algorithm for neural networks, named parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all three benchmark problems used in this paper the PIL algorithm for the MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as the BP algorithm and as easy to use as the BP algorithm. It can therefore be applied, with better performance, to any situation where the standard online BP algorithm is applicable. PMID:17131658
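    The preservation-plus-adaptation idea can be caricatured, for a single linear unit, as an update that penalizes deviation from the previous weights while fitting the new pattern. This is a loose illustrative sketch, not the paper's first-order PIL solution.

    ```python
    # Illustrative "preserve while adapting" online update for a linear unit.
    import numpy as np

    def pil_style_update(w, x, y, lam=1.0, lr=0.05):
        """Minimize squared error on (x, y) plus lam * ||w - w_prev||^2 by gradient steps."""
        w_prev = w.copy()
        for _ in range(20):
            adaptation_grad = (w @ x - y) * x         # fit the newly presented pattern
            preservation_grad = lam * (w - w_prev)    # stay close to the prior weights
            w = w - lr * (adaptation_grad + preservation_grad)
        return w

    rng = np.random.default_rng(5)
    w = rng.normal(size=3)
    for _ in range(200):                              # stream of training patterns
        x = rng.normal(size=3)
        y = x @ np.array([1.0, -2.0, 0.5])
        w = pil_style_update(w, x, y)
    print(w)                                          # should drift toward [1.0, -2.0, 0.5]
    ```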

  12. Prospecting droughts with stochastic artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ochoa-Rivera, Juan Camilo

    2008-04-01

    A non-linear multivariate model based on an artificial neural network multilayer perceptron is presented, which includes a random component. The developed model is applied to generate monthly streamflows, which are used to obtain synthetic annual droughts. The calibration of the model was undertaken using monthly streamflow records from several geographical sites of a basin. The model calibration consisted of training the neural network with the error back-propagation learning algorithm and adding a normally distributed random noise. The model was validated by comparing relevant statistics of the synthetic streamflow series to those of the historical records. Annual droughts were calculated from the generated streamflow series, and the expected values of length, intensity and magnitude of the droughts were then assessed. An identical exercise was carried out with a second-order auto-regressive multivariate model, AR(2), to compare its results with those of the developed model. The proposed model outperforms the AR(2) model in reproducing future drought scenarios.
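    The generation scheme (a one-step-ahead network model plus normally distributed noise) can be sketched as follows; the streamflow data, lag order and noise scale here are synthetic placeholders rather than the study's basin records.

    ```python
    # Synthetic monthly streamflow generation: MLP one-step model + Gaussian noise.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(6)
    flows = np.abs(50 * np.sin(np.arange(600) * 2 * np.pi / 12) + rng.normal(0, 5, 600))

    lag = 2                                          # assumed model order
    X = np.column_stack([flows[i:len(flows) - lag + i] for i in range(lag)])
    y = flows[lag:]
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, y)

    noise_std = np.std(y - model.predict(X))         # scale of the added random component
    synthetic = list(flows[:lag])
    for _ in range(240):                             # 20 years of synthetic monthly flows
        x_t = np.array(synthetic[-lag:]).reshape(1, -1)
        nxt = model.predict(x_t)[0] + rng.normal(0, noise_std)
        synthetic.append(max(nxt, 0.0))              # streamflow cannot be negative
    ```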

  13. Neural networks: A versatile tool from artificial intelligence

    SciTech Connect

    Yama, B.R.; Lineberry, G.T.

    1996-12-31

    Artificial Intelligence research has produced several tools for commercial application in recent years. Artificial Neural Networks (ANNs), Fuzzy Logic, and Expert Systems are some of the techniques that are widely used today in various fields of engineering and business. Among these techniques, ANNs are gaining popularity due to their learning and other brain-like capabilities. Within the mining industry, ANN technology is being utilized with large payoffs for real-time process control applications. In this paper, a brief introduction to ANNs and the associated terminology is given. The neural network development process is outlined, followed by the back-propagation learning algorithm. Next, the development of two multi-layer, feed-forward neural networks is described and the results are presented. One network is developed for prediction of the strength of intact rock specimens, and another network is developed for prediction of mineral concentrations. Preliminary results indicate a predictive error of less than 10% using cross-validation on a limited data set. The performance of the neural network for prediction of mineral concentrations was compared with kriging. It was found that the neural network performed not only satisfactorily but, in some cases, better than the kriging model.

  14. On degree-degree correlations in multilayer networks

    NASA Astrophysics Data System (ADS)

    de Arruda, Guilherme Ferraz; Cozzo, Emanuele; Moreno, Yamir; Rodrigues, Francisco A.

    2016-06-01

    We propose a generalization of the concept of assortativity based on the tensorial representation of multilayer networks, covering the definitions given in terms of Pearson and Spearman coefficients. Our approach can also be applied to weighted networks and provides information about correlations considering pairs of layers. By analyzing the multilayer representation of the airport transportation network, we show that contrasting results are obtained when the layers are analyzed independently or as an interconnected system. Finally, we study the impact of the level of assortativity and heterogeneity between layers on the spreading of diseases. Our results highlight the need to study degree-degree correlations on multilayer systems, instead of on aggregated networks.
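    One concrete ingredient of such an analysis is the correlation between the degrees a node has in two different layers; a hedged sketch with Pearson and Spearman coefficients on random stand-in layers is shown below (the paper's tensorial formulation is more general).

    ```python
    # Inter-layer degree-degree correlation on two synthetic layers.
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    rng = np.random.default_rng(7)
    n = 500
    layer_a = np.triu(rng.random((n, n)) < 0.01, 1)
    layer_b = np.triu(rng.random((n, n)) < 0.01, 1)
    layer_a = layer_a | layer_a.T                    # undirected layers
    layer_b = layer_b | layer_b.T

    deg_a = layer_a.sum(axis=1)                      # degree of each node in layer A
    deg_b = layer_b.sum(axis=1)                      # degree of each node in layer B

    print("Pearson :", pearsonr(deg_a, deg_b)[0])
    print("Spearman:", spearmanr(deg_a, deg_b)[0])
    ```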

  15. A spiking neural network architecture for nonlinear function approximation.

    PubMed

    Iannella, N; Back, A D

    2001-01-01

    Multilayer perceptrons have received much attention in recent years due to their universal approximation capabilities. Normally, such models use real-valued continuous signals, although they are loosely based on biological neuronal networks that encode signals using spike trains. Spiking neural networks are of interest both from a biological point of view and as a method of robust signaling in particularly noisy or difficult environments, so it is important to consider networks based on spike trains. A basic question that needs to be considered, however, is what type of architecture can be used to provide universal function approximation capabilities in spiking networks. In this paper, we propose a spiking neural network architecture using both integrate-and-fire units and delays that is capable of approximating a real-valued function mapping to within a specified degree of accuracy. PMID:11665783
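    A leaky integrate-and-fire unit of the kind such architectures build on can be simulated in a few lines; the parameters and input current below are illustrative rather than taken from the paper.

    ```python
    # Toy leaky integrate-and-fire neuron producing a spike train from an input current.
    import numpy as np

    def lif_spike_train(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
        """Return the time steps at which the neuron fires."""
        v, spikes = 0.0, []
        for t, i_t in enumerate(input_current):
            v += dt / tau * (-v + i_t)               # leaky integration of the input
            if v >= v_thresh:                        # threshold crossing emits a spike
                spikes.append(t)
                v = v_reset
        return spikes

    rng = np.random.default_rng(8)
    current = 1.2 + 0.3 * rng.standard_normal(500)   # noisy constant drive
    print(lif_spike_train(current)[:10])
    ```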

  16. Overview of artificial neural networks.

    PubMed

    Zou, Jinming; Han, Yi; So, Sung-Sau

    2008-01-01

    The artificial neural network (ANN), or simply neural network, is a machine learning method evolved from the idea of simulating the human brain. The data explosion in modern drug discovery research requires sophisticated analysis methods to uncover the hidden causal relationships between single or multiple responses and a large set of properties. The ANN is one of many versatile tools to meet the demand in drug discovery modeling. Compared to a traditional regression approach, the ANN is capable of modeling complex nonlinear relationships. The ANN also has excellent fault tolerance and is fast and highly scalable with parallel processing. This chapter introduces the background of ANN development and outlines the basic concepts crucially important for understanding more sophisticated ANNs. Several commonly used learning methods and network setups are discussed briefly at the end of the chapter. PMID:19065803

  17. Neural Networks For Visual Telephony

    NASA Astrophysics Data System (ADS)

    Gottlieb, A. M.; Alspector, J.; Huang, P.; Hsing, T. R.

    1988-10-01

    By considering how an image is processed by the eye and brain, we may find ways to simplify the task of transmitting complex video images over a telecommunication channel. Just as the retina and visual cortex reduce the amount of information sent to other areas of the brain, electronic systems can be designed to compress visual data, encode features, and adapt to new scenes for video transmission. In this talk, we describe a system inspired by models of neural computation that may, in the future, augment standard digital processing techniques for image compression. In the next few years it is expected that a compact low-cost full-motion video telephone operating over an ISDN basic access line (144 KBits/sec) will be shown to be feasible. These systems will likely be based on a standard digital signal processing approach. In this talk, we discuss an alternative method that does not use standard digital signal processing but instead uses electronic neural networks to realize the large compression necessary for a low bit-rate video telephone. This neural network approach is not being advocated as a near-term solution for visual telephony. However, low bit-rate visual telephony is an area where neural network technology may, in the future, find a significant application.

  18. Syntactic neural network for character recognition

    NASA Astrophysics Data System (ADS)

    Jaravine, Viktor A.

    1992-08-01

    This article presents a synergism of syntactic 2-D parsing of images and multilayered, feed-forward network techniques. This approach makes it possible to build a written-text reading system with an absolute recognition rate for unambiguous text strings. The Syntactic Neural Network (SNN) is created during the image parsing process by capturing the higher-order statistical structure in the ensemble of input image examples. Acquired knowledge is stored in the form of a hierarchical image-element dictionary and a syntactic network. The number of hidden layers and neuron units is not fixed and is determined by the structural complexity of the teaching set. A proposed syntactic neuron differs from a conventional numerical neuron by its symbolic input/output and its use of the dictionary for determining the output. This approach guarantees exact recognition of an image that is a combinatorial variation of the images from the training set. The system is taught to generalize and to make stochastic parsing of distorted and shifted patterns. The generalization enables the system to perform continuous incremental optimization of its work. New image data learned by the SNN do not interfere with previously stored knowledge, thus leading to unlimited storage capacity of the network.

  19. Reducing the dimensionality of data with neural networks.

    PubMed

    Hinton, G E; Salakhutdinov, R R

    2006-07-28

    High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data. PMID:16873662
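    A single-hidden-layer autoencoder trained by plain gradient descent captures the encode/decode idea on a small scale; the layer-by-layer pretraining that makes the paper's deep autoencoders work is omitted in this sketch, and the data are random.

    ```python
    # Tiny autoencoder: 20-dimensional data compressed to a 3-dimensional code.
    import numpy as np

    rng = np.random.default_rng(9)
    X = rng.normal(size=(256, 20))
    X = (X - X.mean(0)) / X.std(0)

    n_code = 3                                       # size of the central code layer
    W1 = rng.normal(scale=0.1, size=(20, n_code)); b1 = np.zeros(n_code)
    W2 = rng.normal(scale=0.1, size=(n_code, 20)); b2 = np.zeros(20)
    lr = 0.01

    for _ in range(2000):
        H = np.tanh(X @ W1 + b1)                     # encoder: data -> low-dimensional code
        X_hat = H @ W2 + b2                          # decoder: code -> reconstruction
        err = X_hat - X
        dW2 = H.T @ err / len(X); db2 = err.mean(0)  # gradients of the reconstruction error
        dH = (err @ W2.T) * (1 - H ** 2)
        dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

    codes = np.tanh(X @ W1 + b1)                     # low-dimensional codes, cf. PCA scores
    print(codes.shape)
    ```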

  20. An optimization methodology for neural network weights and architectures.

    PubMed

    Ludermir, Teresa B; Yamazaki, Akio; Zanchettin, Cleber

    2006-11-01

    This paper introduces a methodology for neural network global optimization. The aim is the simultaneous optimization of multilayer perceptron (MLP) network weights and architectures, in order to generate topologies with few connections and high classification performance for any data set. The approach combines the advantages of simulated annealing, tabu search and the backpropagation training algorithm in order to generate an automatic process for producing networks with high classification performance and low complexity. Experimental results obtained on four classification problems and one prediction problem were better than those obtained by the most commonly used optimization techniques. PMID:17131660
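    The flavour of the weight/architecture search can be conveyed with simulated annealing alone over a cost that mixes classification error with a connection-count penalty; this simplified sketch drops the tabu-search and backpropagation components the paper combines it with.

    ```python
    # Simulated annealing over the weights of a tiny MLP, penalizing active connections.
    import numpy as np

    rng = np.random.default_rng(10)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(float)         # synthetic binary target
    n_hidden = 5

    def predict(w):
        W1 = w[:4 * n_hidden].reshape(4, n_hidden)
        W2 = w[4 * n_hidden:].reshape(n_hidden, 1)
        return (np.tanh(X @ W1) @ W2).ravel() > 0

    def cost(w):
        error = np.mean(predict(w) != y)                     # classification error
        complexity = np.mean(np.abs(w) > 0.05)               # fraction of active connections
        return error + 0.1 * complexity

    w = rng.normal(scale=0.5, size=4 * n_hidden + n_hidden)
    best_w, best_cost, temperature = w.copy(), cost(w), 1.0
    for _ in range(5000):
        candidate = w + rng.normal(scale=0.1, size=w.size)
        delta = cost(candidate) - cost(w)
        if delta < 0 or rng.random() < np.exp(-delta / temperature):
            w = candidate                                    # accept downhill, sometimes uphill
        if cost(w) < best_cost:
            best_w, best_cost = w.copy(), cost(w)
        temperature *= 0.999                                 # geometric cooling schedule
    print(best_cost)
    ```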

  1. Validation and regulation of medical neural networks.

    PubMed

    Rodvold, D M

    2001-01-01

    Using artificial neural networks (ANNs) in medical applications can be challenging because of the often-experimental nature of ANN construction and the "black box" label that is frequently attached to them. In the US, medical neural networks are regulated by the Food and Drug Administration. This article briefly discusses the documented FDA policy on neural networks and the various levels of formal acceptance that neural network development groups might pursue. To assist medical neural network developers in creating robust and verifiable software, this paper provides a development process model targeted specifically to ANNs for critical applications. PMID:11790274

  2. Ranking in interconnected multilayer networks reveals versatile nodes

    NASA Astrophysics Data System (ADS)

    de Domenico, Manlio; Solé-Ribalta, Albert; Omodei, Elisa; Gómez, Sergio; Arenas, Alex

    2015-04-01

    The determination of the most central agents in complex networks is important because they are responsible for a faster propagation of information, epidemics, failures and congestion, among others. A challenging problem is to identify them in networked systems characterized by different types of interactions, forming interconnected multilayer networks. Here we describe a mathematical framework that allows us to calculate centrality in such networks and rank nodes accordingly, finding the ones that play the most central roles in the cohesion of the whole structure, bridging together different types of relations. These nodes are the most versatile in the multilayer network. We investigate empirical interconnected multilayer networks and show that the approaches based on aggregating--or neglecting--the multilayer structure lead to a wrong identification of the most versatile nodes, overestimating the importance of more marginal agents and demonstrating the power of versatility in predicting their role in diffusive and congestion processes.

  3. Standard cell-based implementation of a digital optoelectronic neural-network hardware.

    PubMed

    Maier, K D; Beckstein, C; Blickhan, R; Erhard, W

    2001-03-10

    A standard cell-based implementation of a digital optoelectronic neural-network architecture is presented. The overall structure of the multilayer perceptron network that was used, the optoelectronic interconnection system between the layers, and all components required in each layer are defined. The design process, from VHDL-based modeling through synthesis and partly automatic placing and routing to the final editing of the circuit of one layer of the multilayer perceptron, is described. A suitable approach for the standard cell-based design of optoelectronic systems is presented, and shortcomings of the design tool that was used are pointed out. The layout for the microelectronic circuit of one layer in a multilayer perceptron neural network, with a performance potential one order of magnitude higher than that of purely electronic neural networks, has been successfully designed. PMID:18357111

  4. Standard Cell-Based Implementation of a Digital Optoelectronic Neural-Network Hardware

    NASA Astrophysics Data System (ADS)

    Maier, Klaus D.; Beckstein, Clemens; Blickhan, Reinhard; Erhard, Werner

    2001-03-01

    A standard cell-based implementation of a digital optoelectronic neural-network architecture is presented. The overall structure of the multilayer perceptron network that was used, the optoelectronic interconnection system between the layers, and all components required in each layer are defined. The design process, from VHDL-based modeling through synthesis and partly automatic placing and routing to the final editing of the circuit of one layer of the multilayer perceptron, is described. A suitable approach for the standard cell-based design of optoelectronic systems is presented, and shortcomings of the design tool that was used are pointed out. The layout for the microelectronic circuit of one layer in a multilayer perceptron neural network, with a performance potential one order of magnitude higher than that of purely electronic neural networks, has been successfully designed.

  5. A neural networks study of quinone compounds with trypanocidal activity.

    PubMed

    de Molfetta, Fábio Alberto; Angelotti, Wagner Fernando Delfino; Romero, Roseli Aparecida Francelin; Montanari, Carlos Alberto; da Silva, Albérico Borges Ferreira

    2008-10-01

    This work investigates neural network models for predicting the trypanocidal activity of 28 quinone compounds. Artificial neural networks (ANN), such as multilayer perceptrons (MLP) and Kohonen models, were employed with the aim of modeling the nonlinear relationship between quantum and molecular descriptors and trypanocidal activity. The calculated descriptors and the principal components were used as input to train neural network models to verify the behavior of the nets. The best model for both network models (MLP and Kohonen) was obtained with four descriptors as input. The descriptors were T5 (torsion angle), QTS1 (sum of absolute values of the atomic charges), VOLS2 (volume of the substituent at region B) and HOMO-1 (energy of the molecular orbital below HOMO). These descriptors provide information on the kind of interaction that occurs between the compounds and the biological receptor. Both neural network models used here can predict the trypanocidal activity of the quinone compounds with good agreement, with low errors in the testing set and a high correctness rate. Thanks to the nonlinear model obtained from the neural network models, we can conclude that electronic and structural properties are important factors in the interaction between quinone compounds that exhibit trypanocidal activity and their biological receptors. The final ANN models should be useful in the design of novel trypanocidal quinones having improved potency. PMID:18629551

  6. The design and analysis of effective and efficient neural networks and their applications

    SciTech Connect

    Makovoz, W.V.

    1989-01-01

    A complicated design issue of efficient multilayer neural networks is addressed, and the perceptron and similar neural networks are examined. It is shown that a three-layer perceptron neural network with specially designed learning algorithms provides an efficient framework to solve the exclusive-OR problem using only n - 1 processing elements in the second layer. Two efficient, rapidly converging algorithms for any symmetric Boolean function were developed using only n - 1 processing elements in the perceptron neural network and int(n/2) processing elements in the Adaline and the perceptron neural network with the step-function transfer function. Similar results were obtained for quasi-symmetric Boolean functions using a linear number of processing elements in perceptron neural networks, Adalines, and perceptron neural networks with step-function transfer functions. Generalized Boolean functions are discussed and two rapidly converging algorithms are shown for perceptron neural networks, Adalines, and perceptron neural networks with the step-function transfer function. Many other interesting perceptron neural networks are discussed in the dissertation. Perceptron neural networks are applied to find the largest value of the n inputs. A new perceptron neural network is designed to find the largest value of the n inputs with the minimum number of inputs and the minimum number of layers. New perceptron neural networks are developed to sort n inputs. New, effective and efficient back-propagation neural networks are designed to sort n inputs. The sigmoid transfer function is discussed and a generalized sigmoid function to improve neural network performance was developed. A modified back-propagation learning algorithm was developed that builds any n-input symmetric Boolean function using only int(n/2) processing elements in the second layer.

  7. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.

  8. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    PubMed Central

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems, and this paper aims to avoid the occurrence of both. The selection of the number of hidden neurons is done employing 102 criteria; these evolved criteria are verified by the various computed error values. The proposed criteria for fixing the number of hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with the earlier models available in the literature. PMID:27034973
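    The core ensembling step, averaging the forecasts of several independently trained models, is easy to sketch; the members below (two MLPs and a linear model standing in for an Adaline-style unit) and the synthetic wind series are assumptions for illustration.

    ```python
    # Average the wind-speed forecasts of several regressors (toy ensemble).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(11)
    wind = 8 + 2 * np.sin(np.arange(1000) / 24 * 2 * np.pi) + rng.normal(0, 1, 1000)
    X = np.column_stack([wind[:-3], wind[1:-2], wind[2:-1]])   # three lagged observations
    y = wind[3:]

    members = [
        MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
        MLPRegressor(hidden_layer_sizes=(5, 5), max_iter=2000, random_state=1),
        LinearRegression(),                       # crude stand-in for an Adaline-style member
    ]
    for member in members:
        member.fit(X[:800], y[:800])

    ensemble = np.mean([m.predict(X[800:]) for m in members], axis=0)
    print(np.mean((ensemble - y[800:]) ** 2))     # test mean-squared error of the ensemble
    ```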

  9. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    PubMed

    Ranganayaki, V; Deepa, S N

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems, and this paper aims to avoid the occurrence of both. The selection of the number of hidden neurons is done employing 102 criteria; these evolved criteria are verified by the various computed error values. The proposed criteria for fixing the number of hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with the earlier models available in the literature. PMID:27034973

  10. The hysteretic Hopfield neural network.

    PubMed

    Bharitkar, S; Mendel, J M

    2000-01-01

    A new neuron activation function based on a property found in physical systems--hysteresis--is proposed. We incorporate this neuron activation in a fully connected dynamical system to form the hysteretic Hopfield neural network (HHNN). We then present an analog implementation of this architecture and its associated dynamical equation and energy function. We proceed to prove Lyapunov stability for this new model, and then solve a combinatorial optimization problem (i.e., the N-queen problem) using this network. We demonstrate the advantages of hysteresis by showing increased frequency of convergence to a solution, when the parameters associated with the activation function are varied. PMID:18249816
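    The hysteresis idea, with the switching threshold depending on the unit's current output so that rising and falling inputs follow different branches, can be illustrated with a simple two-threshold unit; the thresholds below are arbitrary, not the paper's parameters.

    ```python
    # Two-threshold hysteretic unit: up and down input sweeps give different outputs.
    import numpy as np

    def hysteretic_step(u, previous_output, rise=0.5, fall=-0.5):
        """Switch on above `rise`, off below `fall`; otherwise keep the previous output."""
        if previous_output <= 0:
            return 1.0 if u > rise else -1.0
        return -1.0 if u < fall else 1.0

    inputs = np.concatenate([np.linspace(-1, 1, 11), np.linspace(1, -1, 11)])
    output, trace = -1.0, []
    for u in inputs:
        output = hysteretic_step(u, output)
        trace.append(output)
    print(list(zip(np.round(inputs, 1), trace)))   # compare the rising and falling sweeps
    ```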

  11. The LILARTI neural network system

    SciTech Connect

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  12. Epidemic Model with Isolation in Multilayer Networks

    NASA Astrophysics Data System (ADS)

    Zuzek, L. G. Alvarez; Stanley, H. E.; Braunstein, L. A.

    2015-07-01

    The Susceptible-Infected-Recovered (SIR) model has successfully mimicked the propagation of such airborne diseases as influenza A (H1N1). Although the SIR model has recently been studied in a multilayer network configuration, in almost all of this research the isolation of infected individuals is disregarded. Hence we focus our study on an epidemic model in a two-layer network, and we use an isolation parameter w to measure the effect of quarantining infected individuals from both layers during an isolation period tw. We call this process the Susceptible-Infected-Isolated-Recovered (SIIR) model. Using the framework of link percolation, we find that isolation increases the critical epidemic threshold of the disease because the time in which the infection can spread is reduced. In this scenario we find that this threshold increases with w and tw. When the isolation period is maximal, there is a critical threshold for w above which the disease never becomes an epidemic. We simulate the process and find excellent agreement with the theoretical results.

  13. Epidemic Model with Isolation in Multilayer Networks

    PubMed Central

    Zuzek, L. G. Alvarez; Stanley, H. E.; Braunstein, L. A.

    2015-01-01

    The Susceptible-Infected-Recovered (SIR) model has successfully mimicked the propagation of such airborne diseases as influenza A (H1N1). Although the SIR model has recently been studied in a multilayer network configuration, in almost all of this research the isolation of infected individuals is disregarded. Hence we focus our study on an epidemic model in a two-layer network, and we use an isolation parameter w to measure the effect of quarantining infected individuals from both layers during an isolation period tw. We call this process the Susceptible-Infected-Isolated-Recovered (SIIR) model. Using the framework of link percolation, we find that isolation increases the critical epidemic threshold of the disease because the time in which the infection can spread is reduced. In this scenario we find that this threshold increases with w and tw. When the isolation period is maximal, there is a critical threshold for w above which the disease never becomes an epidemic. We simulate the process and find excellent agreement with the theoretical results. PMID:26173897

  14. Load forecasting using artificial neural networks

    SciTech Connect

    Pham, K.D.

    1995-12-31

    Artificial neural networks, modeled after their biological counterparts, have been successfully applied in many diverse areas including speech and pattern recognition, remote sensing, electrical power engineering, robotics and stock market forecasting. The most commonly used neural networks are those that gain knowledge from experience. Experience is presented to the network in the form of training data. Once trained, the neural network can recognize data that it has not seen before. This paper presents a fundamental introduction to the manner in which neural networks work and how to use them in load forecasting.

  15. Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks

    PubMed Central

    Maca, Petr; Pech, Pavel

    2016-01-01

    The presented paper compares forecasts of drought indices based on two different models of artificial neural networks. The first model is based on a feedforward multilayer perceptron, sANN, and the second is an integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI), derived for the period 1948–2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. The training of both neural network models was performed with the adaptive version of differential evolution, JADE. The comparison of the models was based on six model performance measures. The results of the drought index forecasts, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons. PMID:26880875
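    The training idea, evolving the weights of a small feedforward network with differential evolution, can be sketched with SciPy's standard DE routine in place of the adaptive JADE variant used in the paper; the lagged drought-index data below are synthetic.

    ```python
    # Fit a tiny feedforward network by differential evolution (stand-in for JADE).
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(12)
    X = rng.normal(size=(150, 3))                    # e.g. three lagged SPI/SPEI values
    y = np.tanh(X @ np.array([0.8, -0.5, 0.3]))      # synthetic next-month index

    n_hidden = 4
    def forward(w, X):
        W1 = w[:3 * n_hidden].reshape(3, n_hidden)
        b1 = w[3 * n_hidden:4 * n_hidden]
        W2 = w[4 * n_hidden:5 * n_hidden]
        b2 = w[5 * n_hidden]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def mse(w):
        return np.mean((forward(w, X) - y) ** 2)

    n_weights = 5 * n_hidden + 1
    result = differential_evolution(mse, bounds=[(-3, 3)] * n_weights,
                                    maxiter=200, seed=0, tol=1e-6)
    print(result.fun)                                # training MSE of the evolved network
    ```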

  16. Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks.

    PubMed

    Maca, Petr; Pech, Pavel

    2016-01-01

    The presented paper compares forecasts of drought indices based on two different models of artificial neural networks. The first model is based on the feedforward multilayer perceptron, sANN, and the second one is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI); both were derived for the period 1948-2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. The training of both neural network models was performed with the adaptive version of differential evolution, JADE. The comparison of the models was based on six model performance measures. The results of the drought index forecasts, expressed by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons. PMID:26880875
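    The following sketch mimics the training setup described above on a toy standardized-index series: the weights of a one-hidden-layer network are fitted by differential evolution rather than by gradient descent. SciPy's standard differential evolution is used as a stand-in for JADE, and the series, lag count, and network size are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)

# Toy standardized index series with lag dependence (stand-in for SPI/SPEI).
n = 400
z = np.zeros(n)
for t in range(2, n):
    z[t] = 0.7 * z[t - 1] - 0.2 * z[t - 2] + rng.normal(0, 0.5)

lags, hidden = 3, 4
X = np.column_stack([z[i:n - lags + i] for i in range(lags)])   # lagged inputs
y = z[lags:]                                                    # one-step-ahead target

def unpack(theta):
    w1 = theta[:lags * hidden].reshape(lags, hidden)
    b1 = theta[lags * hidden:lags * hidden + hidden]
    w2 = theta[lags * hidden + hidden:-1]
    b2 = theta[-1]
    return w1, b1, w2, b2

def mse(theta):
    w1, b1, w2, b2 = unpack(theta)
    pred = np.tanh(X @ w1 + b1) @ w2 + b2
    return np.mean((pred - y) ** 2)

dim = lags * hidden + hidden + hidden + 1
result = differential_evolution(mse, bounds=[(-3, 3)] * dim, maxiter=200, seed=0)
print("training MSE:", round(result.fun, 4))
```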

  17. Neural network modeling of emotion

    NASA Astrophysics Data System (ADS)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  18. Neural networks for aircraft system identification

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.

    1991-01-01

    Artificial neural networks offer some interesting possibilities for use in control. Our current research is on the use of neural networks on an aircraft model. The model can then be used in a nonlinear control scheme. The effectiveness of network training is demonstrated.

  19. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  20. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.

  1. Neural Networks in Nonlinear Aircraft Control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.

    1990-01-01

    Recent research indicates that artificial neural networks offer interesting learning or adaptive capabilities. The current research focuses on the potential for application of neural networks in a nonlinear aircraft control law. The current work has been to determine which networks are suitable for such an application and how they will fit into a nonlinear control law.

  2. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  3. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad-hoc a-priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
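    Since the pocket algorithm is named above as one of the stable perceptron variants used to train newly added units, here is a minimal sketch of that rule in isolation: run ordinary perceptron updates, but keep ("pocket") the best weight vector seen so far. The two-class Gaussian data and the bias handling are illustrative assumptions.

```python
import numpy as np

def pocket_perceptron(X, y, epochs=50, seed=0):
    """Pocket algorithm: perceptron updates, but retain the weight vector
    that has classified the most training points correctly so far."""
    rng = np.random.default_rng(seed)
    Xb = np.column_stack([X, np.ones(len(X))])    # fold bias into weights
    w = np.zeros(Xb.shape[1])
    pocket_w, pocket_correct = w.copy(), 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if y[i] * (Xb[i] @ w) <= 0:           # misclassified -> update
                w = w + y[i] * Xb[i]
                correct = np.sum(np.sign(Xb @ w) == y)
                if correct > pocket_correct:
                    pocket_w, pocket_correct = w.copy(), correct
    return pocket_w, pocket_correct / len(Xb)

# Noisy, not perfectly separable two-class data with labels -1/+1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1.2, (100, 2)), rng.normal(1, 1.2, (100, 2))])
y = np.array([-1] * 100 + [1] * 100)
w, acc = pocket_perceptron(X, y)
print("pocket training accuracy:", round(acc, 3))
```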

  4. Practical application of artificial neural networks in the neurosciences

    NASA Astrophysics Data System (ADS)

    Pinti, Antonio

    1995-04-01

    This article presents a practical application of artificial multi-layer perceptron (MLP) neural networks in neurosciences. The data that are processed are labeled data from the visual analysis of electrical signals of human sleep. The objective of this work is to automatically classify into sleep stages the electrophysiological signals recorded from electrodes placed on a sleeping patient. Two large data bases were designed by experts in order to realize this study. One data base was used to train the network and the other to test its generalization capacity. The classification results obtained with the MLP network were compared to a k-nearest-neighbor (kNN) non-parametric classification method. The MLP network gave a better result in terms of classification than the kNN method. Both classification techniques were implemented on a transputer system. With both networks in their final configuration, the MLP network was 160 times faster than the kNN model in classifying a sleep period.
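    A hedged re-creation of the MLP-versus-kNN comparison on synthetic feature vectors (the sleep EEG databases are not public). The feature generator, network size, and the use of scikit-learn are assumptions of the sketch, not the study's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for spectral features of 30 s sleep epochs, 5 stages.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000,
                                  random_state=0))
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

for name, clf in [("MLP", mlp), ("kNN", knn)]:
    clf.fit(X_tr, y_tr)
    print(name, "test accuracy:", round(clf.score(X_te, y_te), 3))
```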

  5. Real-time EFIT data reconstruction based on neural network in KSTAR

    NASA Astrophysics Data System (ADS)

    Kwak, Sehyun; Jeon, Youngmu; Ghim, Young-Chul

    2014-10-01

    Real-time EFIT data can be obtained using a neural network method. A non-linear mapping between diagnostic signals and shaping parameters of plasma equilibrium can be established by the neural network, particularly with the multilayer perceptron. The neural network is utilized to attain real-time EFIT data for Korea Superconducting Tokamak for Advanced Research (KSTAR). We collect and process existing datasets of measured data and EFIT data to train and test the neural network. Parameter scans such as the numbers of hidden layers and hidden units were performed in order to find the optimal condition. EFIT data from the neural network was compared with both offline EFIT and real-time EFIT data. Finally, we discuss advantages of using neural network reconstructed EFIT data for real-time plasma control.
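    The parameter scan over hidden layers and hidden units mentioned above can be expressed as a small grid search; the sketch below does this for a toy diagnostics-to-shape-parameter regression. The synthetic data, candidate architectures, and scikit-learn tooling are illustrative assumptions and not the KSTAR implementation.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Toy stand-in: 40 "diagnostic" channels nonlinearly related to a single
# shaping parameter; real inputs would be magnetic and other diagnostics.
X = rng.normal(size=(1000, 40))
y = np.tanh(X[:, :5].sum(axis=1)) + 0.1 * X[:, 5] ** 2 + rng.normal(0, 0.05, 1000)

pipe = Pipeline([("scale", StandardScaler()),
                 ("mlp", MLPRegressor(max_iter=500, random_state=0))])
grid = {"mlp__hidden_layer_sizes": [(10,), (30,), (30, 30), (60, 60)]}
search = GridSearchCV(pipe, grid, cv=3)
search.fit(X, y)
print("best architecture:", search.best_params_["mlp__hidden_layer_sizes"])
print("best CV R^2:", round(search.best_score_, 3))
```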

  6. Diagnosis of hepatitis by use of neural network learning

    NASA Astrophysics Data System (ADS)

    Fan, Hong-Qing; Zhang, Qy-zi

    1994-03-01

    An attempt is made to find a new way for better diagnosis of hepatitis through application of artificial neural network theory. Learning from a given sample set, the neural network is used to establish a nonlinear mapping between various factors, such as symptoms, signs, and laboratory tests, and the diagnosis of hepatitis. It is shown that the trained network and the weight values obtained after learning can be used to identify the equivalence class of a new hepatitis pattern. In this paper, the knowledge learning and learning algorithms used in diagnosis are mainly discussed; an optimal generalization algorithm, based on the error decrease algorithm and used to train multilayer feedforward networks, is presented, and the application results and their effectiveness are introduced.

  7. Inversion of parameters for semiarid regions by a neural network

    NASA Technical Reports Server (NTRS)

    Zurk, Lisa M.; Davis, Daniel; Njoku, Eni G.; Tsang, Leung; Hwang, Jenq-Neng

    1992-01-01

    Microwave brightness temperatures obtained from a passive radiative transfer model are inverted through use of a neural network. The model is applicable to semiarid regions and produces dual-polarized brightness temperatures for 6.6-, 10.7-, and 37-GHz frequencies. A range of temperatures is generated by varying three geophysical parameters over acceptable ranges: soil moisture, vegetation moisture, and soil temperature. A multilayered perceptron (MLP) neural network is trained with a subset of the generated temperatures, and the remaining temperatures are inverted using a backpropagation method. Several synthetic terrains are devised and inverted by the network under local constraints. All the inversions show good agreement with the original geophysical parameters, falling within 5 percent of the actual value of the parameter range.

  8. Adaptive optimization and control using neural networks

    SciTech Connect

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  9. Noise-robust realization of Turing-complete cellular automata by using neural networks with pattern representation

    NASA Astrophysics Data System (ADS)

    Oku, Makito; Aihara, Kazuyuki

    2010-11-01

    A modularly-structured neural network model is considered. Each module, which we call a ‘cell’, consists of two parts: a Hopfield neural network model and a multilayered perceptron. An array of such cells is used to simulate the Rule 110 cellular automaton with high accuracy even when all the units of neural networks are replaced by stochastic binary ones. We also find that noise not only degrades but also facilitates computation if the outputs of multilayered perceptrons are below the threshold required to update the states of the cells, which is a stochastic resonance in computation.
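    For reference, the target behaviour the array of cells is trained to reproduce is the elementary cellular automaton Rule 110; a plain NumPy implementation of that update rule (not the authors' neural model) is sketched below.

```python
import numpy as np

def rule110_step(cells):
    """One synchronous update of elementary cellular automaton Rule 110
    with periodic boundary conditions."""
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right            # neighbourhood pattern 0..7
    table = np.array([0, 1, 1, 1, 0, 1, 1, 0])    # Rule 110 outputs for 000..111
    return table[idx]

rng = np.random.default_rng(0)
cells = rng.integers(0, 2, 80)
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = rule110_step(cells)
```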

  10. Complexity matching in neural networks

    NASA Astrophysics Data System (ADS)

    Usefie Mafahim, Javad; Lambert, David; Zare, Marzieh; Grigolini, Paolo

    2015-01-01

    In the wide literature on the brain and neural network dynamics the notion of criticality is being adopted by an increasing number of researchers, with no general agreement on its theoretical definition, but with consensus that criticality makes the brain very sensitive to external stimuli. We adopt the complexity matching principle that the maximal efficiency of communication between two complex networks is realized when both of them are at criticality. We use this principle to establish the value of the neuronal interaction strength at which criticality occurs, yielding a perfect agreement with the adoption of temporal complexity as criticality indicator. The emergence of a scale-free distribution of avalanche size is proved to occur in a supercritical regime. We use an integrate-and-fire model where the randomness of each neuron is only due to the random choice of a new initial condition after firing. The new model shares with that proposed by Izhikevich the property of generating excessive periodicity, and with it the annihilation of temporal complexity at supercritical values of the interaction strength. We find that the concentration of inhibitory links can be used as a control parameter and that for a sufficiently large concentration of inhibitory links criticality is recovered again. Finally, we show that the response of a neural network at criticality to a harmonic stimulus is very weak, in accordance with the complexity matching principle.

  11. Advances in neural networks research: an introduction.

    PubMed

    Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar

    2009-01-01

    The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-the-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications. PMID:19632811

  12. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until target outputs close to the neural network output are achieved. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  13. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until target outputs close to the neural network output are achieved. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process.

  14. Neural network modeling of distillation columns

    SciTech Connect

    Baratti, R.; Vacca, G.; Servida, A.

    1995-06-01

    Neural network modeling (NNM) was implemented for monitoring and control applications on two actual distillation columns: the butane splitter tower and the gasoline stabilizer. The two distillation columns are in operation at the SARAS refinery. Results show that with proper implementation techniques NNM can significantly improve column operation. The common belief that neural networks can be used as black-box process models is not completely true. Effective implementation always requires a minimum degree of process knowledge to identify the relevant inputs to the net. After background and generalities on neural network modeling, the paper describes efforts on the development of neural networks for the two distillation units.

  15. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in analog custom VLSI is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  16. Aerodynamic Design Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.

    2003-01-01

    The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the most optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable fidelity analysis and reduces the cost of computation by using less-expensive, lower fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design space and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.

  17. Cyclone track forecasting based on satellite images using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Kovordányi, Rita; Roy, Chandan

    Many places around the world are exposed to tropical cyclones and associated storm surges. In spite of massive efforts, a great number of people die each year as a result of cyclone events. To mitigate this damage, improved forecasting techniques must be developed. The technique presented here uses artificial neural networks to interpret NOAA-AVHRR satellite images. A multi-layer neural network, resembling the human visual system, was trained to forecast the movement of cyclones based on satellite images. The trained network produced correct directional forecasts for 98% of the test images, thus showing a good generalization capability. The results indicate that multi-layer neural networks could be further developed into an effective tool for cyclone track forecasting using various types of remote sensing data. Future work includes extension of the present network to handle a wide range of cyclones and to take into account supplementary information, such as wind speeds, water temperature, humidity, and air pressure.

  18. Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks

    SciTech Connect

    Ziaul Huque

    2007-08-31

    This is the final technical report for the project titled 'Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks'. The aim of the project was to develop an efficient chemistry model for combustion simulations. The reduced chemistry model was developed mathematically without the need of having extensive knowledge of the chemistry involved. To aid in the development of the model, Neural Networks (NN) was used via a new network topology known as Non-linear Principal Components Analysis (NPCA). A commonly used Multilayer Perceptron Neural Network (MLP-NN) was modified to implement NPCA-NN. The training rate of NPCA-NN was improved with the Generalized Regression Neural Network (GRNN) based on kernel smoothing techniques. Kernel smoothing provides a simple way of finding structure in data set without the imposition of a parametric model. The trajectory data of the reaction mechanism was generated based on the optimization techniques of genetic algorithm (GA). The NPCA-NN algorithm was then used for the reduction of Dimethyl Ether (DME) mechanism. DME is a recently discovered fuel made from natural gas, (and other feedstock such as coal, biomass, and urban wastes) which can be used in compression ignition engines as a substitute for diesel. An in-house two-dimensional Computational Fluid Dynamics (CFD) code was developed based on Meshfree technique and time marching solution algorithm. The project also provided valuable research experience to two graduate students.

  19. Neural networks for nuclear spectroscopy

    SciTech Connect

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T.

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.

  20. Neural Network Classifies Teleoperation Data

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido

    1994-01-01

    Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.

  1. The Laplacian spectrum of neural networks

    PubMed Central

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286

  2. Ozone Modeling Using Neural Networks.

    NASA Astrophysics Data System (ADS)

    Narasimhan, Ramesh; Keller, Joleen; Subramaniam, Ganesh; Raasch, Eric; Croley, Brandon; Duncan, Kathleen; Potter, William T.

    2000-03-01

    Ozone models for the city of Tulsa were developed using neural network modeling techniques. The neural models were developed using meteorological data from the Oklahoma Mesonet and ozone, nitric oxide, and nitrogen dioxide (NO2) data from Environmental Protection Agency monitoring sites in the Tulsa area. An initial model trained with only eight surface meteorological input variables and NO2 was able to simulate ozone concentrations with a correlation coefficient of 0.77. The trained model was then used to evaluate the sensitivity to the primary variables that affect ozone concentrations. The most important variables (NO2, temperature, solar radiation, and relative humidity) showed response curves with strong nonlinear codependencies. Incorporation of ozone concentrations from the previous 3 days into the model increased the correlation coefficient to 0.82. As expected, the ozone concentrations correlated best with the most recent (1-day previous) values. The model's correlation coefficient was increased to 0.88 by the incorporation of upper-air data from the National Weather Service's Nested Grid Model. Sensitivity analysis for the upper-air variables indicated unusual positive correlations between ozone and the relative humidity from 500 hPa to the tropopause in addition to the other expected correlations with upper-air temperatures, vertical wind velocity, and 1000-500-hPa layer thickness. The neural model results are encouraging for the further use of these systems to evaluate complex parameter cosensitivities, and for the use of these systems in automated ozone forecast systems.

  3. Three dimensional living neural networks

    NASA Astrophysics Data System (ADS)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo; when exposed to retinoic acid the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSC) were utilized with the goal of future studies of neural networks fabricated from human iPSC derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  4. A neural network architecture for implementation of expert systems for real time monitoring

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.

    1991-01-01

    Since neural networks have the advantages of massive parallelism and simple architecture, they are good tools for implementing real time expert systems. In a rule based expert system, the antecedents of rules are in the conjunctive or disjunctive form. We constructed a multilayer feedforward type network in which neurons represent AND or OR operations of rules. Further, we developed a translator which can automatically map a given rule base into the network. Also, we proposed a new and powerful yet flexible architecture that combines the advantages of both fuzzy expert systems and neural networks. This architecture uses the fuzzy logic concepts to separate input data domains into several smaller and overlapped regions. Rule-based expert systems for time critical applications using neural networks, the automated implementation of rule-based expert systems with neural nets, and fuzzy expert systems vs. neural nets are covered.

  5. Neural networks for automated classification of ionospheric irregularities in HF radar backscattered signals

    NASA Astrophysics Data System (ADS)

    Wing, S.; Greenwald, R. A.; Meng, C.-I.; Sigillito, V. G.; Hutton, L. V.

    2003-08-01

    The classification of high frequency (HF) radar signals backscattered from ionospheric irregularities (clutter) into those suitable, or not, for further analysis is a time-consuming task, even for experts in the field. We tested several different feedforward neural networks on this task, investigating the effects of network type (single layer versus multilayer) and number of hidden nodes upon performance. As expected, the multilayer feedforward networks (MLFNs) outperformed the single-layer networks. The MLFNs achieved performance levels of 100% correct on the training set and up to 98% correct on the testing set. Comparable figures for the single-layer networks were 94.5% and 92%, respectively. When measures of sensitivity, specificity, and proportion of variance accounted for by the model are considered, the superiority of the MLFNs over the single-layer networks is much more striking. Our results suggest that such neural networks could aid many HF radar operations such as frequency search, space weather, etc.
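    The single-layer versus multilayer comparison reported above can be reproduced in miniature on any nonlinearly separable data set; the sketch below uses a synthetic two-moons set as a stand-in for the radar backscatter features, with scikit-learn assumed.

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Nonlinearly separable toy data standing in for "good"/"bad" backscatter.
X, y = make_moons(n_samples=2000, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

single = Perceptron(random_state=0).fit(X_tr, y_tr)
multi = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X_tr, y_tr)

print("single-layer accuracy:", round(single.score(X_te, y_te), 3))
print("multilayer accuracy: ", round(multi.score(X_te, y_te), 3))
```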

  6. Comparative study of different wavelet based neural network models for rainfall-runoff modeling

    NASA Astrophysics Data System (ADS)

    Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.

    2014-07-01

    The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to simultaneously deal with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role for successful implementation of the wavelet based rainfall-runoff artificial neural network models as it can lead to further enhancement in the model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of the hybrid wavelet based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and the Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous wavelet and the discrete wavelet transformation types. The performances of the 92 developed wavelet based neural network models with all the 23 mother wavelet functions are compared with the neural network models developed without wavelet transformations. It is found that among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The results also show that the pre-processing of input rainfall data by the wavelet transformation can significantly increase the performance of the MLPNN and the RBFNN rainfall-runoff models.
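    A hedged sketch of the general wavelet-plus-MLP pipeline discussed above: a stationary wavelet transform of a toy rainfall series provides extra input channels for a multilayer perceptron predicting next-day runoff. PyWavelets and scikit-learn are assumed, a single db4 wavelet stands in for the 23 mother wavelets compared in the study, and the rainfall-runoff generator is synthetic.

```python
import numpy as np
import pywt                                        # PyWavelets, assumed installed
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

# Toy daily rainfall and runoff series (runoff lags and smooths rainfall).
n = 1024
rain = rng.gamma(shape=0.3, scale=5.0, size=n)
runoff = np.convolve(rain, np.exp(-np.arange(10) / 3.0), mode="full")[:n]
runoff += rng.normal(0, 0.2, n)

# The stationary wavelet transform keeps one coefficient per time step,
# so each decomposition level can be used directly as an extra input series.
coeffs = pywt.swt(rain, "db4", level=3)            # [(cA3, cD3), ..., (cA1, cD1)]
features = [rain] + [c for pair in coeffs for c in pair]
X = np.column_stack(features)[:-1]                 # inputs at day t
y = runoff[1:]                                     # predict runoff at day t+1

split = int(0.8 * len(y))
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                   random_state=0))
model.fit(X[:split], y[:split])
print("test R^2:", round(model.score(X[split:], y[split:]), 3))
```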

  7. Artificial neural networks in neurosurgery.

    PubMed

    Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali

    2015-03-01

    Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review the relevant published articles that focused on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANN in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; (4) the use in the biomechanical assessments of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery. PMID:24987050

  8. Computational acceleration using neural networks

    NASA Astrophysics Data System (ADS)

    Cadaret, Paul

    2008-04-01

    The author's recent participation in the Small Business Innovation Research (SBIR) program has resulted in the development of a patent pending technology that enables the construction of very large and fast artificial neural networks. Through the use of UNICON's CogniMax pattern recognition technology we believe that systems can be constructed that exploit the power of "exhaustive learning" for the benefit of certain types of complex and slow computational problems. This paper presents a theoretical study that describes one potentially beneficial application of exhaustive learning. It describes how a very large and fast Radial Basis Function (RBF) artificial Neural Network (NN) can be used to implement a useful computational system. Viewed another way, it presents an unusual method of transforming a complex, always-precise, and slow computational problem into a fuzzy pattern recognition problem where other methods are available to effectively improve computational performance. The method described recognizes that the need for computational precision in a problem domain sometimes varies throughout the domain's Feature Space (FS) and high precision may only be needed in limited areas. These observations can then be exploited to the benefit of overall computational performance. Addressing computational reliability, we describe how existing always-precise computational methods can be used to reliably train the NN to perform the computational interpolation function. The author recognizes that the method described is not applicable to every situation, but over the last 8 months we have been surprised at how often this method can be applied to enable interesting and effective solutions.

  9. A new formulation for feedforward neural networks.

    PubMed

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    Feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, in this paper, two training methods involving a derivative-based (a variation of backpropagation) and a derivative-free optimization algorithms are employed. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization. PMID:21859600

  10. Drift chamber tracking with neural networks

    SciTech Connect

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.
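    The mapping the chip was trained for (drift distances in four planes to track slope and intercept) can be imitated in software as follows; the geometry, noise level, and the scikit-learn regressor are illustrative assumptions, whereas the original work ran on the analog ETANN hardware.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Four wire planes at heights z, each with a sense wire at x = 0.
z = np.array([0.0, 1.0, 2.0, 3.0])

def make_tracks(n):
    """Straight tracks x(z) = intercept + slope * z, kept on one side of the
    wires so this toy avoids the left-right drift ambiguity."""
    slope = rng.uniform(-0.3, 0.3, n)
    intercept = rng.uniform(1.0, 2.0, n)
    x = intercept[:, None] + slope[:, None] * z
    drift = np.abs(x) + rng.normal(0, 0.01, x.shape)   # drift distance + noise
    return drift, np.column_stack([slope, intercept])

X, y = make_tracks(20000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000,
                                 random_state=0))
net.fit(X_tr, y_tr)
err = net.predict(X_te) - y_te
print("slope RMS error:    ", round(float(np.sqrt(np.mean(err[:, 0] ** 2))), 4))
print("intercept RMS error:", round(float(np.sqrt(np.mean(err[:, 1] ** 2))), 4))
```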

  11. Coherence resonance in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  12. From Classical Neural Networks to Quantum Neural Networks

    NASA Astrophysics Data System (ADS)

    Tirozzi, B.

    2013-09-01

    First I give a brief description of the classical Hopfield model, introducing the fundamental concepts of patterns, retrieval, pattern recognition, neural dynamics, and capacity, and describe the fundamental results obtained in this field by Amit, Gutfreund and Sompolinsky [1] using the non-rigorous replica method and the rigorous version given by Pastur, Shcherbina and Tirozzi [2] using the cavity method. Then I give a formulation of the theory of Quantum Neural Networks (QNN) in terms of the XY model with Hebbian interaction. The problem of retrieval and storage is discussed. The retrieval states are the states of minimum energy. I apply the estimates found by Lieb [3], which give lower and upper bounds on the free energy and the expectation of the observables of the quantum model. I also discuss some experiments and the search for the ground state using Monte Carlo dynamics applied to the equivalent classical two-dimensional Ising model constructed by Suzuki et al. [6]. At the end there is a list of open problems.
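    For readers unfamiliar with the classical starting point reviewed above, here is a minimal Hopfield sketch of Hebbian storage and retrieval by asynchronous updates; the network size, number of patterns, and corruption level are illustrative, and the quantum XY formulation is not touched.

```python
import numpy as np

rng = np.random.default_rng(6)
N, P = 200, 10                                   # neurons, stored patterns

patterns = rng.choice([-1, 1], size=(P, N))
J = (patterns.T @ patterns) / N                  # Hebbian couplings
np.fill_diagonal(J, 0)

# Start from a corrupted version of pattern 0 (20% of spins flipped).
s = patterns[0].copy()
flip = rng.choice(N, N // 5, replace=False)
s[flip] *= -1

# Asynchronous (sequential) updates until no spin changes.
changed = True
while changed:
    changed = False
    for i in rng.permutation(N):
        new = 1 if J[i] @ s >= 0 else -1
        if new != s[i]:
            s[i] = new
            changed = True

overlap = (s @ patterns[0]) / N                  # 1.0 means perfect retrieval
print("overlap with stored pattern:", round(overlap, 3))
```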

  13. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. PMID:20713305

  14. Creativity in design and artificial neural networks

    SciTech Connect

    Neocleous, C.C.; Esat, I.I.; Schizas, C.N.

    1996-12-31

    The creativity phase is identified as an integral part of the design phase. The characteristics of creative persons that are relevant to designing artificial neural networks manifesting aspects of creativity are identified. Based on these identifications, a general framework of artificial neural network characteristics to implement such a goal is proposed.

  15. Self-organization of neural networks

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann

    1984-05-01

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (“brainwashing”) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.

  16. Advanced telerobotic control using neural networks

    NASA Technical Reports Server (NTRS)

    Pap, Robert M.; Atkins, Mark; Cox, Chadwick; Glover, Charles; Kissel, Ralph; Saeks, Richard

    1993-01-01

    Accurate Automation is designing and developing adaptive decentralized joint controllers using neural networks. We are then implementing these in hardware for the Marshall Space Flight Center PFMA as well as to be usable for the Remote Manipulator System (RMS) robot arm. Our design is being realized in hardware after completion of the software simulation. This is implemented using a Functional-Link neural network.

  17. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  18. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.

  19. Radiation Behavior of Analog Neural Network Chip

    NASA Technical Reports Server (NTRS)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1b), launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  20. Applications of Neural Networks in Finance.

    ERIC Educational Resources Information Center

    Crockett, Henry; Morrison, Ronald

    1994-01-01

    Discusses research with neural networks in the area of finance. Highlights include bond pricing, theoretical exposition of primary bond pricing, bond pricing regression model, and an example that created networks with corporate bonds and NeuralWare Neuralworks Professional H software using the back-propagation technique. (LRW)

  1. Neural network based architectures for aerospace applications

    NASA Technical Reports Server (NTRS)

    Ricart, Richard

    1987-01-01

    A brief history of the field of neural networks research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

  2. A Survey of Neural Network Publications.

    ERIC Educational Resources Information Center

    Vijayaraman, Bindiganavale S.; Osyk, Barbara

    This paper is a survey of publications on artificial neural networks published in business journals for the period ending July 1996. Its purpose is to identify and analyze trends in neural network research during that period. This paper shows which topics have been heavily researched, when these topics were researched, and how that research has…

  3. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  4. Relabeling exchange method (REM) for learning in neural networks

    NASA Astrophysics Data System (ADS)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: the simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we shall introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.

  5. Design of Jetty Piles Using Artificial Neural Networks

    PubMed Central

    2014-01-01

    To overcome the complication of the jetty pile design process, artificial neural networks (ANN) are adopted. To generate the training samples for the ANN, finite element (FE) analysis was performed 50 times for 50 different design cases. The trained ANN was verified with another FE analysis case and then used as a structural analyzer. A multilayer neural network (MBPNN) with two hidden layers was used. The MBPNN framework was defined with the lateral forces on the jetty structure and the type of piles as inputs and the stress ratio of the piles as output. The results from the MBPNN agree well with those from the FE analysis. Particularly for more complex models with hundreds of different design cases, the MBPNN could substitute for parametric studies with FE analysis, saving design time and cost. PMID:25177724

  6. Application of Artificial Neural Networks for estimating index floods

    NASA Astrophysics Data System (ADS)

    Šimor, Viliam; Hlavčová, Kamila; Kohnová, Silvia; Szolgay, Ján

    2012-12-01

    This article presents an application of Artificial Neural Networks (ANNs) and multiple regression models for estimating mean annual maximum discharge (index flood) at ungauged sites. Both approaches were tested for 145 small basins in Slovakia with areas ranging from 20 to 300 km². Using the objective clustering method, the catchments were divided into ten homogeneous pooling groups; for each pooling group, mutually independent predictors (catchment characteristics) were selected for both models. The neural network was applied as a simple multilayer perceptron with one hidden layer and with a back propagation learning algorithm. The hyperbolic tangent was used as the activation function in the hidden layer. Estimation of the index floods by the multiple regression models was based on deriving relationships between the index floods and the catchment predictors. The efficiencies of both approaches were tested by the Nash-Sutcliffe coefficient and the correlation coefficient. The results showed the comparative applicability of both models, with slightly better results for the index floods achieved using the ANN methodology.

  7. Mammographic mass detection using wavelets as input to neural networks.

    PubMed

    Kilic, Niyazi; Gorgel, Pelin; Ucan, Osman N; Sertbas, Ahmet

    2010-12-01

    The objective of this paper is to demonstrate the utility of artificial neural networks, in combination with wavelet transforms, for the detection of mammogram masses as malignant or benign. A total of 45 patients who had breast masses in their mammography were enrolled in the study. The neural network was trained on the wavelet based feature vectors extracted from the mammogram masses for both benign and malignant data. In this study, a multilayer ANN was trained with the Backpropagation, Conjugate Gradient and Levenberg-Marquardt algorithms, and a ten-fold cross validation procedure was used. A satisfying sensitivity of 89.2% was achieved with the Levenberg-Marquardt algorithm, since this algorithm combines the best features of the Gauss-Newton technique and the other steepest-descent algorithms and thus reaches the desired results very fast. PMID:20703600

  8. An architecture for designing fuzzy logic controllers using neural networks

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1991-01-01

    Described here is an architecture for designing fuzzy controllers through a hierarchical process of control rule acquisition and by using special classes of neural network learning techniques. A new method for learning to refine a fuzzy logic controller is introduced. A reinforcement learning technique is used in conjunction with a multi-layer neural network model of a fuzzy controller. The model learns by updating its prediction of the plant's behavior and is related to Sutton's Temporal Difference (TD) method. The method proposed here has the advantage of using the control knowledge of an experienced operator and fine-tuning it through the process of learning. The approach is applied to a cart-pole balancing system.

  9. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. PMID:26700535
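    The sketch below illustrates only the general idea of injecting noise during backpropagation training, by adding Gaussian noise to the hidden activations of a tiny fully connected network on synthetic data. It is not the paper's EM-based noise-benefit condition, its convolutional architecture, or its MNIST setup; the network, data, and noise placement are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-class synthetic data (two Gaussian blobs), labels 0/1.
X = np.vstack([rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

def train(noise_std, hidden=16, epochs=300, lr=0.5, seed=0):
    r = np.random.default_rng(seed)
    W1 = r.normal(0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        h_noisy = h + r.normal(0, noise_std, h.shape)   # noise injected here
        p = 1 / (1 + np.exp(-(h_noisy @ W2 + b2)))      # sigmoid output
        # Backpropagate the cross-entropy loss through the noisy forward pass.
        dz2 = (p - y) / len(y)
        dW2 = h_noisy.T @ dz2; db2 = dz2.sum()
        dh = np.outer(dz2, W2) * (1 - h ** 2)
        dW1 = X.T @ dh; db1 = dh.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    # Evaluate with a clean (noise-free) forward pass.
    p = 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))
    return np.mean((p > 0.5) == y)

for sigma in [0.0, 0.1, 0.3]:
    print(f"noise std {sigma}: training accuracy {train(sigma):.3f}")
```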

  10. Fast cosmological parameter estimation using neural networks

    NASA Astrophysics Data System (ADS)

    Auld, T.; Bridges, M.; Hobson, M. P.; Gull, S. F.

    2007-03-01

    We present a method for accelerating the calculation of cosmic microwave background (CMB) power spectra, matter power spectra and likelihood functions for use in cosmological parameter estimation. The algorithm, called COSMONET, is based on training a multilayer perceptron neural network and shares all the advantages of the recently released PICO algorithm of Fendt & Wandelt, but has several additional benefits in terms of simplicity, computational speed, memory requirements and ease of training. We demonstrate the capabilities of COSMONET by computing CMB power spectra over a box in the parameter space of flat Λ cold dark matter (ΛCDM) models containing the 3σ WMAP 1-year confidence region. We also use COSMONET to compute the WMAP 3-year (WMAP3) likelihood for flat ΛCDM models and show that the marginalized posteriors on derived parameters are very similar to those obtained using CAMB and the WMAP3 code. We find that the average error in the power spectra is typically 2-3 per cent of cosmic variance, and that COSMONET is ~7 × 10⁴ times faster than CAMB (for flat models) and ~6 × 10⁶ times faster than the official WMAP3 likelihood code. COSMONET and an interface to COSMOMC are publicly available at http://www.mrao.cam.ac.uk/software/cosmonet.
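    The emulator idea behind this kind of acceleration can be sketched in a few lines: sample an expensive function once, train a multilayer perceptron on the samples, then call the cheap network in place of the expensive routine. The "expensive" function below is a deliberately slow toy integral, not CAMB or the WMAP likelihood, and the scikit-learn regressor is an assumption of the sketch.

```python
import time
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def expensive(theta):
    """Stand-in for a costly likelihood/spectrum call (slow on purpose)."""
    a, b = theta
    x = np.linspace(0, 10, 200_000)                 # brute-force quadrature grid
    f = np.exp(-a * x) * np.cos(b * x)
    return f.sum() * (x[1] - x[0])                  # simple Riemann sum

rng = np.random.default_rng(8)
thetas = rng.uniform([0.5, 0.0], [2.0, 3.0], size=(400, 2))
values = np.array([expensive(t) for t in thetas])   # one-off training cost

emulator = make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 64),
                                      max_iter=5000, random_state=0))
emulator.fit(thetas, values)

# Compare one expensive call against the emulator on a new parameter point.
theta_new = np.array([[1.2, 1.7]])
t0 = time.perf_counter(); exact = expensive(theta_new[0]); t1 = time.perf_counter()
approx = emulator.predict(theta_new)[0];             t2 = time.perf_counter()
print(f"exact {exact:.5f} in {t1 - t0:.4f}s, emulated {approx:.5f} in {t2 - t1:.4f}s")
```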