Science.gov

Sample records for multilayer neural networks

  1. Target detection using multilayer feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Scherf, Alan V.; Scott, Peter A.

    1991-08-01

    Multilayer feedforward neural networks have been integrated with conventional image processing techniques to form a hybrid target detection algorithm for use in the F/A-18 FLIR pod advanced air-to-air track-while-scan mode. The network has been trained to detect and localize small targets in infrared imagery. The comparative performance of this target detection technique is evaluated.

  2. Membership generation using multilayer neural network

    NASA Technical Reports Server (NTRS)

    Kim, Jaeseok

    1992-01-01

    There has been intensive research in neural network applications to pattern recognition problems. Particularly, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
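
    A minimal sketch of the idea described above (illustrative NumPy code, not the paper's implementation): a small back-propagation network with sigmoid output units is trained on hypothetical one-dimensional feature data for two classes, and its output activations are then read directly as class membership values.

      # Illustrative sketch (not the paper's code): sigmoid outputs of a small
      # back-propagation network are read as fuzzy class membership values.
      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      # Hypothetical 1-D feature with two overlapping classes (one-hot targets).
      X = np.concatenate([rng.normal(-1.0, 0.7, 200), rng.normal(1.0, 0.7, 200)])[:, None]
      T = np.zeros((400, 2)); T[:200, 0] = 1.0; T[200:, 1] = 1.0

      W1, b1 = rng.normal(0, 1, (1, 8)), np.zeros(8)
      W2, b2 = rng.normal(0, 1, (8, 2)), np.zeros(2)
      eta = 0.5

      for _ in range(5000):                       # plain batch back-propagation
          H = sigmoid(X @ W1 + b1)                # hidden activations
          Y = sigmoid(H @ W2 + b2)                # sigmoid outputs in [0, 1]
          dY = (Y - T) * Y * (1 - Y)              # squared-error output delta
          dH = (dY @ W2.T) * H * (1 - H)          # hidden-layer delta
          W2 -= eta * H.T @ dY / len(X); b2 -= eta * dY.mean(0)
          W1 -= eta * X.T @ dH / len(X); b1 -= eta * dH.mean(0)

      # After convergence the trained network acts as a membership generator.
      for x in (-1.5, 0.0, 1.5):
          m = sigmoid(sigmoid(np.array([[x]]) @ W1 + b1) @ W2 + b2)[0]
          print(f"feature {x:+.1f} -> membership A {m[0]:.2f}, membership B {m[1]:.2f}")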

  3. Extrapolation limitations of multilayer feedforward neural networks

    NASA Technical Reports Server (NTRS)

    Haley, Pamela J.; Soloway, Donald

    1992-01-01

    The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.

  4. Blur identification by multilayer neural network based on multivalued neurons.

    PubMed

    Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T

    2008-05-01

    A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping enable complex problems to be modeled using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones. PMID:18467216
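
    A single-neuron sketch of the multi-valued neuron concept behind MLMVN (illustrative, not the authors' implementation): inputs and outputs live on the unit circle, the activation picks one of k sectors, and learning uses a derivative-free error-correction rule; the data and constants below are hypothetical.

      # Illustrative sketch of a single multi-valued neuron (MVN): discrete activation
      # over k sectors of the unit circle and a derivative-free error-correction rule.
      import numpy as np

      k = 4                                            # number of sectors (output classes)
      roots = np.exp(2j * np.pi * np.arange(k) / k)    # k-th roots of unity

      def activation(z):
          # Map the weighted sum z to the root of unity of the sector containing arg(z).
          sector = int(np.angle(z) % (2 * np.pi) // (2 * np.pi / k))
          return roots[min(sector, k - 1)]

      rng = np.random.default_rng(1)
      n = 3
      X = np.exp(2j * np.pi * rng.random((60, n)))     # inputs encoded on the unit circle

      # Targets produced by a hidden "teacher" neuron so the task is learnable.
      w_true = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
      d = np.array([activation(np.dot(w_true, np.concatenate(([1 + 0j], x)))) for x in X])

      w = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)   # student weights + bias
      for _ in range(300):                             # derivative-free error correction
          for x, target in zip(X, d):
              xa = np.concatenate(([1 + 0j], x))
              y = activation(np.dot(w, xa))
              w = w + (target - y) * np.conj(xa) / (n + 1)

      acc = np.mean([activation(np.dot(w, np.concatenate(([1 + 0j], x)))) == t for x, t in zip(X, d)])
      print("training accuracy:", acc)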

  5. Incremental communication for multilayer neural networks: error analysis.

    PubMed

    Ghorbani, A A; Bhavsar, V C

    1998-01-01

    Artificial neural networks (ANNs) involve a large amount of internode communications. To reduce the communication cost as well as the time of learning process in ANNs, we earlier proposed (1995) an incremental internode communication method. In the incremental communication method, instead of communicating the full magnitude of the output value of a node, only the increment or decrement to its previous value is sent to a communication link. In this paper, the effects of the limited precision incremental communication method on the convergence behavior and performance of multilayer neural networks are investigated. The nonlinear aspects of representing the incremental values with reduced (limited) precision for the commonly used error backpropagation training algorithm are analyzed. It is shown that the nonlinear effect of small perturbations in the input(s)/output of a node does not cause instability. The analysis is supported by simulation studies of two problems. The simulation results demonstrate that the limited precision errors are bounded and do not seriously affect the convergence of multilayer neural networks. PMID:18252431
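
    A minimal sketch of the incremental communication idea (illustrative, not the authors' simulation): only the quantized increment of a node's output is placed on the link, the receiver accumulates the increments, and the reconstruction error stays bounded by the quantization step.

      # Sketch: a node communicates only the limited-precision increment of its output;
      # the receiver accumulates the increments, so the error stays bounded.
      import numpy as np

      step = 2.0 ** -8                                 # limited precision of increments
      rng = np.random.default_rng(0)
      outputs = np.cumsum(rng.normal(0, 0.05, 1000))   # a node's output over iterations

      reconstructed = 0.0                              # value tracked at the receiver
      received = []
      for y in outputs:
          inc = np.round((y - reconstructed) / step) * step   # quantized increment on the link
          reconstructed += inc                                # both sides update identically
          received.append(reconstructed)

      err = np.abs(np.array(received) - outputs)
      print(f"max reconstruction error = {err.max():.6f} (quantization step = {step})")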

  6. Learning with regularizers in multilayer neural networks

    NASA Astrophysics Data System (ADS)

    Saad, David; Rattray, Magnus

    1998-02-01

    We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labeled by a two-layer teacher network with an arbitrary number of hidden units that may be corrupted by Gaussian output noise. We examine the effect of weight decay regularization on the dynamical evolution of the order parameters and generalization error in various phases of the learning process, in both noiseless and noisy scenarios.

  7. Multilayer neural networks with extensively many hidden units.

    PubMed

    Rosen-Zvi, M; Engel, A; Kanter, I

    2001-08-13

    The information processing abilities of a multilayer neural network with a number of hidden units scaling as the input dimension are studied using statistical mechanics methods. The mapping from the input layer to the hidden units is performed by general symmetric Boolean functions, whereas the hidden layer is connected to the output by either discrete or continuous couplings. Introducing an overlap in the space of Boolean functions as order parameter, the storage capacity is found to scale with the logarithm of the number of implementable Boolean functions. The generalization behavior is smooth for continuous couplings and shows a discontinuous transition to perfect generalization for discrete ones. PMID:11497920

  8. Multi-Layer and Recursive Neural Networks for Metagenomic Classification.

    PubMed

    Ditzler, Gregory; Polikar, Robi; Rosen, Gail

    2015-09-01

    Recent advances in machine learning, specifically in deep learning with neural networks, have made a profound impact on fields such as natural language processing, image classification, and language modeling; however, the feasibility and potential benefits of these approaches for metagenomic data analysis have been largely under-explored. Deep learning exploits many layers of learning nonlinear feature representations, typically in an unsupervised fashion, and recent results have shown outstanding generalization performance on previously unseen data. Furthermore, some deep learning methods can also represent the structure in a data set. Consequently, deep learning and neural networks may prove to be an appropriate approach for metagenomic data. To determine whether such approaches are indeed appropriate for metagenomics, we experiment with two deep learning methods: i) a deep belief network, and ii) a recursive neural network, the latter of which provides a tree representing the structure of the data. We compare these approaches to the standard multi-layer perceptron, which has been well established in the machine learning community as a powerful prediction algorithm, though its presence is largely missing in the metagenomics literature. We find that traditional neural networks can be quite powerful classifiers on metagenomic data compared to baseline methods, such as random forests. On the other hand, while the deep learning approaches did not result in improvements to the classification accuracy, they do provide the ability to learn hierarchical representations of a data set that standard classification methods do not allow. Our goal in this effort is not to determine the best algorithm in terms of accuracy, as that depends on the specific application, but rather to highlight the benefits and drawbacks of each of the approaches we discuss and to provide insight on how they can be improved for predictive metagenomic analysis. PMID:26316190

  9. Inversion of Self Potential Anomalies with Multilayer Perceptron Neural Networks

    NASA Astrophysics Data System (ADS)

    Kaftan, Ilknur; Sındırgı, Petek; Akdemir, Özer

    2014-08-01

    This study investigates the inverse solution for a buried, polarized sphere-shaped body using the self-potential method via multilayer perceptron neural networks (MLPNN). The polarization angle (α), depth to the centre of the sphere (h), electric dipole moment (K) and the zero distance from the origin (x0) were estimated. To test the success of the MLPNN for the sphere model, the parameters were also estimated by the traditional Damped Least Squares (Levenberg-Marquardt) inversion technique (DLS). The MLPNN was first tested on a synthetic example. The performance of the method was also tested for two S/N ratios (5% and 10%) by adding noise to the same synthetic data; the model parameters estimated with the MLPNN and DLS methods are satisfactory. The MLPNN was also applied to a field data example from the Urla district of İzmir, Turkey, where two cross-sections were evaluated by MLPNN and DLS, and the two methods showed good agreement.
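
    An illustrative sketch of the inversion idea (not the authors' code): the self-potential anomaly of a polarized sphere is commonly modeled as V(x) = K[(x - x0)cos(α) + h sin(α)] / [((x - x0)^2 + h^2)^(3/2)]; synthetic curves generated with this assumed forward formula and hypothetical parameter ranges are used to train a scikit-learn MLP that maps an anomaly profile back to (α, h, K, x0).

      # Illustrative sketch (assumed forward formula, hypothetical parameter ranges,
      # scikit-learn instead of the authors' code): train an MLP to invert synthetic
      # self-potential anomalies of a buried polarized sphere.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def sp_sphere(x, alpha, h, K, x0):
          r2 = (x - x0) ** 2 + h ** 2
          return K * ((x - x0) * np.cos(alpha) + h * np.sin(alpha)) / r2 ** 1.5

      rng = np.random.default_rng(0)
      xs = np.linspace(-100.0, 100.0, 81)              # profile coordinates (m)

      params, curves = [], []
      for _ in range(2000):
          p = [rng.uniform(0.1, 1.4),                  # polarization angle alpha (rad)
               rng.uniform(5.0, 40.0),                 # depth to centre h (m)
               rng.uniform(100.0, 5000.0),             # electric dipole moment K
               rng.uniform(-20.0, 20.0)]               # origin shift x0 (m)
          params.append(p)
          curves.append(sp_sphere(xs, *p))
      P, C = np.array(params), np.array(curves)

      # Standardize inputs and targets, then fit the MLP as an inverse operator.
      Cm, Cs = C.mean(0), C.std(0) + 1e-9
      Pm, Ps = P.mean(0), P.std(0)
      mlp = MLPRegressor(hidden_layer_sizes=(40, 20), max_iter=2000, random_state=0)
      mlp.fit((C - Cm) / Cs, (P - Pm) / Ps)

      true = np.array([0.7, 20.0, 1500.0, 5.0])        # hypothetical "unknown" sphere
      est = mlp.predict(((sp_sphere(xs, *true) - Cm) / Cs).reshape(1, -1))[0] * Ps + Pm
      print("true:", true, "estimated:", np.round(est, 2))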

  10. Optical proximity correction using a multilayer perceptron neural network

    NASA Astrophysics Data System (ADS)

    Luo, Rui

    2013-07-01

    Optical proximity correction (OPC) is one of the resolution enhancement techniques (RETs) in optical lithography, where the mask pattern is modified to improve the output pattern fidelity. Algorithms are needed to generate the modified mask pattern automatically and efficiently. In this paper, a multilayer perceptron (MLP) neural network (NN) is used to synthesize the mask pattern. We employ the pixel-based approach in this work. The MLP takes the pixel values of the desired output wafer pattern as input, and outputs the optimal mask pixel values. The MLP is trained with the backpropagation algorithm, with a training set retrieved from the desired output pattern, and the optimal mask pattern obtained by the model-based method. After training, the MLP is able to generate the optimal mask pattern non-iteratively with good pattern fidelity.

  11. Robust local stability of multilayer recurrent neural networks.

    PubMed

    Suykens, J K; De Moor, B; Vandewalle, J

    2000-01-01

    In this paper we derive a condition for robust local stability of multilayer recurrent neural networks with two hidden layers. The stability condition follows from linking theory about linearization, robustness analysis of linear systems under nonlinear perturbation, and matrix inequalities. A characterization of the basin of attraction of the origin is given in terms of the level set of a quadratic Lyapunov function. As in NLq theory, local stability is imposed around the origin and the apparent basin of attraction is made large by applying the criterion, while the proven basin of attraction is relatively small due to the conservatism of the criterion. Modifying dynamic backpropagation using the new stability condition is discussed and illustrated by simulation examples. PMID:18249754

  12. Parallel multilayer perceptron neural network used for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Garcia-Salgado, Beatriz P.; Ponomaryov, Volodymyr I.; Robles-Gonzalez, Marco A.

    2016-04-01

    This study focuses on time optimization for the classification problem, presenting a comparison of five Artificial Neural Network Multilayer Perceptron (ANN-MLP) architectures. We use the Artificial Neural Network (ANN) because it can recognize patterns in data in less time. Time and classification accuracy are considered together in the comparison. For the time comparison, two computational paradigms are analysed for each ANN-MLP architecture using three schemes. Firstly, sequential programming is applied on a single CPU core. Secondly, parallel programming is employed on a multi-core CPU architecture. Finally, a programming model running on a GPU architecture is implemented. Furthermore, the classification accuracy is compared between the proposed five ANN-MLP architectures and a state-of-the-art Support Vector Machine (SVM) under three classification frames: 50%, 60% and 70% of the data set's observations are randomly selected to train the classifiers. A visual comparison of the classified results is also presented. The Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) criteria are also calculated to characterise visual perception. The images employed were acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the Reflective Optics System Imaging Spectrometer (ROSIS) and the Hyperion sensor.

  13. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks

    PubMed Central

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm and inherit its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  14. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    PubMed

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm and inherit its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  15. Unsupervised classification of neural spikes with a hybrid multilayer artificial neural network.

    PubMed

    García, P; Suárez, C P; Rodríguez, J; Rodríguez, M

    1998-07-01

    The understanding of brain structure and function and its computational style is one of the biggest challenges in both Neuroscience and Neural Computation. In order to reach this goal and to test the predictions of neural network modeling, it is necessary to observe the activity of neural populations. In this paper we propose a hybrid modular computational system for spike classification of multiunit recordings. It works with no knowledge about the waveform and consists of two modules: a Preprocessing (Segmentation) module, which performs the detection and centering of spike vectors using programmed computation; and a Processing (Classification) module, which implements the general approach of neural classification (feature extraction, clustering and discrimination) by means of a hybrid unsupervised multilayer artificial neural network (HUMANN). The operations of this artificial neural network on the spike vectors are: (i) compression by a Sanger layer from a 70-point vector to a five-component principal component vector; (ii) waveform analysis by a Kohonen layer; (iii) rejection of electrical noise and overlapping spikes by a previously unreported artificial neural network layer named the Tolerance layer; and (iv) labeling of the spikes into spike classes by a Labeling layer. Each layer of the system has a specific unsupervised learning rule that progressively modifies itself until the performance of the layer has been automatically optimized. The procedure showed high sensitivity and specificity, also when working with signals containing four spike types. PMID:10223516

  16. Classification of fuels using multilayer perceptron neural networks

    NASA Astrophysics Data System (ADS)

    Ozaki, Sérgio T. R.; Wiziack, Nadja K. L.; Paterno, Leonardo G.; Fonseca, Fernando J.

    2009-05-01

    Electrical impedance data obtained with an array of conducting polymer chemical sensors were used by an artificial neural network (ANN) to classify fuel adulteration. Real samples were classified with accuracy greater than 90% into two groups: approved and adulterated.

  17. Classification of fuels using multilayer perceptron neural networks

    SciTech Connect

    Ozaki, Sergio T. R.; Wiziack, Nadja K. L.; Paterno, Leonardo G.; Fonseca, Fernando J.

    2009-05-23

    Electrical impedance data obtained with an array of conducting polymer chemical sensors were used by an artificial neural network (ANN) to classify fuel adulteration. Real samples were classified with accuracy greater than 90% into two groups: approved and adulterated.

  18. When are two multi-layer cellular neural networks the same?

    PubMed

    Ban, Jung-Chao; Chang, Chih-Hung

    2016-07-01

    This paper aims to characterize whether a multi-layer cellular neural network is of deep architecture; namely, when can an n-layer cellular neural network be replaced by an m-layer cellular neural network for m < n? In addition, the structure of such a multi-layer cellular neural network is revealed. PMID:27085113

  19. On the capacity of multilayer neural networks trained with backpropagation.

    PubMed

    Miranda, E N

    2000-08-01

    The capacity of a layered neural network for learning hetero-associations is studied numerically as a function of the number M of hidden neurons. We find that there is a sharp change in the learning ability of the network as the number of hetero-associations increases. This fact allows us to define a maximum capacity C for a given architecture. It is found that C grows logarithmically with M. PMID:11052415

  20. Neural networks and chaos: Construction, evaluation of chaotic networks, and prediction of chaos with multilayer feedforward networks

    NASA Astrophysics Data System (ADS)

    Bahi, Jacques M.; Couchot, Jean-François; Guyeux, Christophe; Salomon, Michel

    2012-03-01

    Many research works deal with chaotic neural networks for various fields of application. Unfortunately, up to now these networks have usually been claimed to be chaotic without any mathematical proof. The purpose of this paper is to establish, based on a rigorous theoretical framework, an equivalence between chaotic iterations according to Devaney and a particular class of neural networks. On the one hand, we show how to build such a network; on the other hand, we provide a method to check whether a given neural network is chaotic. Finally, the ability of classical feedforward multilayer perceptrons to learn sets of data obtained from a dynamical system is considered. Various Boolean functions are iterated on finite states, and iterations of some of them are proven to be chaotic in the sense of Devaney. In that context, important differences occur in the training process, establishing with various neural networks that chaotic behaviors are far more difficult to learn.

  1. Weight-decay induced phase transitions in multilayer neural networks

    NASA Astrophysics Data System (ADS)

    Ahr, M.; Biehl, M.; Schlösser, E.

    1999-07-01

    We investigate layered neural networks with differentiable activation function and student vectors without normalization constraint by means of equilibrium statistical physics. We consider the learning of perfectly realizable rules and find that the length of student vectors becomes infinite, unless a proper weight decay term is added to the energy. Then, the system undergoes a first-order phase transition between states with very long student vectors and states where the lengths are comparable to those of the teacher vectors. Additionally, in both configurations there is a phase transition between a specialized and an unspecialized phase. An anti-specialized phase with long student vectors exists in networks with a small number of hidden units.

  2. Geomagnetic storms prediction from InterMagnetic Observatories data using the Multilayer Perceptron neural network

    NASA Astrophysics Data System (ADS)

    Ouadfeul, S.; Aliouane, L.; Tourtchine, V.

    2013-09-01

    In this paper, an attempt at geomagnetic storm prediction is implemented by analyzing International Real-Time Magnetic Observatory Network data using an Artificial Neural Network (ANN). The implemented method is based on predicting the future horizontal geomagnetic field components using a Multilayer Perceptron (MLP) neural network model. The input is the time and the outputs are the X and Y magnetic field components. Application to geomagnetic data of May 2002 shows that the implemented ANN model can greatly help geomagnetic storm prediction.

  3. Incorporation of liquid-crystal light valve nonlinearities in optical multilayer neural networks.

    PubMed

    Moerland, P D; Fiesler, E; Saxena, I

    1996-09-10

    Sigmoidlike activation functions, as available in analog hardware, differ in various ways from the standard sigmoidal function because they are usually asymmetric, truncated, and have a nonstandard gain. We present an adaptation of the backpropagation learning rule to compensate for these nonstandard sigmoids. This method is applied to multilayer neural networks with all-optical forward propagation and liquid-crystal light valves (LCLVs) as optical thresholding devices. The results of simulations of a backpropagation neural network with five different LCLV response curves as activation functions are presented. Although LCLVs perform poorly with the standard backpropagation algorithm, it is shown that our adapted learning rule performs well with these LCLV curves. PMID:21127522
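
    A minimal sketch of the adapted delta rule for a non-standard sigmoid (illustrative constants, not a measured LCLV response curve): the activation is asymmetric and truncated between lo and hi with gain g, and the back-propagated delta uses the derivative of that device-like curve instead of the standard y(1 - y).

      # Sketch: back-propagation delta computed with the derivative of a non-standard
      # sigmoid f(x) = lo + (hi - lo) / (1 + exp(-g * (x - c))), whose asymmetry (lo, hi),
      # gain g and offset c mimic an analog device curve (assumed values below).
      import numpy as np

      lo, hi, g, c = 0.05, 0.85, 3.0, 0.4              # assumed device-like constants

      def f(x):
          return lo + (hi - lo) / (1.0 + np.exp(-g * (x - c)))

      def f_prime(y):
          # Derivative expressed through the activation value y = f(x) itself,
          # generalizing the usual y * (1 - y) of the standard logistic.
          return g * (y - lo) * (hi - y) / (hi - lo)

      # One adapted delta-rule step for a single output unit:
      x = np.array([0.2, 0.7, 0.1])                    # inputs to the unit
      w = np.array([0.5, -0.3, 0.8])                   # weights
      target, eta = 0.6, 0.5

      y = f(np.dot(w, x))
      delta = (target - y) * f_prime(y)                # uses the device derivative
      w = w + eta * delta * x
      print("output", round(y, 3), "updated weights", np.round(w, 3))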

  4. Multi-layer neural networks for robot control

    NASA Technical Reports Server (NTRS)

    Pourboghrat, Farzad

    1989-01-01

    Two neural learning controller designs for manipulators are considered. The first design is based on a neural inverse-dynamics system. The second is the combination of the first one with a neural adaptive state feedback system. Both types of controllers enable the manipulator to perform any given task very well after a period of training and to do other untrained tasks satisfactorily. The second design also enables the manipulator to compensate for unpredictable perturbations.

  5. Optimal Parameter for the Training of Multilayer Perceptron Neural Networks by Using Hierarchical Genetic Algorithm

    SciTech Connect

    Orozco-Monteagudo, Maykel; Taboada-Crispi, Alberto; Gutierrez-Hernandez, Liliana

    2008-11-06

    This paper deals with the controversial topic of the selection of the parameters of a genetic algorithm, in this case hierarchical, used for the training of multilayer perceptron neural networks for binary classification. The parameters to select are the crossover and mutation probabilities of the control and parametric genes and the permanency percent. The results can be considered a guide for using this kind of algorithm.

  6. Existence and stability of traveling wave solutions for multilayer cellular neural networks

    NASA Astrophysics Data System (ADS)

    Hsu, Cheng-Hsiung; Lin, Jian-Jhong; Yang, Tzi-Sheng

    2015-08-01

    The purpose of this article is to investigate the existence and stability of traveling wave solutions for one-dimensional multilayer cellular neural networks. We first establish the existence of traveling wave solutions using the truncated technique. Then we study the asymptotic behaviors of solutions for the Cauchy problem of the neural model. Applying two kinds of comparison principles and the weighted energy method, we show that all solutions of the Cauchy problem converge exponentially to the traveling wave solutions provided that the initial data belong to a suitable weighted space.

  7. Application of Multilayer Feedforward Neural Networks to Precipitation Cell-Top Altitude Estimation

    NASA Technical Reports Server (NTRS)

    Spina, Michelle S.; Schwartz, Michael J.; Staelin, David H.; Gasiewski, Albin J.

    1998-01-01

    The use of passive 118-GHz O2 observations of rain cells for precipitation cell-top altitude estimation is demonstrated by using a multilayer feed forward neural network retrieval system. Rain cell observations at 118 GHz were compared with estimates of the cell-top altitude obtained by optical stereoscopy. The observations were made with 2 4 km horizontal spatial resolution by using the Millimeter-wave Temperature Sounder (MTS) scanning spectrometer aboard the NASA ER-2 research aircraft during the Genesis of Atlantic Lows Experiment (GALE) and the COoperative Huntsville Meteorological EXperiment (COHMEX) in 1986. The neural network estimator applied to MTS spectral differences between clouds, and nearby clear air yielded an rms discrepancy of 1.76 km for a combined cumulus, mature, and dissipating cell set and 1.44 km for the cumulus-only set. An improvement in rms discrepancy to 1.36 km was achieved by including additional MTS information on the absolute atmospheric temperature profile. An incremental method for training neural networks was developed that yielded robust results, despite the use of as few as 56 training spectra. Comparison of these results with a nonlinear statistical estimator shows that superior results can be obtained with a neural network retrieval system. Imagery of estimated cell-top altitudes was created from 118-GHz spectral imagery gathered from CAMEX, September through October 1993, and from cyclone Oliver, February 7, 1993.

  8. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN. PMID:18263301
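
    A minimal sketch of the stochastic-computing representation described above (illustrative, not the authors' VHDL model): values in [0, 1] are encoded as pulse occurrence probabilities in pseudorandom sequences, so multiplication reduces to AND-ing two independent pulse streams and reading the average pulse rate.

      # Sketch of stochastic computing with pseudorandom pulse sequences: a value in
      # [0, 1] is the probability of a pulse; multiplying two values reduces to
      # AND-ing two independent pulse streams.
      import numpy as np

      rng = np.random.default_rng(0)
      L = 4096                                          # pulse sequence length

      def encode(p):
          # Bernoulli pulse stream whose occurrence rate estimates p.
          return rng.random(L) < p

      a, b = 0.6, 0.35
      prod_stream = encode(a) & encode(b)               # AND gate = multiplication
      estimate = prod_stream.mean()                     # average pulse occurrence rate

      print(f"exact product {a * b:.4f}, stochastic estimate {estimate:.4f}")
      print(f"approx. std. dev. of the estimate ~ {np.sqrt(a * b * (1 - a * b) / L):.4f}")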

  9. Classification of normal and abnormal electrogastrograms using multilayer feedforward neural networks.

    PubMed

    Lin, Z; Maris, J; Hermans, L; Vandewalle, J; Chen, J D

    1997-05-01

    A neural network approach is proposed for the automated classification of the normal and abnormal EGG. Two learning algorithms, the quasi-Newton and the scaled conjugate gradient method for the multilayer feedforward neural networks (MFNN), are introduced and compared with the error backpropagation algorithm. The configurations of the MFNN are determined by experiment. The raw EGG data, its power spectral data, and its autoregressive moving average (ARMA) modelling parameters are used as the input to the MFNN and compared with each other. Three indexes (the percent correct, sum-squared error and complexity per iteration) are used to evaluate the performance of each learning algorithm. The results show that the scaled conjugate gradient algorithm performs best, in that it is robust and provides a super-linear convergence rate. The power spectral representation and the ARMA modelling parameters of the EGG are found to be better types of the input to the network for this specific application, both yielding a percent correctness of 95% on the test set. Although the results are focused on the classification of the EGG, this paper should provide useful information for the classification of other biomedical signals. PMID:9246852

  10. Analysis of (7)Be behaviour in the air by using a multilayer perceptron neural network.

    PubMed

    Samolov, A; Dragović, S; Daković, M; Bačić, G

    2014-11-01

    A multilayer perceptron artificial neural network (ANN) model for the prediction of the (7)Be behaviour in the air as the function of meteorological parameters was developed. The model was optimized and tested using (7)Be activity concentrations obtained by standard gamma-ray spectrometric analysis of air samples collected in Belgrade (Serbia) during 2009-2011 and meteorological data for the same period. Good correlation (r = 0.91) between experimental values of (7)Be activity concentrations and those predicted by ANN was obtained. The good performance of the model in prediction of (7)Be activity concentrations could provide basis for construction of models which would forecast behaviour of other airborne radionuclides. PMID:25106024

  11. Near-infrared spectroscopic measurements of blood analytes using multi-layer perceptron neural networks.

    PubMed

    Kalamatianos, Dimitrios; Liatsis, Panos; Wellstead, Peter E

    2006-01-01

    Near-infrared (NIR) spectroscopy is being applied to the solution of problems in many areas of biomedical and pharmaceutical research. In this paper we investigate the use of NIR spectroscopy as an analytical tool to quantify concentrations of urea, creatinine, glucose and oxyhemoglobin (HbO2). Measurements have been made in vitro with a portable spectrometer developed in our labs that consists of a two-beam interferometer operating in the range of 800-2300 nm. For the data analysis, a pattern recognition philosophy was used, with a preprocessing stage and a multi-layer perceptron (MLP) neural network for the measurement stage. Results show that the interferogram signatures of the above compounds are sufficiently strong in that spectral range. Measurements of three different concentrations were possible with a mean squared error (MSE) of the order of 10^-6. PMID:17947035

  12. A selective learning method to improve the generalization of multilayer feedforward neural networks.

    PubMed

    Galván, I M; Isasi, P; Aler, R; Valls, J M

    2001-04-01

    Multilayer feedforward neural networks with backpropagation algorithm have been used successfully in many applications. However, the level of generalization is heavily dependent on the quality of the training data. That is, some of the training patterns can be redundant or irrelevant. It has been shown that with careful dynamic selection of training patterns, better generalization performance may be obtained. Nevertheless, generalization is carried out independently of the novel patterns to be approximated. In this paper, we present a learning method that automatically selects the training patterns more appropriate to the new sample to be predicted. This training method follows a lazy learning strategy, in the sense that it builds approximations centered around the novel sample. The proposed method has been applied to three different domains: two artificial approximation problems and a real time series prediction problem. Results have been compared to standard backpropagation using the complete training data set and the new method shows better generalization abilities. PMID:14632169
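
    A minimal sketch of the lazy, locally selective strategy (illustrative; the paper's own selection mechanism is more elaborate): for each novel sample, only the k training patterns closest to it are used to fit a small network, so the approximation is centred around the query; the toy data and the choice of k are hypothetical.

      # Sketch of lazy, locally selective training: for each query, train a small MLP
      # only on the k training patterns closest to the query.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, (500, 1))
      y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=500)    # noisy toy target

      def lazy_predict(x_query, k=50):
          idx = np.argsort(np.abs(X[:, 0] - x_query))[:k]   # k nearest training patterns
          local = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
          local.fit(X[idx], y[idx])                         # approximation centred on the query
          return local.predict([[x_query]])[0]

      for q in (-2.0, 0.5, 2.5):
          print(f"x = {q:+.1f}: lazy prediction {lazy_predict(q):+.3f}, sin(x) {np.sin(q):+.3f}")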

  13. Intelligent detection of impulse noise using multilayer neural network with multi-valued neurons

    NASA Astrophysics Data System (ADS)

    Aizenberg, Igor; Wallace, Glen

    2012-03-01

    In this paper, we solve the impulse noise detection problem using an intelligent approach. We use a multilayer neural network based on multi-valued neurons (MLMVN) as an intelligent impulse noise detector. MLMVN was already used for point spread function identification and intelligent edge enhancement. So it is very attractive to apply it for solving another image processing problem. The main result, which is presented in the paper, is the proven ability of MLMVN to detect impulse noise on different images after a learning session with the data taken just from a single noisy image. Hence MLMVN can be used as a robust impulse detector. It is especially efficient for salt and pepper noise detection and outperforms all competitive techniques. It also shows comparable results in detection of random impulse noise. Moreover, for random impulse noise detection, MLMVN with the output neuron with a periodic activation function is used for the first time.

  14. A design philosophy for multi-layer neural networks with applications to robot control

    NASA Technical Reports Server (NTRS)

    Vadiee, Nader; Jamshidi, MO

    1989-01-01

    A system is proposed which receives input information from many sensors that may have diverse scaling, dimension, and data representations. The proposed system tolerates sensory information with faults. The proposed self-adaptive processing technique has great promise in integrating the techniques of artificial intelligence and neural networks in an attempt to build a more intelligent computing environment. The proposed architecture can provide a detailed decision tree based on the input information, information stored in a long-term memory, and the adapted rule-based knowledge. A mathematical model for analysis will be obtained to validate the cited hypotheses. An extensive software program will be developed to simulate a typical example of pattern recognition problem. It is shown that the proposed model displays attention, expectation, spatio-temporal, and predictory behavior which are specific to the human brain. The anticipated results of this research project are: (1) creation of a new dynamic neural network structure, and (2) applications to and comparison with conventional multi-layer neural network structures. The anticipated benefits from this research are vast. The model can be used in a neuro-computer architecture as a building block which can perform complicated, nonlinear, time-varying mapping from a multitude of input excitory classes to an output or decision environment. It can be used for coordinating different sensory inputs and past experience of a dynamic system and actuating signals. The commercial applications of this project can be the creation of a special-purpose neuro-computer hardware which can be used in spatio-temporal pattern recognitions in such areas as air defense systems, e.g., target tracking, and recognition. Potential robotics-related applications are trajectory planning, inverse dynamics computations, hierarchical control, task-oriented control, and collision avoidance.

  15. A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks.

    PubMed

    Cavalieri, Salvatore; Mirabella, Orazio

    1999-01-01

    The paper deals with the problem of fault tolerance in a multilayer perceptron network. Although such a network already possesses a reasonable fault tolerance capability, it may be insufficient in particularly critical applications. Studies carried out by the authors have shown that the traditional backpropagation learning algorithm may entail the presence of a certain number of weights with a much higher absolute value than the others. Further studies have shown that faults in these weights are the main cause of deterioration in the performance of the neural network. In other words, the main cause of incorrect network functioning on the occurrence of a fault is the non-uniform distribution of the absolute values of the weights in each layer. The paper proposes a learning algorithm which updates the weights, distributing their absolute values as uniformly as possible in each layer. Tests performed on benchmark test sets have shown the considerable increase in fault tolerance obtainable with the proposed approach as compared with the traditional backpropagation algorithm and with some of the most efficient fault tolerance approaches in the literature. PMID:12662719

  16. Portraying emotions at their unfolding: a multilayered approach for probing dynamics of neural networks.

    PubMed

    Raz, Gal; Winetraub, Yonatan; Jacob, Yael; Kinreich, Sivan; Maron-Katz, Adi; Shaham, Galit; Podlipsky, Ilana; Gilam, Gadi; Soreq, Eyal; Hendler, Talma

    2012-04-01

    Dynamic functional integration of distinct neural systems plays a pivotal role in emotional experience. We introduce a novel approach for studying emotion-related changes in the interactions within and between networks using fMRI. It is based on continuous computation of a network cohesion index (NCI), which is sensitive to both the strength and the variability of signal correlations between pre-defined regions. The regions encompass three clusters (namely limbic, medial prefrontal cortex (mPFC) and cognitive), each of which was previously shown to be involved in emotional processing. Two sadness-inducing film excerpts were viewed passively, and comparisons between viewers' rated sadness, a parasympathetic index, and inter-NCI and intra-NCI were obtained. Limbic intra-NCI was associated with reported sadness in both movies. However, the correlation between the parasympathetic index, the rated sadness and the limbic NCI occurred in only one movie, possibly related to a "deactivated" pattern of sadness. In this film, rated sadness intensity also correlated with the mPFC intra-NCI, possibly reflecting temporal correspondence between sadness and sympathy. Further, only for this movie, we found an association between the sadness rating and the mPFC-limbic inter-NCI time courses. To the contrary, in the other film, in which sadness was reported to commingle with horror and anger, dramatic events coincided with disintegration of these networks. Together, this may point to a difference between the cinematic experiences with regard to inter-network dynamics related to emotional regulation. These findings demonstrate the advantage of a multi-layered dynamic analysis for elucidating the uniqueness of emotional experiences with regard to unguided processing of continuous and complex stimulation. PMID:22285693

  17. Planes coordinates transformation between PSAD56 to SIRGAS using a Multilayer Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Tierra, Alfonso; Romero, Ricardo

    2014-12-01

    Prior to satellite technology developments, the geodetic networks of a country were realized from a topocentric datum, and the respective cartography was produced accordingly. With the availability of Global Navigation Satellite Systems (GNSS), cartography needs to be updated and referenced to a geocentric datum to be compatible with this technology. Cartography in Ecuador has been produced using the PSAD56 (Provisional South American Datum 1956) system; nevertheless, it is necessary to have it within the SIRGAS (SIstema de Referencia Geocéntrico para las AmericaS) system. The transformation between PSAD56 and SIRGAS uses seven transformation parameters calculated with the Helmert method. In the case of Ecuador, these parameters are suitable for scales of 1:25 000 or smaller, which does not satisfy the requirements of applications at larger scales. In this study, the neural network technique is demonstrated as an alternative for improving the transformation of UTM plane coordinates E, N (East, North) from PSAD56 to SIRGAS. From the E, N coordinates of the two systems, four transformation parameters were calculated (two translations, one rotation, and one scale difference) using a bidimensional transformation. Additionally, the same coordinates were used to train a Multilayer Artificial Neural Network (MANN), in which the inputs are the E, N coordinates in PSAD56 and the outputs are the E, N coordinates in SIRGAS. Control points were used with both the bidimensional transformation and the MANN to determine the differences between the two methods. The results imply that the coordinate transformation obtained with the trained multilayer artificial neural network improves on the bidimensional transformation and is compatible with scales of 1:5000.
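
    An illustrative sketch with synthetic control points (not the PSAD56/SIRGAS data): a least-squares four-parameter (2-D Helmert) transformation is fitted first, and an MLP is then trained on normalized coordinates to model the residual distortion; training on the residual rather than on the raw coordinates is a design choice made here for numerical stability, whereas the paper feeds the E, N coordinates directly.

      # Sketch with synthetic control points: a four-parameter similarity fit plus an
      # MLP for the residual distortion between a source and a target datum.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      src = rng.uniform(500_000, 800_000, (300, 2))            # hypothetical E, N (m)

      # Simulated "true" datum shift: similarity transform plus a smooth distortion.
      a, b, tE, tN = 0.999998, 2e-6, 250.0, 370.0
      tgt = np.column_stack([a * src[:, 0] - b * src[:, 1] + tE,
                             b * src[:, 0] + a * src[:, 1] + tN])
      tgt += 0.5 * np.sin(src / 40_000.0)                      # non-similarity distortion

      # Least-squares estimate of the four parameters (a, b, tE, tN).
      A = np.zeros((600, 4))
      A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1.0
      A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1], src[:, 0], 1.0
      p4, *_ = np.linalg.lstsq(A, tgt.reshape(-1), rcond=None)
      helmert = np.column_stack([p4[0] * src[:, 0] - p4[1] * src[:, 1] + p4[2],
                                 p4[1] * src[:, 0] + p4[0] * src[:, 1] + p4[3]])

      # MLP on normalized coordinates learns what the similarity transform cannot.
      mu, sd = src.mean(0), src.std(0)
      mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
      mlp.fit((src - mu) / sd, tgt - helmert)
      pred = helmert + mlp.predict((src - mu) / sd)

      print("RMS residual, Helmert only  (m):", round(float(np.sqrt(((helmert - tgt) ** 2).mean())), 3))
      print("RMS residual, Helmert + MLP (m):", round(float(np.sqrt(((pred - tgt) ** 2).mean())), 3))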

  18. The No-Prop algorithm: a new learning algorithm for multilayer neural networks.

    PubMed

    Widrow, Bernard; Greenblatt, Aaron; Kim, Youngsik; Park, Dookun

    2013-01-01

    A new learning algorithm for multilayer neural networks that we have named No-Propagation (No-Prop) is hereby introduced. With this algorithm, the weights of the hidden-layer neurons are set and fixed with random values. Only the weights of the output-layer neurons are trained, using steepest descent to minimize mean square error, with the LMS algorithm of Widrow and Hoff. The purpose of introducing nonlinearity with the hidden layers is examined from the point of view of Least Mean Square Error Capacity (LMS Capacity), which is defined as the maximum number of distinct patterns that can be trained into the network with zero error. This is shown to be equal to the number of weights of each of the output-layer neurons. The No-Prop algorithm and the Back-Prop algorithm are compared. Our experience with No-Prop is limited, but from the several examples presented here, it seems that the performance regarding training and generalization of both algorithms is essentially the same when the number of training patterns is less than or equal to LMS Capacity. When the number of training patterns exceeds Capacity, Back-Prop is generally the better performer. But equivalent performance can be obtained with No-Prop by increasing the network Capacity by increasing the number of neurons in the hidden layer that drives the output layer. The No-Prop algorithm is much simpler and easier to implement than Back-Prop. Also, it converges much faster. It is too early to definitively say where to use one or the other of these algorithms. This is still a work in progress. PMID:23140797
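
    A minimal sketch of the No-Prop idea (illustrative): the hidden-layer weights are set randomly and never trained, and only the output-layer weights are fitted to minimize mean square error; for brevity the LMS solution is obtained here in closed form with least squares rather than by the iterative Widrow-Hoff rule used in the paper.

      # Sketch of No-Prop: random fixed hidden weights, only the output layer trained
      # to minimize mean square error (closed-form least squares instead of iterative LMS).
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, (200, 2))
      t = (X[:, 0] * X[:, 1] > 0).astype(float) * 2 - 1    # toy +/-1 target (XOR-like)

      n_hidden = 50                                        # capacity grows with this number
      W_hidden = rng.normal(0, 1, (2, n_hidden))           # set once, never trained
      b_hidden = rng.normal(0, 1, n_hidden)

      H = np.tanh(X @ W_hidden + b_hidden)                 # fixed nonlinear hidden layer
      H1 = np.column_stack([H, np.ones(len(H))])           # add bias column
      w_out, *_ = np.linalg.lstsq(H1, t, rcond=None)       # train output layer only

      y = H1 @ w_out
      print("training accuracy:", np.mean(np.sign(y) == t))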

  19. Prediction for energy content of Taiwan municipal solid waste using multilayer perceptron neural networks.

    PubMed

    Shu, Hung-Yee; Lu, Hsin-Chung; Fan, Huan-Jung; Chang, Ming-Chin; Chen, Jyh-Cherng

    2006-06-01

    In the past decade, the amount of municipal solid waste (MSW) treated by incineration has increased significantly in Taiwan. By the year 2008, approximately 70% of the total MSW generated will be incinerated. The energy content (usually expressed as the lower heating value [LHV]) of MSW is an important parameter for the selection of incinerator capacity. In this work, wastes from 55 sampling sites, including villages, towns, cities, and remote islands in the Taiwan area, were sampled and analyzed once a season from April 2002 to March 2003 to determine the waste characteristics. The LHV of MSW in Taiwan was predicted by the multilayer perceptron (MLP) neural network model using the input parameters of elemental analysis and dry- or wet-base physical compositions. Although all three of the models predicted LHV values rather accurately, the elemental analysis model provided the most accurate prediction of LHV values. Additionally, the wet-base physical composition model was the easiest and most economical. Therefore, waste treatment operators can choose the most appropriate analysis method according to their own situation, considering factors such as time, equipment, technology, and cost. PMID:16805410

  20. Memristor-based multilayer neural networks with online gradient descent training.

    PubMed

    Soudry, Daniel; Di Castro, Dotan; Gal, Asaf; Kolodny, Avinoam; Kvatinsky, Shahar

    2015-10-01

    Learning in multilayer neural networks (MNNs) relies on continuous updating of large matrices of synaptic weights by local rules. Such locality can be exploited for massive parallelism when implementing MNNs in hardware. However, these update rules require a multiply and accumulate operation for each synaptic weight, which is challenging to implement compactly using CMOS. In this paper, a method for performing these update operations simultaneously (incremental outer products) using memristor-based arrays is proposed. The method is based on the fact that, approximately, given a voltage pulse, the conductivity of a memristor will increment proportionally to the pulse duration multiplied by the pulse magnitude if the increment is sufficiently small. The proposed method uses a synaptic circuit composed of a small number of components per synapse: one memristor and two CMOS transistors. This circuit is expected to consume between 2% and 8% of the area and static power of previous CMOS-only hardware alternatives. Such a circuit can compactly implement hardware MNNs trainable by scalable algorithms based on online gradient descent (e.g., backpropagation). The utility and robustness of the proposed memristor-based circuit are demonstrated on standard supervised learning tasks. PMID:25594981
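
    A minimal sketch of the incremental outer-product update (illustrative, not the authors' circuit model): each conductance increment is proportional to pulse duration times pulse magnitude, which together encode the product of the presynaptic input and the back-propagated error, so the whole array is updated simultaneously.

      # Sketch: an outer-product weight update in which each conductance increment is
      # proportional to pulse duration times pulse magnitude, approximating
      # delta_W = eta * error (outer) input.
      import numpy as np

      rng = np.random.default_rng(0)
      G = rng.uniform(0.1, 0.9, (3, 4))        # memristor conductances = synaptic weights

      x = np.array([0.2, -0.5, 0.8, 0.1])      # presynaptic activations (layer input)
      err = np.array([0.05, -0.02, 0.03])      # back-propagated errors of the 3 neurons
      eta = 0.1

      # Encode the input in pulse durations and the error in pulse magnitudes; for small
      # increments the conductance change is approximately duration * magnitude.
      durations = eta * np.abs(x)              # one pulse per column, shared by the array
      magnitudes = err[:, None] * np.sign(x)[None, :]
      G = G + durations[None, :] * magnitudes  # all synapses updated simultaneously

      print(np.round(G, 4))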

  1. Multilayer perceptron neural network for downscaling rainfall in arid region: A case study of Baluchistan, Pakistan

    NASA Astrophysics Data System (ADS)

    Ahmed, Kamal; Shahid, Shamsuddin; Haroon, Sobri Bin; Xiao-jun, Wang

    2015-08-01

    Downscaling rainfall in an arid region is much more challenging than in a wet region due to the erratic and infrequent behaviour of rainfall in arid regions. The complexity is further aggravated by the scarcity of data in such regions. A multilayer perceptron (MLP) neural network is proposed in the present study for downscaling rainfall in the data-scarce arid region of the Baluchistan province of Pakistan, which is considered one of the areas of Pakistan most vulnerable to climate change. National Center for Environmental Prediction (NCEP) reanalysis datasets from 20 grid points surrounding the study area were used to select the predictors using principal component analysis. Monthly rainfall data for the periods 1961-1990 and 1991-2001 were used for the calibration and validation of the MLP model, respectively. The performance of the model was assessed using various statistics including mean, variance, quartiles, root mean square error (RMSE), mean bias error (MBE), coefficient of determination (R²) and Nash-Sutcliffe efficiency (NSE). Comparisons of mean monthly time series of observed and downscaled rainfall showed good agreement during both calibration and validation periods, while the downscaling model was found to underpredict rainfall variance in both periods. Other statistical parameters also revealed good agreement between observed and downscaled rainfall during both calibration and validation periods at most of the stations.
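
    An illustrative sketch of the downscaling pipeline with synthetic data (the real predictors are NCEP reanalysis fields and observed station rainfall, not generated here): principal component analysis reduces the gridded predictors, an MLP regressor is calibrated on the first part of the record and validated on the rest, and the fit is scored with RMSE and Nash-Sutcliffe efficiency.

      # Sketch of the downscaling pipeline with synthetic data: PCA reduces correlated
      # gridded predictors, an MLP is calibrated on the first part of the record, and
      # validation is scored with RMSE and NSE.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n_months, n_grid_vars = 480, 80                   # hypothetical 40-year monthly record
      modes = rng.normal(size=(n_months, 5))            # large-scale circulation "modes"
      predictors = modes @ rng.normal(size=(5, n_grid_vars)) + 0.3 * rng.normal(size=(n_months, n_grid_vars))
      rain = np.maximum(0.0, modes[:, 0] + 0.5 * modes[:, 1] + 0.3 * rng.normal(size=n_months))

      train, test = slice(0, 360), slice(360, None)     # calibration / validation split

      pca = PCA(n_components=10).fit(predictors[train])
      mlp = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0)
      mlp.fit(pca.transform(predictors[train]), rain[train])
      pred, obs = mlp.predict(pca.transform(predictors[test])), rain[test]

      rmse = np.sqrt(np.mean((pred - obs) ** 2))
      nse = 1.0 - np.sum((pred - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
      print(f"validation RMSE = {rmse:.3f}, NSE = {nse:.3f}")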

  2. Multi-layer holographic bifurcative neural network system for real-time adaptive EOS data analysis

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Huang, K.; Diep, J.

    1992-01-01

    Optical data processing techniques have the inherent advantages of high data throughput, low weight and low power requirements. These features are particularly desirable for onboard spacecraft in-situ real-time data analysis and data compression applications. The proposed multi-layer optical holographic neural net pattern recognition technique will utilize nonlinear photorefractive devices for real-time adaptive learning to classify input data content and recognize unexpected features. Information can be stored either in analog or digital form in a nonlinear photorefractive device. The recording can be accomplished in time scales ranging from milliseconds to microseconds. When a system consisting of these devices is organized in a multi-layer structure, a feedforward neural net with bifurcating data classification capability is formed. The interdisciplinary research will involve collaboration with top digital computer architecture experts at the University of Southern California.

  3. Multi-layer holographic bifurcative neural network system for real-time adaptive EOS data analysis

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Huang, K. S.; Diep, J.

    1993-01-01

    Optical data processing techniques have the inherent advantages of high data throughput, low weight and low power requirements. These features are particularly desirable for onboard spacecraft in-situ real-time data analysis and data compression applications. The proposed multi-layer optical holographic neural net pattern recognition technique will utilize nonlinear photorefractive devices for real-time adaptive learning to classify input data content and recognize unexpected features. Information can be stored either in analog or digital form in a nonlinear photorefractive device. The recording can be accomplished in time scales ranging from milliseconds to microseconds. When a system consisting of these devices is organized in a multi-layer structure, a feedforward neural net with bifurcating data classification capability is formed. The interdisciplinary research will involve collaboration with top digital computer architecture experts at the University of Southern California.

  4. Adaptive Weibull Multiplicative Model and Multilayer Perceptron Neural Networks for Dark-Spot Detection from SAR Imagery

    PubMed Central

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach from the combination of adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with the results of a model combining non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the non-adaptive WMM filter setting parameters by developing an adaptive WMM model which is a step ahead towards a full automatic dark spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376

  5. Adaptive Weibull Multiplicative Model and Multilayer Perceptron neural networks for dark-spot detection from SAR imagery.

    PubMed

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach from the combination of adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with the results of a model combining non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the non-adaptive WMM filter setting parameters by developing an adaptive WMM model which is a step ahead towards a full automatic dark spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376

  6. Multilayer cellular neural network and fuzzy C-mean classifiers: comparison and performance analysis

    NASA Astrophysics Data System (ADS)

    Trujillo San-Martin, Maite; Hlebarov, Vejen; Sadki, Mustapha

    2004-11-01

    Neural networks and fuzzy systems are considered two of the most important artificial intelligence algorithms which provide classification capabilities obtained through different learning schemes that capture knowledge and process it according to particular rule-based algorithms. These methods are especially suited to exploit the tolerance for uncertainty and vagueness in cognitive reasoning. By applying these methods with some relevant knowledge-based rules extracted using different data analysis tools, it is possible to obtain a robust classification performance for a wide range of applications. This paper focuses on non-destructive testing quality control systems, in particular the classification of metallic structures according to corrosion time using a novel cellular neural network architecture, which is explained in detail. Additionally, we compare these results with the ones obtained using the fuzzy C-means clustering algorithm and analyse both classifiers according to their classification capabilities.
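
    For reference, the fuzzy C-means clustering used here as the comparison classifier can be sketched in a few lines (a generic implementation with illustrative data, not the parameters used in the paper):

```python
import numpy as np

# Minimal fuzzy C-means update loop (standard algorithm; data are illustrative only).
def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)        # normalized inverse-distance memberships
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (20, 2)),
               np.random.default_rng(2).normal(3, 0.3, (20, 2))])
centers, U = fuzzy_c_means(X)
print(centers)
```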

  7. Control of Multilayer Networks

    PubMed Central

    Menichetti, Giulia; Dall’Asta, Luca; Bianconi, Ginestra

    2016-01-01

    The controllability of a network is a theoretical problem of relevance in a variety of contexts ranging from financial markets to the brain. Until now, network controllability has been characterized only on isolated networks, while the vast majority of complex systems are formed by multilayer networks. Here we build a theoretical framework for the linear controllability of multilayer networks by mapping the problem into a combinatorial matching problem. We found that correlating the external signals in the different layers can significantly reduce the multiplex network robustness to node removal, as it can be seen in conjunction with a hybrid phase transition occurring in interacting Poisson networks. Moreover we observe that multilayer networks can stabilize the fully controllable multiplex network configuration that can be stable also when the full controllability of the single network is not stable. PMID:26869210

  8. Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography

    NASA Astrophysics Data System (ADS)

    Ghaffari Razin, Mir Reza; Voosoghi, Behzad

    2016-08-01

    Tomography is a very cost-effective method to study physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For numerical experiments, observations collected at 37 GPS stations from the Iranian permanent GPS network (IPGN) are used. A smoothed TEC approach was used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the international reference ionosphere 2012 (IRI-2012) are used as the objective function in training the neural network. Ionosonde observations are used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. Also, a root mean square error (RMSE) of 0.17 × 10^11 electrons/m^3 is computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computation speed than other ionosphere reconstruction methods.

  9. Multilayer neural networks for solving a class of partial differential equations.

    PubMed

    He, S; Reif, K; Unbehauen, R

    2000-04-01

    In this paper, training the derivative of a feedforward neural network with the extended backpropagation algorithm is presented. The method is used to solve a class of first-order partial differential equations for input-to-state linearizable or approximately linearizable systems. The solution of the differential equation, together with the Lie derivatives, yields a change of coordinates. A feedback control law is then designed to keep the system in a desired behavior. Simulations demonstrate the advantages of the proposed method, which include easily and quickly finding approximate solutions to complicated first-order partial differential equations. Therefore, the work presented here can benefit the design of the class of nonlinear control systems where the nontrivial solutions of the partial differential equations are difficult to find. PMID:10937971
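
    The underlying idea, training a network so that its analytic derivative satisfies a differential equation, can be sketched on a toy first-order problem (this uses plain finite-difference gradient descent on the residual, not the authors' extended backpropagation algorithm):

```python
import numpy as np

# Toy sketch: fit a one-hidden-layer tanh network u(x) so that its analytic
# derivative satisfies u'(x) + u(x) = 0 with u(0) = 1 (exact solution: exp(-x)).
rng = np.random.default_rng(0)
H = 10                                       # hidden units
params = 0.5 * rng.standard_normal(3 * H)    # [w, b, v] packed into one vector
xs = np.linspace(0.0, 2.0, 41)               # collocation points

def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:]

def u(p, x):
    w, b, v = unpack(p)
    return np.tanh(np.outer(x, w) + b) @ v

def du_dx(p, x):
    w, b, v = unpack(p)                      # d/dx tanh(wx+b) = w * (1 - tanh^2)
    return ((1.0 - np.tanh(np.outer(x, w) + b) ** 2) * w) @ v

def loss(p):
    residual = du_dx(p, xs) + u(p, xs)       # differential-equation residual
    boundary = u(p, np.array([0.0]))[0] - 1.0
    return np.mean(residual ** 2) + boundary ** 2

# Crude finite-difference gradient descent, just to show the training-loop shape.
eps, lr = 1e-5, 0.1
for step in range(2000):
    base = loss(params)
    grad = np.array([(loss(params + eps * np.eye(len(params))[i]) - base) / eps
                     for i in range(len(params))])
    params -= lr * grad

print(u(params, np.array([1.0]))[0], np.exp(-1.0))   # network value at x=1 vs exact exp(-1)
```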

  10. Neural Networks

    SciTech Connect

    Smith, Patrick I.

    2003-09-23

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. There are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle produced a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  11. Propagation of firing rate by synchronization and coherence of firing pattern in a feed-forward multilayer neural network

    NASA Astrophysics Data System (ADS)

    Yi, Ming; Yang, Lijian

    2010-06-01

    When neurons in layer 1 fire irregularly under stochastic noise, it is found that synchronous firings can develop gradually in later layers within a feed-forward multilayer neural network, which is consistent with experimental findings. The underlying mechanism of propagation of firing rate is explored, and rate encoding realized by synchronization is clarified. Furthermore, the effects of the connection probability between nearest layers, stochastic noise, and the ratio of inhibitory connections to total connections on (i) propagation of firing rate by synchronization and (ii) coherence of firing pattern are investigated, respectively. It is observed that (i) there is a threshold for connection probability, beyond which the firing rate of each layer can propagate successfully through the whole network by synchronization. The dependence of firing rate on layer index is very different for different connection probabilities. In addition, the larger the connection probability, the more rapidly the synchrony is built up. (ii) Increasing the intensity of stochastic noise enhances the firing rate in the output layer. Stochastic noise plays a constructive role in improving synchrony by causing synchronization to occur more quickly. (iii) The inhibitory connection offsets excitatory input and therefore reduces firing rate and synchrony. As layer index increases, the coherence measure goes through a peak, i.e., the coherence of firing pattern is the worst at a certain layer. With increasing ratio of inhibitory connections, the variability of the firing train is enhanced, exhibiting the destructive role of inhibitory connections on coherence of firing pattern.

  12. Aitken-based acceleration methods for assessing convergence of multilayer neural networks.

    PubMed

    Pilla, R S; Kamarthi, S V; Lindsay, B G

    2001-01-01

    This paper first develops the ideas of the Aitken Δ² method to accelerate the rate of convergence of an error sequence (the value of the objective function at each step) obtained by training a neural network with a sigmoidal activation function via the backpropagation algorithm. The Aitken method is exact when the error sequence is exactly geometric. However, theoretical and empirical evidence suggests that the best possible rate of convergence obtainable for such an error sequence is log-geometric. This paper develops a new invariant extended-Aitken acceleration method for accelerating log-geometric sequences. The resulting accelerated sequence enables one to predict the final value of the error function. These predictions can in turn be used to assess the distance between the current and final solution and thereby provide a stopping criterion for a desired accuracy. Each of the techniques described is applicable to a wide range of problems. The invariant extended-Aitken acceleration approach shows improved acceleration as well as outstanding prediction of the final error in the practical problems considered. PMID:18249928
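
    For reference, the classical Aitken Δ² transform that the paper builds on (not the invariant extended-Aitken variant it develops) is straightforward to sketch:

```python
def aitken_delta2(errors):
    """Classical Aitken delta-squared extrapolation of an error sequence.

    For a (nearly) geometric sequence e_n, the accelerated term
    a_n = e_n - (e_{n+1} - e_n)^2 / (e_{n+2} - 2*e_{n+1} + e_n)
    estimates the limit the sequence is converging to.
    """
    accelerated = []
    for n in range(len(errors) - 2):
        denom = errors[n + 2] - 2.0 * errors[n + 1] + errors[n]
        if abs(denom) < 1e-15:          # sequence already (numerically) converged
            accelerated.append(errors[n])
        else:
            accelerated.append(errors[n] - (errors[n + 1] - errors[n]) ** 2 / denom)
    return accelerated

# Example: a geometric error sequence 2 + 0.5**n is extrapolated to its limit 2.
print(aitken_delta2([2 + 0.5 ** n for n in range(8)]))
```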

  13. Multilayered perceptron neural networks to compute energy losses in magnetic cores

    NASA Astrophysics Data System (ADS)

    Kucuk, Ilker

    2006-12-01

    This paper presents a new approach based on multilayered perceptrons (MLPs) to compute the specific energy losses of toroidal wound cores built from 3% SiFe 0.27 mm thick M4, 0.1 and 0.08 mm thin gauge electrical steel strips. The MLP has been trained by a back-propagation and extended delta-bar-delta learning algorithm. The results obtained by using the MLP model were compared with a commonly used conventional method. The comparison has shown that the proposed model improved loss estimation with respect to the conventional method.

  14. Support vector machine based training of multilayer feedforward neural networks as optimized by particle swarm algorithm: application in QSAR studies of bioactivity of organic compounds.

    PubMed

    Lin, Wei-Qi; Jiang, Jian-Hui; Zhou, Yan-Ping; Wu, Hai-Long; Shen, Guo-Li; Yu, Ru-Qin

    2007-01-30

    Multilayer feedforward neural networks (MLFNNs) are important modeling techniques widely used in QSAR studies for their ability to represent nonlinear relationships between descriptors and activity. However, the problems of overfitting and premature convergence to local optima still pose great challenges in the practice of MLFNNs. To circumvent these problems, a support vector machine (SVM) based training algorithm for MLFNNs has been developed with the incorporation of particle swarm optimization (PSO). The introduction of the SVM based training mechanism imparts the developed algorithm with an inherent capacity for combating the overfitting problem. Moreover, with the implementation of PSO for searching the optimal network weights, the SVM based learning algorithm shows relatively high efficiency in converging to the optima. The proposed algorithm has been evaluated using the Hansch data set. Application to QSAR studies of the activity of COX-2 inhibitors is also demonstrated. The results reveal that this technique provides superior performance to backpropagation (BP) and PSO-trained neural networks. PMID:17186488
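
    A bare-bones particle swarm loop for searching a weight vector (a generic sketch with an arbitrary placeholder loss, not the paper's SVM-based objective) looks roughly like this:

```python
import numpy as np

# Minimal PSO over a vector of network weights (generic sketch; the paper's
# SVM-based objective is replaced here by an arbitrary loss function).
def pso_minimize(loss, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))       # particle positions = weight vectors
    vel = np.zeros_like(pos)
    best_pos = pos.copy()                               # personal bests
    best_val = np.array([loss(p) for p in pos])
    g = best_pos[best_val.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g - pos)
        pos += vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < best_val
        best_pos[improved], best_val[improved] = pos[improved], vals[improved]
        g = best_pos[best_val.argmin()].copy()
    return g, best_val.min()

# Example: minimize a simple quadratic standing in for a network training loss.
w_opt, f_opt = pso_minimize(lambda p: np.sum((p - 0.3) ** 2), dim=5)
print(w_opt, f_opt)
```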

  15. Meteorological Factors Related to Emergency Admission of Elderly Stroke Patients in Shanghai: Analysis with a Multilayer Perceptron Neural Network

    PubMed Central

    Meng, Guilin; Tan, Yan; Fang, Min; Yang, Hongyan; Liu, Xueyuan; Zhao, Yanxin

    2015-01-01

    Background The aim of this study was to predict the emergency admission of elderly stroke patients in Shanghai by using a multilayer perceptron (MLP) neural network. Material/Methods Patients (>60 years) with first-ever stroke registered in the Emergency Center of Neurology Department, Shanghai Tenth People’s Hospital, from January 2012 to June 2014 were enrolled into the present study. Daily climate records were obtained from the National Meteorological Office. MLP was used to model the daily emergency admission into the neurology department with meteorological factors such as wind level, weather type, daily maximum temperature, lowest temperature, average temperature, and absolute temperature difference. The relationships of meteorological factors with the emergency admission due to stroke were analyzed in an MLP model. Results In 886 days, 2180 first-onset elderly stroke patients were enrolled, and the average number of stroke patients was 2.46 per day. MLP was used to establish a model for the prediction of dates with low stroke admission (≤4) and those with high stroke admission (≥5). For the days with low stroke admission, the absolute temperature difference accounted for 40.7% of admissions, while for the days with high stroke admission, the weather types accounted for 73.3%. Conclusions Outdoor temperature and related meteorological parameters are associated with stroke attack. The absolute temperature difference and the weather types have adverse effects on stroke. Further study is needed to determine if other meteorological factors such as pollutants also play important roles in stroke attack. PMID:26590182

  16. Artificial neural network analysis of RBS data with roughness: Application to Ti 0.4Al 0.6N/Mo multilayers

    NASA Astrophysics Data System (ADS)

    Öhl, G.; Matias, V.; Vieira, A.; Barradas, N. P.

    2003-10-01

    In multilayered Ti 0.4Al 0.6N/Mo coatings, a strengthening effect can be obtained by using alternate layers of materials with high and low elastic constants. This behaviour requires a multilayer periodicity below a certain value in order to reduce dislocation motion across layer interfaces. Below this critical period, in most cases the hardness decreases as the period decreases. The multiple interfaces have an important role in this behaviour, working as stress relaxation areas and preventing crack propagation, thereby influencing the mechanical properties of the system. Understanding the origin of these effects requires knowledge of the interface structure, where the interfacial roughness is of prime importance. We used Rutherford backscattering to study roughness in a quantitative way, and developed an artificial neural network algorithm dedicated to the analysis of the data. The results compare very well with previous TEM and AFM data.

  17. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the characteristics…

  18. Object reconstruction in multilayer neural network based profilometry using grating structure comprising two regions with different spatial periods

    NASA Astrophysics Data System (ADS)

    Ganotra, Dinesh; Joseph, Joby; Singh, Kehar

    2004-08-01

    A feed-forward backpropagation neural network has been used in fringe projection profilometry for reconstruction of a three-dimensional (3D) object. A grating structure comprising two regions of different spatial periods is projected on the reference surface over which the object is placed. The shorter spatial period part of the grating is projected over the object, whereas the longer spatial period part is projected on the reference surface only. The 3D object shape is reconstructed with the help of neural networks using images of the projected grating. During the training phase of the network, the shorter spatial period grating along with the longer spatial period grating is used. Experimental results are presented for a diffuse object, showing that the 3D shape of the object is recovered using the above-mentioned method. In contrast, phase wrapping takes place in Fourier transform profilometry when only a single grating of shorter spatial period is used.

  19. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  20. Nested Neural Networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1992-01-01

    Report presents analysis of nested neural networks, consisting of interconnected subnetworks. Analysis based on simplified mathematical models more appropriate for artificial electronic neural networks, partly applicable to biological neural networks. Nested structure allows for retrieval of individual subpatterns. Requires fewer wires and connection devices than fully connected networks, and allows for local reconstruction of damaged subnetworks without rewiring entire network.

  1. Optimization of metformin HCl 500 mg sustained release matrix tablets using Artificial Neural Network (ANN) based on Multilayer Perceptrons (MLP) model.

    PubMed

    Mandal, Uttam; Gowda, Veeran; Ghosh, Animesh; Bose, Anirbandeep; Bhaumik, Uttam; Chatterjee, Bappaditya; Pal, Tapan Kumar

    2008-02-01

    The aim of the present study was to apply the simultaneous optimization method incorporating an Artificial Neural Network (ANN) based on the Multi-layer Perceptron (MLP) model to the development of metformin HCl 500 mg sustained release matrix tablets with an optimized in vitro release profile. The amounts of HPMC K15M and PVP K30, each at three levels (-1, 0, +1), were selected as causal factors. In vitro dissolution time profiles at four different sampling times (1 h, 2 h, 4 h and 8 h) were chosen as output variables. Thirteen kinds of metformin matrix tablets were prepared according to a 2^3 factorial design (central composite) with five extra center points, and their dissolution tests were performed. Commercially available STATISTICA Neural Network software (StatSoft, Inc., Tulsa, OK, U.S.A.) was used throughout the study. The training process of the MLP was continued until a satisfactory root mean square (RMS) value for the test data was obtained using the feed-forward back-propagation method. The root mean square value for the trained network was 0.000097, which indicated that the optimal MLP model was reached. The optimal tablet formulation predicted by the MLP, based on some predetermined release criteria, was 336 mg of HPMC K15M and 130 mg of PVP K30. Calculated difference (f1 = 2.19) and similarity (f2 = 89.79) factors indicated that there was no difference between the predicted and experimentally observed drug release profiles for the optimal formulation. This work illustrates the potential for an artificial neural network with MLP to assist in the development of sustained release dosage forms. PMID:18239298
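
    For reference, the difference and similarity factors quoted above are the standard dissolution-profile comparison metrics, commonly defined as (standard pharmacopoeial usage, not taken from the paper itself):

```latex
f_1 = \frac{\sum_{t=1}^{n} \lvert R_t - T_t \rvert}{\sum_{t=1}^{n} R_t} \times 100,
\qquad
f_2 = 50 \,\log_{10}\!\left(\left[1 + \frac{1}{n}\sum_{t=1}^{n} (R_t - T_t)^2\right]^{-0.5} \times 100\right),
```

    where R_t and T_t are the reference and test percentages dissolved at sampling time t, and n is the number of sampling times.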

  2. Modeling of gamma ray energy-absorption buildup factors for thermoluminescent dosimetric materials using multilayer perceptron neural network: A comparative study

    NASA Astrophysics Data System (ADS)

    Kucuk, Nil; Manohara, S. R.; Hanagodimath, S. M.; Gerward, L.

    2013-05-01

    In this work, multilayered perceptron neural networks (MLPNNs) were presented for the computation of the gamma-ray energy absorption buildup factors (BA) of seven thermoluminescent dosimetric (TLD) materials [LiF, BeO, Na2B4O7, CaSO4, Li2B4O7, KMgF3, Ca3(PO4)2] in the energy region 0.015-15 MeV, and for penetration depths up to 10 mfp (mean free path). The MLPNNs have been trained by a Levenberg-Marquardt learning algorithm. The developed model is in 99% agreement with the ANSI/ANS-6.4.3 standard data set. Furthermore, the model is fast and does not require tremendous computational efforts. The estimated BA data for the TLD materials are given as functions of penetration depth and incident photon energy and compared with the results of the interpolation method using the Geometrical Progression (G-P) fitting formula.

  3. Morphological neural networks

    SciTech Connect

    Ritter, G.X.; Sussner, P.

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
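
    The neuron-level contrast described above can be illustrated in a couple of lines (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

# A single morphological "max-of-sums" neuron versus a conventional neuron.
def conventional_neuron(x, w, threshold=0.0):
    return float(np.dot(w, x) >= threshold)          # sum of products, then threshold

def morphological_neuron(x, w, threshold=0.0):
    return float(np.max(x + w) >= threshold)         # maximum of sums, then threshold

x = np.array([0.2, -1.0, 0.7])
w = np.array([0.5, 0.1, -0.3])
print(conventional_neuron(x, w), morphological_neuron(x, w))
```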

  4. Application of design of experiments and multilayer perceptrons neural network in the optimization of diclofenac sodium extended release tablets with Carbopol 71G.

    PubMed

    Ivić, Branka; Ibrić, Svetlana; Cvetković, Nebojsa; Petrović, Aleksandra; Trajković, Svetlana; Djurić, Zorica

    2010-07-01

    The purpose of the study was to screen the effects of formulation factors on the in vitro release profile of diclofenac sodium from matrix tablets using design of experiments (DOE). Formulations of diclofenac sodium tablets, with Carbopol 71G as the matrix substance, were optimized by an artificial neural network. According to a Central Composite Design, 10 formulations of diclofenac sodium matrix tablets were prepared. As network inputs, the concentrations of Carbopol 71G and Kollidon K-25 were selected. In vitro dissolution time profiles at 5 different sampling times were chosen as responses. The independent variables and the release parameters were processed by a multilayer perceptron neural network (MLP). Results of the drug release studies indicate that drug release rates vary between different formulations, ranging from 1 h to more than 8 h to complete dissolution. For two tested formulations there was no difference between the experimental and MLP-predicted in vitro profiles. The MLP model was optimized. The root mean square value for the trained network was 0.07%, which indicated that the optimal MLP model was reached. The optimal tablet formulation predicted by the MLP was 23% Carbopol 71G and 0.8% Kollidon K-25. Calculated difference (f1 = 7.37) and similarity (f2 = 70.79) factors indicate that there is no difference between the predicted and experimentally observed drug release profiles for the optimal formulation. The satisfactory prediction of drug release for the optimal formulation by the MLP in this study has shown the applicability of this optimization method in modeling extended release tablet formulations. PMID:20606343

  5. Structural reducibility of multilayer networks

    NASA Astrophysics Data System (ADS)

    de Domenico, Manlio; Nicosia, Vincenzo; Arenas, Alexandre; Latora, Vito

    2015-04-01

    Many complex systems can be represented as networks consisting of distinct types of interactions, which can be categorized as links belonging to different layers. For example, a good description of the full protein-protein interactome requires, for some organisms, up to seven distinct network layers, accounting for different genetic and physical interactions, each containing thousands of protein-protein relationships. A fundamental open question is then how many layers are indeed necessary to accurately represent the structure of a multilayered complex system. Here we introduce a method based on quantum theory to reduce the number of layers to a minimum while maximizing the distinguishability between the multilayer network and the corresponding aggregated graph. We validate our approach on synthetic benchmarks and we show that the number of informative layers in some real multilayer networks of protein-genetic interactions, social, economical and transportation systems can be reduced by up to 75%.

  6. [Multi-layer perceptron neural network based algorithm for simultaneous retrieving temperature and emissivity from hyperspectral FTIR data].

    PubMed

    Cheng, Jie; Xiao, Qing; Li, Xiao-Wen; Liu, Qin-Huo; Du, Yong-Ming

    2008-04-01

    The present paper firstly points out a defect of typical temperature and emissivity separation algorithms when dealing with hyperspectral FTIR data: the conventional temperature and emissivity algorithms cannot reproduce correct emissivity values when the difference between the ground-leaving radiance and the object's blackbody radiation at its true temperature is on the same order as the instrument random noise, and this phenomenon is very prone to occurrence near 714 and 1250 cm^-1 in field measurements. In order to settle this defect, a three-layer perceptron neural network has been introduced into the simultaneous inversion of temperature and emissivity from hyperspectral FTIR data. The soil emissivity spectra from the ASTER spectral library were used to produce the training data, the soil emissivity spectra from the MODIS spectral library were used to produce the test data, and the result of the network test shows the MLP is robust. Meanwhile, the ISSTES algorithm was used to retrieve the temperature and emissivity from the test data. By comparing the results of the MLP and ISSTES, we found the MLP can overcome the disadvantage of typical temperature and emissivity separation, although the RMSE of the derived emissivity using the MLP is lower than that of ISSTES as a whole. Hence, the MLP can be regarded as a beneficial complement to typical temperature and emissivity separation. PMID:18619297

  7. Mathematical Formulation of Multilayer Networks

    NASA Astrophysics Data System (ADS)

    De Domenico, Manlio; Solé-Ribalta, Albert; Cozzo, Emanuele; Kivelä, Mikko; Moreno, Yamir; Porter, Mason A.; Gómez, Sergio; Arenas, Alex

    2013-10-01

    A network representation is useful for describing the structure of a large variety of complex systems. However, most real and engineered systems have multiple subsystems and layers of connectivity, and the data produced by such systems are very rich. Achieving a deep understanding of such systems necessitates generalizing “traditional” network theory, and the newfound deluge of data now makes it possible to test increasingly general frameworks for the study of networks. In particular, although adjacency matrices are useful to describe traditional single-layer networks, such a representation is insufficient for the analysis and description of multiplex and time-dependent networks. One must therefore develop a more general mathematical framework to cope with the challenges posed by multilayer complex systems. In this paper, we introduce a tensorial framework to study multilayer networks, and we discuss the generalization of several important network descriptors and dynamical processes—including degree centrality, clustering coefficients, eigenvector centrality, modularity, von Neumann entropy, and diffusion—for this framework. We examine the impact of different choices in constructing these generalizations, and we illustrate how to obtain known results for the special cases of single-layer and multiplex networks. Our tensorial approach will be helpful for tackling pressing problems in multilayer complex systems, such as inferring who is influencing whom (and by which media) in multichannel social networks and developing routing techniques for multimodal transportation systems.
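
    As an illustration of such a tensorial encoding (using a common rank-4 adjacency-tensor convention, which may differ in detail from the paper's notation):

```latex
M^{i\alpha}_{j\beta} =
\begin{cases}
1 & \text{if node $i$ in layer $\alpha$ is connected to node $j$ in layer $\beta$,}\\
0 & \text{otherwise,}
\end{cases}
```

    so that a single-layer network is recovered when only the α = β entries are allowed, and a multiplex network when interlayer links connect only replicas of the same node (i = j, α ≠ β).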

  8. A consensual neural network

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.

    1991-01-01

    A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. CNN is based on the statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are applied as if they were independent inputs. The independent inputs are classified using stage neural networks and outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote-sensing data and geographic data are given.

  9. Nonlinear PLS modeling using neural networks

    SciTech Connect

    Qin, S.J.; McAvoy, T.J.

    1994-12-31

    This paper discusses the embedding of neural networks into the framework of the PLS (partial least squares) modeling method, resulting in a neural net PLS modeling approach. By using the universal approximation property of neural networks, the PLS modeling method is generalized to a nonlinear framework. The resulting model uses neural networks to capture the nonlinearity and keeps the PLS projection to attain a robust generalization property. In this paper, the standard PLS modeling method is briefly reviewed. Then a neural net PLS (NNPLS) modeling approach is proposed which incorporates feedforward networks into the PLS modeling. A multi-input-multi-output nonlinear modeling task is decomposed into linear outer relations and simple nonlinear inner relations which are performed by a number of single-input-single-output networks. Since only a small network is trained at one time, the over-parametrized problem of the direct neural network approach is circumvented even when the training data are very sparse. A conjugate gradient learning method is employed to train the network. It is shown, by analyzing the NNPLS algorithm, that the global NNPLS model is equivalent to a multilayer feedforward network. Finally, applications of the proposed NNPLS method are presented with comparison to the standard linear PLS method and the direct neural network approach. The proposed neural net PLS method gives better prediction results than the PLS modeling method and the direct neural network approach.

  10. Exploring neural network technology

    SciTech Connect

    Naser, J.; Maulbetsch, J.

    1992-12-01

    EPRI is funding several projects to explore neural network technology, a form of artificial intelligence that some believe may mimic the way the human brain processes information. This research seeks to provide a better understanding of fundamental neural network characteristics and to identify promising utility industry applications. Results to date indicate that the unique attributes of neural networks could lead to improved monitoring, diagnostic, and control capabilities for a variety of complex utility operations. 2 figs.

  11. Advances in Artificial Neural Networks - Methodological Development and Application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  12. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Traditional neural networks like multi-layered perceptrons (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical backpropagation learning algorithm to interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding a set of solutions to the function approximation problem.
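
    The interval forward pass that such a modification has to account for can be sketched as follows (interval arithmetic through one affine layer plus a monotone activation; this is not the paper's interval backpropagation rule itself):

```python
import numpy as np

# Propagate interval-valued inputs [lo, hi] through one layer of an MLP.
def interval_linear(lo, hi, W, b):
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b     # smallest achievable pre-activation
    out_hi = W_pos @ hi + W_neg @ lo + b     # largest achievable pre-activation
    return out_lo, out_hi

def interval_tanh(lo, hi):
    return np.tanh(lo), np.tanh(hi)          # tanh is monotone, so bounds map directly

W = np.array([[0.5, -1.0], [2.0, 0.3]])
b = np.array([0.1, -0.2])
lo, hi = np.array([0.0, 0.4]), np.array([0.2, 0.6])   # interval-valued inputs
print(interval_tanh(*interval_linear(lo, hi, W, b)))
```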

  13. Fuzzy neural network with fast backpropagation learning

    NASA Astrophysics Data System (ADS)

    Wang, Zhiling; De Sario, Marco; Guerriero, Andrea; Mugnuolo, Raffaele

    1995-03-01

    Neural filters based on a multilayer backpropagation network have been proved to be able to define almost all linear or non-linear filters. Because of the slow convergence of these networks, however, the applicable fields have been limited. In this paper, fuzzy logic is introduced to adjust the learning rate and momentum parameter depending upon the output errors and training times. This greatly improves the convergence of the network. Test curves are shown to demonstrate the fast filters' performance.
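
    A toy version of such a fuzzy learning-rate adjustment might look like this (membership functions and rule consequents are invented for illustration, not taken from the paper):

```python
import numpy as np

# Toy fuzzy adjustment of the learning rate from the relative error change.
def tri(x, a, b, c):
    """Triangular membership function peaked at b on the support [a, c]."""
    return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

def fuzzy_lr_update(lr, prev_error, error):
    change = (error - prev_error) / max(abs(prev_error), 1e-12)
    # Fuzzy sets on the error change: falling, steady, rising.
    mu = np.array([tri(change, -1.0, -0.3, 0.0),
                   tri(change, -0.3, 0.0, 0.3),
                   tri(change, 0.0, 0.3, 1.0)])
    gain = np.array([1.3, 1.0, 0.6])            # rule consequents: grow / keep / shrink
    if mu.sum() == 0.0:                         # outside all supports: clamp to nearest rule
        gain_eff = 1.3 if change < 0 else 0.6
    else:
        gain_eff = float(mu @ gain / mu.sum())  # weighted (centroid-style) defuzzification
    return float(np.clip(lr * gain_eff, 1e-4, 1.0))

lr = 0.1
for prev, cur in zip([1.0, 0.8, 0.65, 0.7], [0.8, 0.65, 0.7, 0.5]):
    lr = fuzzy_lr_update(lr, prev, cur)
    print(round(lr, 4))
```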

  14. Time series prediction using a rational fraction neural network

    SciTech Connect

    Lee, K.; Lee, Y.C.; Barnes, C.; Aldrich, C.H.; Kindel, J.

    1988-01-01

    An efficient neural network based on a rational fraction representation has been trained to perform time series prediction. The network is a generalization of the Volterra-Wiener network while still retaining the computational efficiency of the latter. Because of the second order convergent nature of the learning algorithm, the rational net is computationally far more efficient than multilayer networks. The rational fractional representation is, however, more restrictive than the multilayer networks.

  15. Multilayer weighted social network model

    NASA Astrophysics Data System (ADS)

    Murase, Yohsuke; Török, János; Jo, Hang-Hyun; Kaski, Kimmo; Kertész, János

    2014-11-01

    Recent empirical studies using large-scale data sets have validated the Granovetter hypothesis on the structure of the society in that there are strongly wired communities connected by weak ties. However, as interaction between individuals takes place in diverse contexts, these communities turn out to be overlapping. This implies that the society has a multilayered structure, where the layers represent the different contexts. To model this structure we begin with a single-layer weighted social network (WSN) model showing the Granovetterian structure. We find that when merging such WSN models, a sufficient amount of interlayer correlation is needed to maintain the relationship between topology and link weights, while these correlations destroy the enhancement in the community overlap due to multiple layers. To resolve this, we devise a geographic multilayer WSN model, where the indirect interlayer correlations due to the geographic constraints of individuals enhance the overlaps between the communities and, at the same time, the Granovetterian structure is preserved.

  16. Evaluation of Süleymanköy (Diyarbakir, Eastern Turkey) and Seferihisar (Izmir, Western Turkey) Self Potential Anomalies with Multilayer Perceptron Neural Networks

    NASA Astrophysics Data System (ADS)

    Kaftan, Ilknur; Sindirgi, Petek

    2013-04-01

    Self-potential (SP) is one of the oldest geophysical methods and provides important information about near-surface structures. Several methods have been developed to interpret SP data using simple geometries. This study investigated the inverse solution of a buried, polarized sphere-shaped self-potential (SP) anomaly via Multilayer Perceptron Neural Networks (MLPNN). The polarization angle (α) and depth to the centre of the sphere (h) were estimated. The MLPNN is applied to synthetic and field SP data. In order to see the capability of the method in detecting the number of sources, the MLPNN was applied to different spherical models at different depths and locations. Additionally, the performance of the MLPNN was tested by adding random noise to the same synthetic test data. The sphere model parameters were successfully recovered under different S/N ratios. Then, the MLPNN method was applied to two field examples. The first one is a cross section taken from the SP anomaly map of the Ergani-Süleymanköy (Turkey) copper mine. The MLPNN was also applied to SP data from the Seferihisar, Izmir (Western Turkey) geothermal field. The MLPNN results showed good agreement with the original synthetic data set. The technique gave satisfactory results following the addition of 5% and 10% Gaussian noise levels. The MLPNN results were compared to other SP interpretation techniques, such as Normalized Full Gradient (NFG), inverse solution and nomogram methods. All of the techniques showed strong similarity. Consequently, the synthetic and field applications of this study show that MLPNN provides a reliable evaluation of self-potential data modelled by the sphere model.
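
    For context, the forward model commonly used for the SP anomaly of a buried polarized sphere along a profile is usually written as (a standard textbook form, quoted here as background rather than from the paper):

```latex
V(x) \;=\; K \,\frac{(x - x_0)\cos\alpha + h\sin\alpha}{\left[(x - x_0)^2 + h^2\right]^{3/2}},
```

    where x_0 is the horizontal source location, h the depth to the sphere centre, α the polarization angle, and K an amplitude coefficient.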

  17. Classification of radar clutter using neural networks.

    PubMed

    Haykin, S; Deng, C

    1991-01-01

    A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented. PMID:18282874

  18. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  19. Critical Branching Neural Networks

    ERIC Educational Resources Information Center

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  20. Neural network applications

    NASA Technical Reports Server (NTRS)

    Padgett, Mary L.; Desai, Utpal; Roppel, T.A.; White, Charles R.

    1993-01-01

    A design procedure is suggested for neural networks which accommodates the inclusion of such knowledge-based systems techniques as fuzzy logic and pairwise comparisons. The use of these procedures in the design of applications combines qualitative and quantitative factors with empirical data to yield a model with justifiable design and parameter selection procedures. The procedure is especially relevant to areas of back-propagation neural network design which are highly responsive to the use of precisely recorded expert knowledge.

  1. Science of artificial neural networks; Proceedings of the Meeting, Orlando, FL, Apr. 21-24, 1992

    SciTech Connect

    Ruck, D.W.

    1992-01-01

    The present conference discusses high-order neural networks with adaptive architecture, a parallel cascaded one-step learning machine, stretch and hammer neural networks, visual grammars for neural networks, the net pruning of a multilayer perceptron, neural correlates of the sensorial and cognitive control of behavior, neural nets for massively parallel optimization, parametric and additive perturbations for global optimization, design rules for multilayer perceptrons, the negative transfer problem in neural networks, and a vision-based neural multimap pattern recognition architecture. Also discussed are function prediction with recurrent neural networks, fuzzy neural computing systems, edge detection via fuzzy neural networks, modeling confusion for autonomous systems, self-organization by fuzzy clustering, neural nets in information retrieval, neighborhoods and trajectories in Kohonen maps, the random structure of error surfaces, and conceptual recognition by neural networks.

  2. Coronary Artery Diagnosis Aided by Neural Network

    NASA Astrophysics Data System (ADS)

    Stefko, Kamil

    2007-01-01

    Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. Application of an optimised feed-forward multi-layer back propagation neural network (MLBP) for detection of narrowing in coronary artery vessels is presented in this paper. The research was performed using 580 data records from traditional ECG exercise tests confirmed by coronary arteriography results. Each record of the training database included a description of the state of a patient, providing the input data for the neural network. The level and slope of the ST segment of a 12-lead ECG signal recorded at rest and after effort (48 floating point values) were the main components of the input data for the neural network. Coronary arteriography results (verifying the existence or absence of more than 50% stenosis of the particular coronary vessels) were used as the correct neural network training output pattern. More than 96% of cases were correctly recognised by the especially optimised and thoroughly verified neural network. The leave-one-out method was used for neural network verification, so all 580 data records could be used for training as well as for verification of the neural network.

  3. Multilayer Kohonen network and its separability analysis

    NASA Astrophysics Data System (ADS)

    Liu, Chao-yuan; Li, Jie-Gu

    1995-04-01

    This paper presents a model of a multilayer Kohonen network. Because it obeys the winner-take-all learning rule and projects high dimensional patterns into a one or two dimensional space, the conventional Kohonen network has many limitations in its applications, such as limited pattern separability and the lack of an open-ended structure. Taking advantage of an innovative learning method and its multilayer structure, the multilayer Kohonen network achieves nonlinear pattern partition. Because pattern clusters only need to be labeled with appropriate category names or numbers, the network is an open-ended system, so it is far more powerful than the conventional Kohonen network. The mechanism of the multilayer Kohonen network is explained in detail, and its nonlinear pattern separability is analyzed theoretically. As a result of an experiment made with a two-layer Kohonen network, a set of human head contour figures assigned into diverse categories is shown.
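
    The winner-take-all update at the heart of a single Kohonen layer can be sketched as follows (a generic competitive-learning step, not the multilayer architecture proposed in the paper):

```python
import numpy as np

# Minimal winner-take-all (Kohonen-style) update for one layer of prototype vectors.
def kohonen_step(prototypes, x, lr=0.1):
    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))  # closest prototype wins
    prototypes[winner] += lr * (x - prototypes[winner])          # move the winner toward the input
    return winner

rng = np.random.default_rng(0)
prototypes = rng.standard_normal((4, 2))                 # 4 prototype vectors in 2-D
for x in rng.standard_normal((200, 2)) + [2.0, 0.0]:     # stream of shifted input samples
    kohonen_step(prototypes, x)
print(prototypes)
```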

  4. Failure behavior identification for a space antenna via neural networks

    NASA Technical Reports Server (NTRS)

    Sartori, Michael A.; Antsaklis, Panos J.

    1992-01-01

    By using neural networks, a method for the failure behavior identification of a space antenna model is investigated. The proposed method uses three stages. If a fault is suspected by the first stage of fault detection, a diagnostic test is performed on the antenna. The diagnostic test results are used by the second and third stages to identify which fault occurred and to diagnose the extent of the fault, respectively. The first stage uses a multilayer perceptron, the second stage uses a multilayer perceptron and neural networks trained with the quadratic optimization algorithm, a novel training procedure, and the third stage uses backpropagation trained neural networks.

  5. Neural network tomography: network replication from output surface geometry.

    PubMed

    Minnett, Rupert C J; Smith, Andrew T; Lennon, William C; Hecht-Nielsen, Robert

    2011-06-01

    Multilayer perceptron networks whose outputs consist of affine combinations of hidden units using the tanh activation function are universal function approximators and are used for regression, typically by reducing the MSE with backpropagation. We present a neural network weight learning algorithm that directly positions the hidden units within input space by numerically analyzing the curvature of the output surface. Our results show that under some sampling requirements, this method can reliably recover the parameters of a neural network used to generate a data set. PMID:21377326

  6. Hyperbolic Hopfield neural networks.

    PubMed

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states. PMID:24808287

  7. Nested neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    Nested neural networks, consisting of small interconnected subnetworks, allow for the storage and retrieval of neural state patterns of different sizes. The subnetworks are naturally categorized by layers corresponding to spatial frequencies in the pattern field. The storage capacity and the error correction capability of the subnetworks generally increase with the degree of connectivity between layers (the nesting degree). Storage of only a few subpatterns in each subnetwork results in a vast storage capacity of patterns and subpatterns in the nested network, maintaining high stability and error correction capability.

  8. Optical-Correlator Neural Network Based On Neocognitron

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  9. Neural networks: a biased overview

    SciTech Connect

    Domany, E.

    1988-06-01

    An overview of recent activity in the field of neural networks is presented. The long-range aim of this research is to understand how the brain works. First some of the problems are stated and terminology defined; then an attempt is made to explain why physicists are drawn to the field, and their main potential contribution. In particular, in recent years some interesting models have been introduced by physicists. A small subset of these models is described, with particular emphasis on those that are analytically soluble. Finally a brief review of the history and recent developments of single- and multilayer perceptrons is given, bringing the situation up to date regarding the central immediate problem of the field: search for a learning algorithm that has an associated convergence theorem.

  10. Neural Networks and Micromechanics

    NASA Astrophysics Data System (ADS)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  11. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  12. Program PSNN (Plasma Spectroscopy Neural Network)

    SciTech Connect

    Morgan, W.L.; Larsen, J.T.

    1993-08-01

    This program uses the standard "delta rule" back-propagation supervised training algorithm for multi-layer neural networks. The inputs are line intensities in arbitrary units, which are then normalized within the program. The outputs are T_e (eV), N_e (cm^-3), and a fractional ionization, which in our testing using H- and He-like spectra, was N(He)/[N(H) + N(He)].

  13. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application- specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
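
    The conventional autoassociative baseline described above, in which each neuron thresholds the inner product of the network state with its weights, can be sketched as a Hopfield-style update (an illustrative sketch, not the proposed bit-weight nexus):

```python
import numpy as np

# Sketch of a conventional binary autoassociative network: store two +/-1 patterns,
# then recall one of them from a corrupted copy.
def store(patterns):
    """Hebbian-style weight matrix from a matrix of +/-1 pattern rows."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / float(n)
    np.fill_diagonal(W, 0.0)                        # no self-connections
    return W

def recall(W, state, steps=5):
    for _ in range(steps):
        state = np.where(W @ state >= 0.0, 1, -1)   # each neuron thresholds its inner product
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
W = store(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])              # corrupted copy of the first pattern
print(recall(W, noisy))
```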

  14. Web traffic prediction with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gluszek, Adam; Kekez, Michal; Rudzinski, Filip

    2005-02-01

    The main aim of the paper is to present an application of artificial neural networks to web traffic prediction. First, the general problem of time series modelling and forecasting is briefly described. Next, the details of building dynamic process models with neural networks are discussed. At this point, determination of the model structure in terms of its inputs and outputs is the most important question, because this structure is a rough approximation of the dynamics of the modelled process. The following section of the paper presents the results obtained by applying an artificial neural network (a classical multilayer perceptron trained with the backpropagation algorithm) to real-world web traffic prediction. Finally, we discuss the results, describe weak points of the presented method and propose some alternative approaches.

  15. Neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  16. Neural network construction via back-propagation

    SciTech Connect

    Burwick, T.T.

    1994-06-01

    A method is presented that combines back-propagation with multi-layer neural network construction. Back-propagation is used not only to adjust the weights but also the signal functions. Going from one network to an equivalent one that has additional linear units, the non-linearity of these units, and thus their effective presence, is then introduced via back-propagation (weight-splitting). The back-propagated error causes the network to include new units in order to minimize the error function. We also show how this formalism allows the network to escape local minima.

  17. Parallel processing neural networks

    SciTech Connect

    Zargham, M.

    1988-09-01

    A model for Neural Networks which is based on a particular kind of Petri Net has been introduced. The model has been implemented in C and runs on the Sequent Balance 8000 multiprocessor; however, it can be directly ported to different multiprocessor environments. The potential advantages of using Petri Nets include: (1) the overall system is often easier to understand due to the graphical and precise nature of the representation scheme, (2) the behavior of the system can be analyzed using Petri Net theory. Though the Petri Net is an obvious choice as a basis for the model, the basic Petri Net definition is not adequate to represent the neuronal system. To eliminate certain inadequacies, more information has been added to the Petri Net model. In the model, a token represents either a processor or a post-synaptic potential. Progress through a particular Neural Network is thus graphically depicted in the movement of the processor tokens through the Petri Net.

  18. An introduction to neural networks: A tutorial

    SciTech Connect

    Walker, J.L.; Hill, E.V.K.

    1994-12-31

    Neural networks are a powerful set of mathematical techniques used for solving linear and nonlinear classification and prediction (function approximation) problems. Inspired by studies of the brain, these series and parallel combinations of simple functional units called artificial neurons have the ability to learn or be trained to solve very complex problems. Fundamental aspects of artificial neurons are discussed, including their activation functions, their combination into multilayer feedforward networks with hidden layers, and the use of bias neurons to reduce training time. The back propagation (of errors) paradigm for supervised training of feedforward networks is explained. Then, the architecture and mathematics of a Kohonen self organizing map for unsupervised learning are discussed. Two example problems are given. The first is for the application of a back propagation neural network to learn the correct response to an input vector using supervised training. The second is a classification problem using a self organizing map and unsupervised training.
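
    The tutorial's unsupervised example, a Kohonen self-organizing map, can be sketched as follows. The grid size, the learning-rate and neighbourhood schedules, and the toy two-dimensional data are assumptions chosen only to show the winner-take-most update.

      import numpy as np

      # Minimal Kohonen self-organizing map for unsupervised clustering (illustrative
      # sketch; grid size, learning-rate and neighbourhood schedules are assumptions).

      rng = np.random.default_rng(2)
      # toy data: three Gaussian clusters in 2-D
      data = np.vstack([rng.normal(c, 0.1, (100, 2)) for c in ([0, 0], [1, 0], [0.5, 1])])

      GRID = 5                                          # 5 x 5 map
      codebook = rng.random((GRID * GRID, 2))           # one weight vector per map node
      coords = np.array([(i, j) for i in range(GRID) for j in range(GRID)], dtype=float)

      for step in range(3000):
          x = data[rng.integers(len(data))]
          bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))      # best-matching unit
          lr = 0.5 * np.exp(-step / 1000)                         # decaying learning rate
          sigma = 2.0 * np.exp(-step / 1000)                      # shrinking neighbourhood
          dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
          h = np.exp(-dist2 / (2 * sigma ** 2))                   # neighbourhood function
          codebook += lr * h[:, None] * (x - codebook)            # move nodes toward the sample

      # label each sample by its best-matching unit
      labels = np.array([np.argmin(((codebook - x) ** 2).sum(axis=1)) for x in data])
      print("map nodes actually used:", len(np.unique(labels)))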

  19. Neural networks for triggering

    SciTech Connect

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  20. Uniformly sparse neural networks

    NASA Astrophysics Data System (ADS)

    Haghighi, Siamack

    1992-07-01

    Application of neural networks to problems with a large number of sensory inputs is severely limited when the processing elements (PEs) need to be fully connected. This paper presents a new network model in which a trade off between the number of connections to a node and the number of processing layers can be made. This trade off is an important issue in the VLSI implementation of neural networks. The performance and capability of a hierarchical pyramidal network architecture of limited fan-in PE layers is analyzed. Analysis of this architecture requires the development of a new learning rule, since each PE has access to limited information about the entire network input. A spatially local unsupervised training rule is developed in which each PE optimizes the fraction of its output variance contributed by input correlations, resulting in PEs behaving as adaptive local correlation detectors. It is also shown that the output of a PE optimally represents the mutual information among the inputs to that PE. Applications of the developed model in image compression and motion detection are presented.
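
    The paper's spatially local training rule is not reproduced here. As a rough stand-in for a limited fan-in PE acting as an adaptive local correlation detector, the sketch below applies Oja's Hebbian rule, which drives a single unit's weights toward the dominant correlation direction of its local inputs; this substitution is an assumption made only for illustration.

      import numpy as np

      # Oja's rule as a stand-in illustration of a limited fan-in processing element
      # behaving as an adaptive local correlation detector (the paper's own training
      # rule differs; this substitution is an assumption for illustration).

      rng = np.random.default_rng(3)
      # correlated 4-input patch: inputs share a common latent source plus noise
      latent = rng.normal(size=(5000, 1))
      X = latent @ np.array([[1.0, 0.8, 0.6, 0.4]]) + 0.3 * rng.normal(size=(5000, 4))

      w = rng.normal(0, 0.1, 4)
      eta = 0.01
      for x in X:
          y = w @ x                        # PE output from its local fan-in only
          w += eta * y * (x - y * w)       # Oja update: Hebbian term with weight decay

      # compare with the true dominant eigenvector of the input covariance
      C = np.cov(X, rowvar=False)
      evec = np.linalg.eigh(C)[1][:, -1]
      cos = abs(w @ evec) / (np.linalg.norm(w) * np.linalg.norm(evec))
      print("alignment with dominant correlation direction:", round(float(cos), 3))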

  1. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  2. Training Feedforward Neural Networks: An Algorithm Giving Improved Generalization.

    PubMed

    Lee, Charles W.

    1997-01-01

    An algorithm is derived for supervised training in multilayer feedforward neural networks. Relative to the gradient descent backpropagation algorithm it appears to give both faster convergence and improved generalization, whilst preserving the system of backpropagating errors through the network. Copyright 1996 Elsevier Science Ltd. PMID:12662887

  3. Landslide susceptibility assessment in the Uttarakhand area (India) using GIS: a comparison study of prediction capability of naïve Bayes, multilayer perceptron neural networks, and functional trees methods

    NASA Astrophysics Data System (ADS)

    Pham, Binh Thai; Tien Bui, Dieu; Pourghasemi, Hamid Reza; Indra, Prakash; Dholakia, M. B.

    2015-12-01

    The objective of this study is to make a comparison of the prediction performance of three techniques, Functional Trees (FT), Multilayer Perceptron Neural Networks (MLP Neural Nets), and Naïve Bayes (NB), for landslide susceptibility assessment at the Uttarakhand Area (India). Firstly, a landslide inventory map with 430 landslide locations in the study area was constructed from various sources. Landslide locations were then randomly split into two parts: (i) 70 % of the landslide locations used for training the models, and (ii) 30 % employed for the validation process. Secondly, a total of eleven landslide conditioning factors including slope angle, slope aspect, elevation, curvature, lithology, soil, land cover, distance to roads, distance to lineaments, distance to rivers, and rainfall were used in the analysis to elucidate the spatial relationship between these factors and landslide occurrences. Feature selection with the Linear Support Vector Machine (LSVM) algorithm was employed to assess the prediction capability of these conditioning factors on landslide models. Subsequently, the NB, MLP Neural Nets, and FT models were constructed using the training dataset. Finally, success rate and predictive rate curves were employed to validate and compare the predictive capability of the three models. Overall, all three models performed very well for landslide susceptibility assessment. Out of these models, the MLP Neural Nets and the FT models had almost the same predictive capability, whereas the MLP Neural Nets (AUC = 0.850) was slightly better than the FT model (AUC = 0.849). The NB model (AUC = 0.838) had the lowest predictive capability compared to the other models. Landslide susceptibility maps were finally developed using these three models. These maps would be helpful to planners and engineers for development activities and land-use planning.
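
    The AUC values quoted above summarize ROC-type curves built from susceptibility scores and observed landslide labels. The sketch below computes such an area under the curve with the rank (Mann-Whitney) formulation; the synthetic scores and the assumed balanced validation set of 129 landslide and 129 non-landslide points stand in for the models' actual outputs.

      import numpy as np

      # Computing an AUC from susceptibility scores (synthetic scores and an assumed
      # balanced validation set stand in for the MLP / FT / NB model outputs).

      def roc_auc(scores, labels):
          """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
          order = np.argsort(scores)
          ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
          pos, neg = labels == 1, labels == 0
          n_pos, n_neg = pos.sum(), neg.sum()
          return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

      rng = np.random.default_rng(4)
      labels = np.r_[np.ones(129), np.zeros(129)]                    # 30 % of 430 locations, assumed balanced
      scores = np.r_[rng.normal(0.7, 0.15, 129), rng.normal(0.4, 0.15, 129)].clip(0, 1)
      print("AUC:", round(float(roc_auc(scores, labels)), 3))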

  4. Neural networks for self-learning control systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Derrick H.; Widrow, Bernard

    1990-01-01

    It is shown how a neural network can learn of its own accord to control a nonlinear dynamic system. An emulator, a multilayered neural network, learns to identify the system's dynamic characteristics. The controller, another multilayered neural network, next learns to control the emulator. The self-trained controller is then used to control the actual dynamic system. The learning process continues as the emulator and controller improve and track the physical process. An example is given to illustrate these ideas. The 'truck backer-upper,' a neural network controller that steers a trailer truck while the truck is backing up to a loading dock, is demonstrated. The controller is able to guide the truck to the dock from almost any initial position. The technique explored should be applicable to a wide variety of nonlinear control problems.

  5. Forecasting PM10 in Algiers: efficacy of multilayer perceptron networks.

    PubMed

    Abderrahim, Hamza; Chellali, Mohammed Reda; Hamou, Ahmed

    2016-01-01

    Air quality forecasting has acquired high importance in atmospheric pollution due to its negative impacts on the environment and human health. The artificial neural network is one of the most common soft computing methods that can be applied to such complex problems. In this paper, we used a multilayer perceptron neural network to forecast the daily averaged concentration of the respirable suspended particulates with aerodynamic diameter of not more than 10 μm (PM10) in Algiers, Algeria. The data for training and testing the network are based on the data sampled from 2002 to 2006 collected by the SAMASAFIA network center at the El Hamma station. The meteorological data, air temperature, relative humidity, and wind speed, are used as input parameters in the formation of the model. The training patterns used correspond to 41 days of data. The performance of the developed models was evaluated on the basis of the index of agreement and other statistical parameters. It was seen that the overall performance of the model with 15 neurons is better than the ones with 5 and 10 neurons. The results of the multilayer network with as few as one hidden layer and 15 neurons were more reasonable than the ones with 5 and 10 neurons. Finally, an error of around 9% was reached. PMID:26381787
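
    The index of agreement used for evaluation is commonly Willmott's d statistic; assuming that reading, the sketch below computes it alongside the RMSE, with synthetic observed and predicted PM10 values standing in for the station data.

      import numpy as np

      # Index of agreement (Willmott's d) and RMSE for forecast evaluation.  The
      # identification with Willmott's statistic and the synthetic PM10 values are
      # assumptions; the paper's evaluation data are not reproduced.

      def index_of_agreement(obs, pred):
          o_bar = obs.mean()
          num = np.sum((pred - obs) ** 2)
          den = np.sum((np.abs(pred - o_bar) + np.abs(obs - o_bar)) ** 2)
          return 1.0 - num / den

      def rmse(obs, pred):
          return float(np.sqrt(np.mean((pred - obs) ** 2)))

      rng = np.random.default_rng(5)
      obs = rng.uniform(20, 120, 41)                  # 41 daily PM10 observations (ug/m3), synthetic
      pred = obs * (1 + rng.normal(0, 0.09, 41))      # predictions with roughly 9 % relative error
      print("index of agreement:", round(index_of_agreement(obs, pred), 3))
      print("RMSE:", round(rmse(obs, pred), 2), "ug/m3")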

  6. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  7. Neural network method for characterizing video cameras

    NASA Astrophysics Data System (ADS)

    Zhou, Shuangquan; Zhao, Dazun

    1998-08-01

    This paper presents a neural network method for characterizing a color video camera. A multilayer feedforward network with the error back-propagation learning rule for training is used as a nonlinear transformer to model a camera, realizing a mapping from the CIELAB color space to RGB color space. With a SONY video camera, D65 illuminant, Pritchard Spectroradiometer, 410 JIS color charts as training data and 36 charts as testing data, results show that the mean error of the training data is 2.9 and that of the testing data is 4.0 in a 256³ RGB space.

  8. AUTOMATED DEFECT CLASSIFICATION USING AN ARTIFICIAL NEURAL NETWORK

    SciTech Connect

    Chady, T.; Caryk, M.; Piekarczyk, B.

    2009-03-03

    The automated defect classification algorithm based on an artificial neural network with a multilayer backpropagation structure was utilized. The selected features of flaws were used as input data. In order to train the neural network it is necessary to prepare learning data, which is a representative database of defects. Database preparation requires the following steps: image acquisition and pre-processing, image enhancement, defect detection and feature extraction. Real digital radiographs of welded parts of a ship were used for this purpose.

  9. Automated Defect Classification Using AN Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Chady, T.; Caryk, M.; Piekarczyk, B.

    2009-03-01

    The automated defect classification algorithm based on an artificial neural network with a multilayer backpropagation structure was utilized. The selected features of flaws were used as input data. In order to train the neural network it is necessary to prepare learning data, which is a representative database of defects. Database preparation requires the following steps: image acquisition and pre-processing, image enhancement, defect detection and feature extraction. Real digital radiographs of welded parts of a ship were used for this purpose.

  10. Classification of Magneto-Optic Images using Neural Networks

    NASA Technical Reports Server (NTRS)

    Nath, Shridhar; Wincheski, Buzz; Fulton, Jim; Namkung, Min

    1994-01-01

    A real time imaging system with a neural network classifier has been incorporated on a Macintosh computer in conjunction with an MOI system. This system images rivets on aircraft aluminium structures using eddy currents and magnetic imaging. Moment invariant functions from the image of a rivet are used to train a multilayer perceptron neural network to classify the rivets as good or bad (rivets with cracks).

  11. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.

  12. Application of adaptive boosting to EP-derived multilayer feed-forward neural networks (MLFN) to improve benign/malignant breast cancer classification

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Masters, Timothy D.; Lo, Joseph Y.; McKee, Dan

    2001-07-01

    A new neural network technology was developed for improving the benign/malignant diagnosis of breast cancer using mammogram findings. A new paradigm, Adaptive Boosting (AB), uses a markedly different theory for solving Computational Intelligence (CI) problems. AB, a new machine learning paradigm, focuses on finding weak learning algorithm(s) that initially need to provide slightly better than random performance (i.e., approximately 55%) when processing a mammogram training set. Then, by successive development of additional architectures (using the mammogram training set), the adaptive boosting process improves the performance of the basic Evolutionary Programming derived neural network architectures. The results of these several EP-derived hybrid architectures are then intelligently combined and tested using a similar validation mammogram data set. Optimization focused on improving specificity and positive predictive value at very high sensitivities, where an analysis of the performance of the hybrid would be most meaningful. Using the DUKE mammogram database of 500 biopsy proven samples, on average this hybrid was able to achieve (under statistical 5-fold cross-validation) a specificity of 48.3% and a positive predictive value (PPV) of 51.8% while maintaining 100% sensitivity. At 97% sensitivity, a specificity of 56.6% and a PPV of 55.8% were obtained.
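
    The adaptive boosting idea itself, training successive weak learners on reweighted data and combining them by a weighted vote, can be sketched independently of the EP-derived networks used in the paper. In the illustration below, decision stumps on synthetic two-class data stand in for the neural network weak learners; both choices are assumptions.

      import numpy as np

      # AdaBoost sketch: weak learners (decision stumps here, standing in for the
      # paper's EP-derived networks) are trained on reweighted data and combined by
      # a weighted vote.  Data and learner choice are illustrative assumptions.

      rng = np.random.default_rng(6)
      n = 400
      X = rng.normal(size=(n, 5))
      y = np.where(X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5, 1, -1)       # labels in {-1, +1}

      def fit_stump(X, y, w):
          """Pick the single-feature threshold that minimises the weighted error."""
          best = (None, None, None, np.inf)                          # feature, threshold, sign, error
          for j in range(X.shape[1]):
              for thr in np.quantile(X[:, j], np.linspace(0.05, 0.95, 19)):
                  for sign in (1, -1):
                      pred = sign * np.where(X[:, j] > thr, 1, -1)
                      err = w[pred != y].sum()
                      if err < best[3]:
                          best = (j, thr, sign, err)
          return best

      w = np.full(n, 1.0 / n)
      stumps, alphas = [], []
      for _ in range(20):
          j, thr, sign, err = fit_stump(X, y, w)
          err = max(err, 1e-10)
          alpha = 0.5 * np.log((1 - err) / err)                      # learner weight
          pred = sign * np.where(X[:, j] > thr, 1, -1)
          w *= np.exp(-alpha * y * pred)                             # upweight misclassified cases
          w /= w.sum()
          stumps.append((j, thr, sign)); alphas.append(alpha)

      F = sum(a * s * np.where(X[:, j] > t, 1, -1)
              for a, (j, t, s) in zip(alphas, stumps))
      print("training accuracy of boosted ensemble:", round(float((np.sign(F) == y).mean()), 3))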

  13. Optimization of a multilayer neural network by using minimal redundancy maximal relevance-partial mutual information clustering with least square regression.

    PubMed

    Chen, Chao; Yan, Xuefeng

    2015-06-01

    In this paper, an optimized multilayer feed-forward network (MLFN) is developed to construct a soft sensor for controlling naphtha dry point. To overcome the two main flaws in the structure and weight of MLFNs, which are trained by a back-propagation learning algorithm, minimal redundancy maximal relevance-partial mutual information clustering (mPMIc) integrated with least square regression (LSR) is proposed to optimize the MLFN. The mPMIc can determine the location of hidden layer nodes using information in the hidden and output layers, as well as remove redundant hidden layer nodes. These selected nodes are highly related to output data, but are minimally correlated with other hidden layer nodes. The weights between the selected hidden layer nodes and output layer are then updated through LSR. When the redundant nodes from the hidden layer are removed, the ideal MLFN structure can be obtained according to the test error results. In actual applications, the naphtha dry point must be controlled accurately because it strongly affects the production yield and the stability of subsequent operational processes. The mPMIc-LSR MLFN with a simple network size performs better than other improved MLFN variants and existing efficient models. PMID:25055386
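
    The LSR step of the method, refitting the weights between the retained hidden nodes and the output layer by least squares, can be sketched on its own; the mPMIc node selection itself is not reproduced. The randomly initialized hidden layer and the indices of the kept nodes below are assumptions made for illustration.

      import numpy as np

      # Least-squares refit of hidden-to-output weights after pruning hidden nodes
      # (the mPMIc selection is not reproduced; the kept-node indices and the
      # randomly initialised hidden layer are illustrative assumptions).

      rng = np.random.default_rng(7)
      X = rng.normal(size=(300, 6))                                # process inputs
      y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=300)     # soft-sensor target

      W1 = rng.normal(0, 0.5, (6, 20)); b1 = rng.normal(0, 0.1, 20)
      H = np.tanh(X @ W1 + b1)                                     # activations of 20 hidden nodes

      kept = [0, 3, 5, 7, 11, 14]                                  # nodes retained after selection (assumed)
      Hk = np.c_[H[:, kept], np.ones(len(X))]                      # add a bias column
      w_out, *_ = np.linalg.lstsq(Hk, y, rcond=None)               # least-squares output weights

      pred = Hk @ w_out
      print("RMSE after least-squares refit:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 4))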

  14. a Heterosynaptic Learning Rule for Neural Networks

    NASA Astrophysics Data System (ADS)

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects further, more remote synapses of the pre- and postsynaptic neuron. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.

  15. Evolutionary games on multilayer networks: a colloquium

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Wang, Lin; Szolnoki, Attila; Perc, Matjaž

    2015-05-01

    Networks form the backbone of many complex systems, ranging from the Internet to human societies. Accordingly, not only is the range of our interactions limited and thus best described and modeled by networks, it is also a fact that the networks that are an integral part of such models are often interdependent or even interconnected. Networks of networks or multilayer networks are therefore a more apt description of social systems. This colloquium is devoted to evolutionary games on multilayer networks, and in particular to the evolution of cooperation as one of the main pillars of modern human societies. We first give an overview of the most significant conceptual differences between single-layer and multilayer networks, and we provide basic definitions and a classification of the most commonly used terms. Subsequently, we review fascinating and counterintuitive evolutionary outcomes that emerge due to different types of interdependencies between otherwise independent populations. The focus is on coupling through the utilities of players, through the flow of information, as well as through the popularity of different strategies on different network layers. The colloquium highlights the importance of pattern formation and collective behavior for the promotion of cooperation under adverse conditions, as well as the synergies between network science and evolutionary game theory.

  16. Model neural networks

    SciTech Connect

    Kepler, T.B.

    1989-01-01

    After a brief introduction to the techniques and philosophy of neural network modeling by spin-glass-inspired systems, the author investigates several properties of these discrete models for autoassociative memory. Memories are represented as patterns of neural activity; their traces are stored in a distributed manner in the matrix of synaptic coupling strengths. Recall is dynamic: an initial state containing partial information about one of the memories evolves toward that memory. Activity in each neuron creates fields at every other neuron, the sum total of which determines its activity. By averaging over the space of interaction matrices with memory constraints enforced by the choice of measure, he shows that there exist universality classes defined by families of field distributions and the associated network capacities. He demonstrates the dominant role played by the field distribution in determining the size of the domains of attraction and presents, in two independent ways, an expression for this size. He presents a class of convergent learning algorithms which improve upon known algorithms for producing such interaction matrices. He demonstrates that spurious states, or unexperienced memories, may be practically suppressed by the inducement of n-cycles and chaos. He investigates aspects of chaos in these systems, and then leaves discrete modeling to implement the analysis of chaotic behavior on a continuous-valued network realized in electronic hardware. In each section he combines analytical calculations and computer simulations.

  17. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.

  18. A multi-layer network approach to MEG connectivity analysis

    PubMed Central

    Brookes, Matthew J.; Tewarie, Prejaas K.; Hunt, Benjamin A.E.; Robson, Sian E.; Gascoyne, Lauren E.; Liddle, Elizabeth B.; Liddle, Peter F.; Morris, Peter G.

    2016-01-01

    Recent years have shown the critical importance of inter-regional neural network connectivity in supporting healthy brain function. Such connectivity is measurable using neuroimaging techniques such as MEG; however, the richness of the electrophysiological signal makes gaining a complete picture challenging. Specifically, connectivity can be calculated as statistical interdependencies between neural oscillations within a large range of different frequency bands. Further, connectivity can be computed between frequency bands. This pan-spectral network hierarchy likely helps to mediate simultaneous formation of multiple brain networks, which support ongoing task demand. However, to date it has been largely overlooked, with many electrophysiological functional connectivity studies treating individual frequency bands in isolation. Here, we combine oscillatory envelope based functional connectivity metrics with a multi-layer network framework in order to derive a more complete picture of connectivity within and between frequencies. We test this methodology using MEG data recorded during a visuomotor task, highlighting simultaneous and transient formation of motor networks in the beta band, visual networks in the gamma band and a beta to gamma interaction. Having tested our method, we use it to demonstrate differences in occipital alpha band connectivity in patients with schizophrenia compared to healthy controls. We further show that these connectivity differences are predictive of the severity of persistent symptoms of the disease, highlighting their clinical relevance. Our findings demonstrate the unique potential of MEG to characterise neural network formation and dissolution. Further, we add weight to the argument that dysconnectivity is a core feature of the neuropathology underlying schizophrenia. PMID:26908313

  19. A multi-layer network approach to MEG connectivity analysis.

    PubMed

    Brookes, Matthew J; Tewarie, Prejaas K; Hunt, Benjamin A E; Robson, Sian E; Gascoyne, Lauren E; Liddle, Elizabeth B; Liddle, Peter F; Morris, Peter G

    2016-05-15

    Recent years have shown the critical importance of inter-regional neural network connectivity in supporting healthy brain function. Such connectivity is measurable using neuroimaging techniques such as MEG; however, the richness of the electrophysiological signal makes gaining a complete picture challenging. Specifically, connectivity can be calculated as statistical interdependencies between neural oscillations within a large range of different frequency bands. Further, connectivity can be computed between frequency bands. This pan-spectral network hierarchy likely helps to mediate simultaneous formation of multiple brain networks, which support ongoing task demand. However, to date it has been largely overlooked, with many electrophysiological functional connectivity studies treating individual frequency bands in isolation. Here, we combine oscillatory envelope based functional connectivity metrics with a multi-layer network framework in order to derive a more complete picture of connectivity within and between frequencies. We test this methodology using MEG data recorded during a visuomotor task, highlighting simultaneous and transient formation of motor networks in the beta band, visual networks in the gamma band and a beta to gamma interaction. Having tested our method, we use it to demonstrate differences in occipital alpha band connectivity in patients with schizophrenia compared to healthy controls. We further show that these connectivity differences are predictive of the severity of persistent symptoms of the disease, highlighting their clinical relevance. Our findings demonstrate the unique potential of MEG to characterise neural network formation and dissolution. Further, we add weight to the argument that dysconnectivity is a core feature of the neuropathology underlying schizophrenia. PMID:26908313

  20. Inverse kinematics problem in robotics using neural networks

    NASA Technical Reports Server (NTRS)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector position and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
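
    The study uses a 3-DOF spatial manipulator; the sketch below reduces this to a 2-DOF planar arm so that the forward kinematics fit in a few lines, and trains a small MLP on end-effector-position to joint-angle pairs. The link lengths, angle ranges, network size and plain batch backpropagation are all illustrative assumptions.

      import numpy as np

      # Inverse kinematics of a 2-DOF planar arm learned by a small MLP (the paper
      # uses a 3-DOF spatial manipulator; link lengths, angle ranges, network size
      # and the plain-backpropagation training below are illustrative assumptions).

      rng = np.random.default_rng(8)
      L1, L2 = 1.0, 0.7

      # training data: joint angles -> end-effector position (elbow-down branch only)
      theta = np.c_[rng.uniform(-1.5, 1.5, 4000), rng.uniform(0.3, 2.8, 4000)]
      x = L1 * np.cos(theta[:, 0]) + L2 * np.cos(theta[:, 0] + theta[:, 1])
      y = L1 * np.sin(theta[:, 0]) + L2 * np.sin(theta[:, 0] + theta[:, 1])
      P = np.c_[x, y]                        # network input: end-effector position
      Q = theta / np.pi                      # network target: scaled joint angles

      H = 30
      W1 = rng.normal(0, 0.3, (2, H)); b1 = np.zeros(H)
      W2 = rng.normal(0, 0.3, (H, 2)); b2 = np.zeros(2)
      lr = 0.05

      for epoch in range(4000):
          h = np.tanh(P @ W1 + b1)
          out = h @ W2 + b2
          err = out - Q
          gW2 = h.T @ err / len(P);  gb2 = err.mean(axis=0)
          dh = (err @ W2.T) * (1 - h ** 2)
          gW1 = P.T @ dh / len(P);   gb1 = dh.mean(axis=0)
          W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

      print("mean joint-angle error (rad):", round(float(np.abs((out - Q) * np.pi).mean()), 3))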

  1. The structure and dynamics of multilayer networks

    NASA Astrophysics Data System (ADS)

    Boccaletti, S.; Bianconi, G.; Criado, R.; del Genio, C. I.; Gómez-Gardeñes, J.; Romance, M.; Sendiña-Nadal, I.; Wang, Z.; Zanin, M.

    2014-11-01

    In the past years, network theory has successfully characterized the interaction among the constituents of a variety of complex systems, ranging from biological to technological, and social systems. However, up until recently, attention was almost exclusively given to networks in which all components were treated on equivalent footing, while neglecting all the extra information about the temporal- or context-related properties of the interactions under study. Only in the last years, taking advantage of the enhanced resolution in real data sets, network scientists have directed their interest to the multiplex character of real-world systems, and explicitly considered the time-varying and multilayer nature of networks. We offer here a comprehensive review on both structural and dynamical organization of graphs made of diverse relationships (layers) between its constituents, and cover several relevant issues, from a full redefinition of the basic structural measures, to understanding how the multilayer nature of the network affects processes and dynamics.

  2. Interacting neural networks.

    PubMed

    Metzler, R; Kinzel, W; Kanter, I

    2000-08-01

    Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron which is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as the decision-making algorithm in a model of a closed market (the El Farol Bar problem or the Minority Game, in which a set of agents each have to make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random. PMID:11088736

  3. Dynamic interactions in neural networks

    SciTech Connect

    Arbib, M.A.; Amari, S.

    1989-01-01

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  4. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  5. Neural Networks for the Beginner.

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  6. Privacy-preserving backpropagation neural network learning.

    PubMed

    Chen, Tingting; Zhong, Sheng

    2009-10-01

    With the development of distributed computing environments, many learning problems now have to deal with distributed input data. To enhance cooperation in learning, it is important to address the privacy concern of each data holder by extending the privacy preservation notion to original learning algorithms. In this paper, we focus on preserving the privacy in an important learning model, multilayer neural networks. We present a privacy-preserving two-party distributed algorithm of backpropagation which allows a neural network to be trained without requiring either party to reveal her data to the other. We provide complete correctness and security analysis of our algorithms. The effectiveness of our algorithms is verified by experiments on various real world data sets. PMID:19709975

  7. Random walk centrality in interconnected multilayer networks

    NASA Astrophysics Data System (ADS)

    Solé-Ribalta, Albert; De Domenico, Manlio; Gómez, Sergio; Arenas, Alex

    2016-06-01

    Real-world complex systems exhibit multiple levels of relationships. In many cases they need to be modeled as interconnected multilayer networks, characterizing interactions of several types simultaneously. It is of crucial importance in many fields, from economics to biology and from urban planning to social sciences, to identify the most (or the least) influential nodes in a network using centrality measures. However, defining the centrality of actors in interconnected complex networks is not trivial. In this paper, we rely on the tensorial formalism recently proposed to characterize and investigate this kind of complex topologies, and extend two well-known random walk centrality measures, the random walk betweenness and closeness centrality, to interconnected multilayer networks. For each of the measures we provide analytical expressions that completely agree with numerical results.
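
    The paper's tensorial betweenness and closeness measures are not reproduced here. As a simpler illustration of a random walk on an interconnected multilayer network, the sketch below builds the supra-adjacency matrix of a two-layer network and computes each node's occupation centrality as the stationary distribution of the walk aggregated over that node's layer copies; the small example networks and the coupling strength are assumptions.

      import numpy as np

      # Random walk on an interconnected two-layer network via the supra-adjacency
      # matrix.  The occupation centrality computed here (stationary distribution of
      # the walk, summed over a node's layer copies) is a simpler illustration, not
      # the betweenness/closeness measures derived in the paper.

      N = 5
      A1 = np.array([[0, 1, 1, 0, 0],        # layer 1 adjacency
                     [1, 0, 1, 0, 0],
                     [1, 1, 0, 1, 0],
                     [0, 0, 1, 0, 1],
                     [0, 0, 0, 1, 0]], float)
      A2 = np.array([[0, 0, 0, 0, 1],        # layer 2 adjacency (a different interaction type)
                     [0, 0, 1, 1, 0],
                     [0, 1, 0, 0, 0],
                     [0, 1, 0, 0, 1],
                     [1, 0, 0, 1, 0]], float)

      omega = 1.0                                          # inter-layer coupling strength
      S = np.block([[A1, omega * np.eye(N)],
                    [omega * np.eye(N), A2]])              # supra-adjacency matrix

      T = S / S.sum(axis=1, keepdims=True)                 # row-stochastic transition matrix
      p = np.full(2 * N, 1.0 / (2 * N))
      for _ in range(2000):                                # power iteration to the stationary state
          p = p @ T

      occupation = p[:N] + p[N:]                           # aggregate each node's two layer copies
      print("occupation centrality per node:", np.round(occupation, 3))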

  8. Neural Network Development Tool (NETS)

    NASA Technical Reports Server (NTRS)

    Baffes, Paul T.

    1990-01-01

    Artificial neural networks formed from hundreds or thousands of simulated neurons, connected in manner similar to that in human brain. Such network models learning behavior. Using NETS involves translating problem to be solved into input/output pairs, designing network configuration, and training network. Written in C.

  9. Color control of printers by neural networks

    NASA Astrophysics Data System (ADS)

    Tominaga, Shoji

    1998-07-01

    A method is proposed for solving the mapping problem from the 3D color space to the 4D CMYK space of printer ink signals by means of a neural network. The CIE-L*a*b* color system is used as the device-independent color space. The color reproduction problem is considered as the problem of controlling an unknown static system with four inputs and three outputs. A controller determines the CMYK signals necessary to produce the desired L*a*b* values with a given printer. Our solution method for this control problem is based on a two-phase procedure which eliminates the need for UCR and GCR. The first phase determines a neural network as a model of the given printer, and the second phase determines the combined neural network system by combining the printer model and the controller in such a way that it represents an identity mapping in the L*a*b* color space. Then the network of the controller part realizes the mapping from the L*a*b* space to the CMYK space. Practical algorithms are presented in the form of multilayer feedforward networks. The feasibility of the proposed method is shown in experiments using a dye sublimation printer and an ink jet printer.

  10. Neural network classifier of attacks in IP telephony

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and hardening of the network. This analysis is typically based on statistical methods, and the article brings a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is the multilayer perceptron neural network. The article describes the inner structure of the neural network used and also information about the implementation of this network. The learning set for this neural network is based on real attack data collected from the IP telephony honeypot called Dionaea. We prepare the learning set from real attack data after collecting, cleaning and aggregating this information. After proper learning, the neural network is capable of classifying 6 types of the most commonly used VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks, which are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precaution steps against attacks.

  11. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for using flow visualization data.

  12. Deinterlacing using modular neural network

    NASA Astrophysics Data System (ADS)

    Woo, Dong H.; Eom, Il K.; Kim, Yoo S.

    2004-05-01

    Deinterlacing is the conversion process from interlaced scan to progressive scan. While many previous algorithms based on weighted sums cause blurring in edge regions, deinterlacing using a neural network can reduce the blurring by recovering high frequency components through the learning process, and is found to be robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. Through this process, each neural network learns only patterns that are similar, which makes learning more effective and estimation more accurate. But even within each region there are various patterns, such as long edges and texture in the edge region. To solve this problem, a modular neural network is proposed. In the proposed modular neural network, two modules are combined in the output node. One is for the low frequency features of the local area of the input image, and the other is for the high frequency features. With this structure, each modular neural network can learn different patterns while compensating for the drawbacks of its counterpart. Therefore it can adapt to various patterns within each region effectively. In simulation, the proposed algorithm shows better performance compared with conventional deinterlacing methods and the single neural network method.

  13. Computational capabilities of recurrent NARX neural networks.

    PubMed

    Siegelmann, H T; Horne, B G; Giles, C L

    1997-01-01

    Recently, fully connected recurrent neural networks have been proven to be computationally rich-at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t)=Psi(u(t-n(u)), ..., u(t-1), u(t), y(t-n(y)), ..., y(t-1)) where u(t) and y(t) represent input and output of the network at time t, n(u) and n(y) are the input and output order, and the function Psi is the mapping performed by a Multilayer Perceptron. We constructively prove that the NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use the NARX models, rather than conventional recurrent networks without any computational loss even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power. PMID:18255858
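
    The NARX formalization above can be written as a closed-loop recursion in which a multilayer perceptron Psi maps delayed inputs and delayed outputs to the next output. The sketch below only shows the structure of that recursion: the tiny network is untrained (random weights) and the orders n_u = n_y = 2 are assumptions.

      import numpy as np

      # NARX recursion y(t) = Psi(u(t-2), u(t-1), u(t), y(t-2), y(t-1)) with a small
      # MLP as Psi.  The network here is untrained (random weights) and the orders
      # n_u = n_y = 2 are assumptions; the point is the limited, output-only feedback.

      rng = np.random.default_rng(9)

      class MLP:
          """One-hidden-layer perceptron used as the NARX mapping Psi."""
          def __init__(self, n_in, n_hidden):
              self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
              self.b1 = np.zeros(n_hidden)
              self.W2 = rng.normal(0, 0.5, n_hidden)
              self.b2 = 0.0
          def __call__(self, z):
              return float(np.tanh(z @ self.W1 + self.b1) @ self.W2 + self.b2)

      n_u, n_y = 2, 2
      psi = MLP(n_in=(n_u + 1) + n_y, n_hidden=8)

      u = np.sin(0.3 * np.arange(50))                 # exogenous input sequence
      y = np.zeros(50)                                # network output, fed back with delays
      for t in range(max(n_u, n_y), 50):
          z = np.r_[u[t - n_u:t + 1], y[t - n_y:t]]   # delayed inputs + delayed outputs only
          y[t] = psi(z)

      print("first outputs of the NARX loop:", np.round(y[:8], 3))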

  14. Auto-clustering of mugshots using multilayer Kohonen network

    NASA Astrophysics Data System (ADS)

    Liu, Chao-yuan; Li, Jie-Gu

    1995-03-01

    This paper proposes a multi-layer neural network system to classify police mugshots according to the contours of the heads. In order to efficiently acquire enough information from the mugshots, an interactive algorithm performing image pre-processing, including segmentation and curve fitting, is presented, by which the contours of the human heads are extracted. From the contours obtained, a set of feature vectors consisting of 16 normalized measures is gathered. Since the feature vectors are distributed in a non-linearly separable way in Hilbert space, a two-layer Kohonen network is implemented to cluster these vectors. It has been demonstrated that the multi-layer Kohonen network can perform non-linear partitions, so it has more powerful pattern separability than the conventional Kohonen network. It is also shown that a two-layer Kohonen network is sufficient for the non-linear partition problem at hand. About 100 samples of mugshots are involved in the research, and the results are given.

  15. Correcting wave predictions with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Makarynskyy, O.; Makarynska, D.

    2003-04-01

    The predictions of wind waves with different lead times are necessary in a large scope of coastal and open ocean activities. Numerical wave models, which usually provide this information, are based on deterministic equations that do not entirely account for the complexity and uncertainty of the wave generation and dissipation processes. An attempt to improve wave parameters short-term forecasts based on artificial neural networks is reported. In recent years, artificial neural networks have been used in a number of coastal engineering applications due to their ability to approximate nonlinear mathematical behavior without a priori knowledge of interrelations among the elements within a system. Common multilayer feed-forward networks, with nonlinear transfer functions in the hidden layers, were developed and employed to forecast the wave characteristics over one hour intervals starting from one up to 24 hours, and to correct these predictions. Three non-overlapping data sets of wave characteristics, all from a buoy moored roughly 60 miles west of the Aran Islands, west coast of Ireland, were used to train and validate the neural nets involved. The networks were trained with the error back-propagation algorithm. Time series plots and scatterplots of the wave characteristics, as well as tables with statistics, show an improvement of the results achieved due to the correction procedure employed.

  16. Inversion of surface parameters using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Olvera, J.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A neural network approach to the inversion of surface scattering parameters is presented. Simulated data sets based on a surface scattering model are used so that the data may be viewed as taken from a completely known randomly rough surface. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) are tested on the simulated backscattering data. The RMS error of training the FL network is found to be less than one half the error of the BP network while requiring one to two orders of magnitude less CPU time. When applied to inversion of parameters from a statistically rough surface, the FL method is successful at recovering the surface permittivity, the surface correlation length, and the RMS surface height in less time and with less error than the BP network. Further applications of the FL neural network to the inversion of parameters from backscatter measurements of an inhomogeneous layer above a half space are shown.

  17. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.

  18. Neural Networks for Readability Analysis.

    ERIC Educational Resources Information Center

    McEneaney, John E.

    This paper describes and reports on the performance of six related artificial neural networks that have been developed for the purpose of readability analysis. Two networks employ counts of linguistic variables that simulate a traditional regression-based approach to readability. The remaining networks determine readability from "visual snapshots"…

  19. Estimation of bullet striation similarity using neural networks.

    PubMed

    Banno, Atsuhiko

    2004-05-01

    A new method that searches for similar striation patterns using neural networks is described. Neural networks have been developed based on the human brain, which is good at pattern recognition. Therefore, neural networks would be expected to be effective in identifying striated toolmarks on bullets. The neural networks used in this study deal with binary signals derived from striation images. This signal plays a significant role in identification, because it is the key to the individuality of the striations. The neural network searches a database for similar striations by means of these binary signals. The neural network used here is a multilayer network consisting of 96 neurons in the input layer, 15 neurons in the middle, and one neuron in the output layer. Two signals are input into the network and a score is estimated based on the similarity of these signals. For this purpose, the network undergoes a prior learning phase. To initially test the validity of the procedure, the network identifies artificial patterns that are randomly produced on a personal computer. The results were acceptable and showed robustness to the deformation of patterns. Moreover, with ten unidentified bullets and ten database bullets, the network consistently was able to select the correct pair. PMID:15171166

  20. Multilayer network decoding versatility and trust

    NASA Astrophysics Data System (ADS)

    Sarkar, Camellia; Yadav, Alok; Jalan, Sarika

    2016-01-01

    In recent years, multilayer networks have increasingly been realized as a more realistic framework for understanding emergent physical phenomena in complex real-world systems. We analyze massive time-varying social data drawn from the largest film industry of the world under a multilayer network framework. The framework enables us to evaluate the versatility of actors, which turns out to be an intrinsic property of lead actors. Versatility in dimers suggests that working with different types of nodes is more beneficial than working with similar ones. However, the triangles yield a different relation between the type of co-actor and the success of lead nodes, indicating the importance of higher-order motifs in understanding the properties of the underlying system. Furthermore, despite the degree-degree correlations of the entire networks being neutral, multilayering picks up different values of correlation, indicating positive connotations like trust, in recent years. The analysis of weak ties of the industry uncovers nodes from a lower-degree regime being important in linking Bollywood clusters. The framework and the tools used herein may be used for unraveling the complexity of other real-world systems.

  1. Neural Networks Of VLSI Components

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1991-01-01

    Concept for design of electronic neural network calls for assembly of very-large-scale integrated (VLSI) circuits of few standard types. Each VLSI chip, which contains both analog and digital circuitry, used in modular or "building-block" fashion by interconnecting it in any of variety of ways with other chips. Feedforward neural network in typical situation operates under control of host computer and receives inputs from, and sends outputs to, other equipment.

  2. Correlational Neural Networks.

    PubMed

    Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman

    2016-02-01

    Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches. PMID:26654210

  3. Sea ice classification using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.

  4. Neural-Network-Development Program

    NASA Technical Reports Server (NTRS)

    Phillips, Todd A.

    1993-01-01

    NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.

  5. Financial Time Series Prediction Using Spiking Neural Networks

    PubMed Central

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two “traditional”, rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  6. Financial time series prediction using spiking neural networks.

    PubMed

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  7. Shale Gas reservoirs characterization using neural network

    NASA Astrophysics Data System (ADS)

    Ouadfeul, Sid-Ali; Aliouane, Leila

    2014-05-01

    In this paper, an attempt to enhance the characterization of shale gas reservoirs from well-log data using a neural network is presented. The goal is to predict the Total Organic Carbon (TOC) in boreholes where neither TOC core-rock nor TOC well-log measurements exist. A Multilayer Perceptron (MLP) neural network with three layers is established. The MLP input layer is constituted of five neurons corresponding to the bulk density, neutron porosity, sonic P wave slowness and photoelectric absorption coefficient. The hidden layer is formed of nine neurons and the output layer is formed of one neuron corresponding to the TOC log. Application to two boreholes located in the Barnett shale formation, where well A is used as a pilot and well B is used for propagation, clearly shows the efficiency of the neural network method in improving shale gas reservoir characterization. The established formalism plays an important role in the economics of shale gas plays and long-term gas energy production.

  8. File access prediction using neural networks.

    PubMed

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap between memory and disk access times. To address this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors based on neural networks that, with proper tuning, significantly improve the accuracy, success-per-reference, and effective-success-rate-per-reference. In particular, we verified that incorrect predictions were reduced from 53.11% to 43.63% for the proposed neural network prediction method with a standard configuration, compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to improve upon the misprediction rate and effective-success-rate-per-reference obtained with the standard configuration. Simulations on distributed file system (DFS) traces reveal that an exact-fit radial basis function (RBF) network gives better predictions on high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation performs best on systems with good computational capability. Probabilistic and competitive predictors are the most suitable for workstations with limited resources, and the former is more efficient than the latter for servers handling the largest numbers of system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file prediction success rate than the simple perceptron, last successor, stable successor, and best-k-out-of-m predictors. PMID:20421183

  9. Measure of Node Similarity in Multilayer Networks

    PubMed Central

    Mollgaard, Anders; Zettler, Ingo; Dammeyer, Jesper; Jensen, Mogens H.; Lehmann, Sune; Mathiesen, Joachim

    2016-01-01

    The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data on face-to-face contacts. We find that even strongly connected individuals are not more similar with respect to basic personality traits than randomly chosen pairs of individuals. In contrast, several socio-demographic variables have a significant degree of similarity. We further observe that similarity might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dissimilarity for the nodes connected by the strongest links. We finally analyze the overlap between layers in the network for different levels of acquaintanceships. PMID:27300084

  10. Measure of Node Similarity in Multilayer Networks.

    PubMed

    Mollgaard, Anders; Zettler, Ingo; Dammeyer, Jesper; Jensen, Mogens H; Lehmann, Sune; Mathiesen, Joachim

    2016-01-01

    The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data on face-to-face contacts. We find that even strongly connected individuals are not more similar with respect to basic personality traits than randomly chosen pairs of individuals. In contrast, several socio-demographic variables have a significant degree of similarity. We further observe that similarity might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dissimilarity for the nodes connected by the strongest links. We finally analyze the overlap between layers in the network for different levels of acquaintanceships. PMID:27300084

  11. The use of artificial neural networks for residential buildings conceptual cost estimation

    NASA Astrophysics Data System (ADS)

    Juszczyk, Michał

    2013-10-01

    Accurate cost estimation in the early phase of a building's design process is of key importance for a project's success. Both underestimation and overestimation may lead to project failure in terms of costs. The paper synthetically presents research results on the use of neural networks for conceptual cost estimation of residential buildings. In the course of the research the author focused on regression models binding together the basic information about residential buildings available in the early stage of design and the construction cost. The application of different neural network types was analysed (multilayer perceptron, multilayer perceptron with data compression based on principal component analysis, and radial basis function networks). According to the research results, multilayer perceptron networks proved to be the best neural network type for this problem. The results indicate that a neural approach may be an interesting alternative to traditional methods of conceptual cost estimation in construction projects.
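
    One of the variants examined above compresses the inputs with principal component analysis before the multilayer perceptron. A hedged sketch of such a pipeline, using scikit-learn and invented building descriptors in place of the (unspecified) design-stage attributes, could be:

        # Illustrative PCA + MLP regression pipeline for conceptual cost estimation.
        # Feature names and data are assumptions; only the structure (compression
        # followed by a multilayer perceptron) reflects the abstract.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        X = rng.uniform(size=(120, 8))       # e.g. floor area, storeys, volume, ...
        cost = 1000 * X[:, 0] + 300 * X[:, 1] + rng.normal(scale=20, size=120)

        model = make_pipeline(
            StandardScaler(),
            PCA(n_components=4),             # compress correlated design descriptors
            MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1),
        )
        model.fit(X, cost)
        print(model.predict(X[:3]))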

  12. Neural network models for a resource allocation problem.

    PubMed

    Walczak, S

    1998-01-01

    University admissions and business personnel offices use a limited number of resources to process an ever-increasing quantity of student and employment applications. Application systems are further constrained to identify and acquire, in a limited time period, those candidates who are most likely to accept an offer of enrolment or employment. Neural networks are a new methodology in this particular domain. Various neural network architectures and learning algorithms are analyzed comparatively to determine the applicability of supervised learning neural networks to the problem of personnel resource allocation and to identify optimal learning strategies in this domain. This paper focuses on multilayer perceptron backpropagation, radial basis function, counterpropagation, general regression, fuzzy ARTMAP, and linear vector quantization neural networks. Each neural network predicts the probability of enrolment and non-enrolment for individual student applicants. Backpropagation networks produced the best overall performance. Network performance is measured by the reduction in counsellors' student case loads and the corresponding increase in student enrolment. The backpropagation neural networks achieve a 56% reduction in counsellor case load. PMID:18255946

  13. Competitive epidemic spreading over arbitrary multilayer networks

    NASA Astrophysics Data System (ADS)

    Darabi Sahneh, Faryad; Scoglio, Caterina

    2014-06-01

    This study extends the Susceptible-Infected-Susceptible (SIS) epidemic model for single-virus propagation over an arbitrary graph to a Susceptible-Infected by virus 1-Susceptible-Infected by virus 2-Susceptible (SI1SI2S) epidemic model of two exclusive, competitive viruses over a two-layer network with generic structure, where the network layers represent the distinct transmission routes of the viruses. We find analytical expressions determining extinction, coexistence, and absolute dominance of the viruses after introducing the concepts of survival threshold and absolute-dominance threshold. The main outcome of our analysis is the discovery and proof of a region for long-term coexistence of competitive viruses in nontrivial multilayer networks. We show coexistence is impossible if the network layers are identical, yet possible if the network layers are distinct. Not only do we rigorously prove a region of coexistence, but we can also quantify it via the interrelation of central nodes across the network layers. Little to no overlap of the layers' central nodes is the key determinant of coexistence. For example, we show both analytically and numerically that positive correlation of network layers makes it difficult for a virus to survive, while in a network with negatively correlated layers survival is easier, but total removal of the other virus is more difficult.
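
    To make the competitive two-layer setting concrete, a toy discrete-time simulation of two mutually exclusive SIS-type viruses, each spreading over its own layer, is sketched below; the network sizes, rates and tie-breaking rule are assumptions, and the analytical thresholds of the paper are not reproduced.

        # Toy discrete-time simulation of two exclusive SIS viruses competing over
        # a two-layer network (one adjacency matrix per transmission route).
        # Parameters and network structure are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(2)
        N = 200
        A1 = (rng.random((N, N)) < 0.03).astype(float)    # layer for virus 1
        A2 = (rng.random((N, N)) < 0.03).astype(float)    # layer for virus 2
        A1 = np.triu(A1, 1); A1 += A1.T                    # make layers undirected
        A2 = np.triu(A2, 1); A2 += A2.T

        beta1, beta2, delta = 0.08, 0.06, 0.3              # infection and recovery rates
        state = np.zeros(N, dtype=int)                     # 0=S, 1=infected by virus 1, 2=virus 2
        state[rng.choice(N, 10, replace=False)] = 1
        state[rng.choice(np.where(state == 0)[0], 10, replace=False)] = 2

        for t in range(200):
            inf1 = (state == 1).astype(float)
            inf2 = (state == 2).astype(float)
            p1 = 1 - (1 - beta1) ** (A1 @ inf1)            # per-node infection pressure, layer 1
            p2 = 1 - (1 - beta2) ** (A2 @ inf2)            # per-node infection pressure, layer 2
            sus = state == 0
            catch1 = sus & (rng.random(N) < p1)
            catch2 = sus & (rng.random(N) < p2) & ~catch1  # exclusivity: virus 1 wins ties here
            recover = (state > 0) & (rng.random(N) < delta)
            state[recover] = 0
            state[catch1] = 1
            state[catch2] = 2

        print("virus 1 prevalence:", (state == 1).mean(), "virus 2 prevalence:", (state == 2).mean())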

  14. Multilayer Network Analysis of Nuclear Reactions.

    PubMed

    Zhu, Liang; Ma, Yu-Gang; Chen, Qu; Han, Ding-Ding

    2016-01-01

    The nuclear reaction network is usually studied via precise calculation of differential equation sets, and much research interest has been focused on the characteristics of nuclides, such as half-life and size limit. In this paper, however, we adopt the methods from both multilayer and reaction networks, and obtain a distinctive view by mapping all the nuclear reactions in JINA REACLIB database into a directed network with 4 layers: neutron, proton, (4)He and the remainder. The layer names correspond to reaction types decided by the currency particles consumed. This combined approach reveals that, in the remainder layer, the β-stability has high correlation with node degree difference and overlapping coefficient. Moreover, when reaction rates are considered as node strength, we find that, at lower temperatures, nuclide half-life scales reciprocally with its out-strength. The connection between physical properties and topological characteristics may help to explore the boundary of the nuclide chart. PMID:27558995
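
    The mapping of reactions onto a layered, directed network can be illustrated with a few hand-written reactions and the networkx library, as sketched below; the reaction list and layer handling are illustrative assumptions, and the JINA REACLIB parsing is not shown.

        # Sketch of mapping reactions onto a directed, layered network as in the
        # abstract: each edge is tagged with the layer named after the light
        # "currency" particle consumed (n, p, 4He, or remainder). The reaction
        # list here is a tiny hand-made illustration, not the JINA REACLIB data.
        import networkx as nx

        reactions = [
            ("c12", "n13", "p"),           # 12C(p,g)13N      -> proton layer
            ("n14", "o15", "p"),           # 14N(p,g)15O      -> proton layer
            ("fe56", "fe57", "n"),         # neutron capture  -> neutron layer
            ("c12", "o16", "4He"),         # 12C(a,g)16O      -> 4He layer
            ("ne20", "mg24", "4He"),
            ("c12", "mg24", "remainder"),  # heavy-ion fusion -> remainder layer
        ]

        G = nx.MultiDiGraph()
        for source, target, layer in reactions:
            G.add_edge(source, target, layer=layer)

        # Per-layer out-degree, the kind of topological quantity the study correlates
        # with nuclide properties such as beta-stability.
        for layer in ("n", "p", "4He", "remainder"):
            sub = nx.MultiDiGraph()
            sub.add_edges_from((u, v, d) for u, v, d in G.edges(data=True) if d["layer"] == layer)
            print(layer, dict(sub.out_degree()))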

  15. Multilayer Network Analysis of Nuclear Reactions

    PubMed Central

    Zhu, Liang; Ma, Yu-Gang; Chen, Qu; Han, Ding-Ding

    2016-01-01

    The nuclear reaction network is usually studied via precise calculation of differential equation sets, and much research interest has been focused on the characteristics of nuclides, such as half-life and size limit. In this paper, however, we adopt the methods from both multilayer and reaction networks, and obtain a distinctive view by mapping all the nuclear reactions in JINA REACLIB database into a directed network with 4 layers: neutron, proton, 4He and the remainder. The layer names correspond to reaction types decided by the currency particles consumed. This combined approach reveals that, in the remainder layer, the β-stability has high correlation with node degree difference and overlapping coefficient. Moreover, when reaction rates are considered as node strength, we find that, at lower temperatures, nuclide half-life scales reciprocally with its out-strength. The connection between physical properties and topological characteristics may help to explore the boundary of the nuclide chart. PMID:27558995

  16. The Effect of Network Parameters on Pi-Sigma Neural Network for Temperature Forecasting

    NASA Astrophysics Data System (ADS)

    Husaini, Noor Aida; Ghazali, Rozaida; Nawi, Nazri Mohd; Ismail, Lokman Hakim

    In this paper, we present the effect of network parameters on forecasting the temperature of a suburban area in Batu Pahat, Johor. Neural networks are commonly used to predict temperature, as they are for most meteorological parameters. However, researchers frequently neglect the network parameters, which can affect a neural network's performance. Therefore, this study explores the effect of network parameters using a Pi-Sigma Neural Network (PSNN) with the backpropagation algorithm. The network's performance is evaluated on the historical temperature dataset of Batu Pahat for one-step-ahead forecasting and benchmarked against a Multilayer Perceptron (MLP) for comparison. We found that network parameters significantly affect the performance of PSNN for temperature forecasting. Towards the end of this paper, we conclude with the best forecasting model to predict the temperature based on the comparison in our study.
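
    For readers unfamiliar with the architecture, a minimal sketch of a single Pi-Sigma unit (a sigmoid applied to the product of several linear summing units) is given below; the weight values, input size and network order are invented for illustration.

        # Minimal sketch of a Pi-Sigma unit: the output is a sigmoid applied to the
        # product of K linear "summing" units. Weights and data are invented; only
        # the product-of-sums structure follows the PSNN architecture.
        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def pi_sigma_forward(x, W, b):
            """x: (d,) input; W: (K, d) summing-unit weights; b: (K,) biases."""
            sums = W @ x + b                  # K linear (sigma) units
            return sigmoid(np.prod(sums))     # pi unit: product of the sums, then squashed

        rng = np.random.default_rng(3)
        d, K = 4, 3                           # e.g. 4 lagged temperature readings, order-3 network
        W = rng.normal(scale=0.1, size=(K, d))
        b = np.zeros(K)
        x = rng.normal(size=d)
        print(pi_sigma_forward(x, W, b))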

  17. Automatic Analysis of Radio Meteor Events Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Roman, Victor Ştefan; Buiu, Cătălin

    2015-12-01

    Meteor Scanning Algorithms (MESCAL) is a software application for automatic meteor detection from radio recordings, which uses self-organizing maps and feedforward multi-layered perceptrons. This paper aims to present the theoretical concepts behind this application and the main features of MESCAL, showcasing how radio recordings are handled, prepared for analysis, and used to train the aforementioned neural networks. The neural networks trained using MESCAL allow for valuable detection results, such as high correct detection rates and low false-positive rates, and at the same time offer new possibilities for improving the results.

  18. Automatic Analysis of Radio Meteor Events Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Roman, Victor Ştefan; Buiu, Cătălin

    2015-07-01

    Meteor Scanning Algorithms (MESCAL) is a software application for automatic meteor detection from radio recordings, which uses self-organizing maps and feedforward multi-layered perceptrons. This paper aims to present the theoretical concepts behind this application and the main features of MESCAL, showcasing how radio recordings are handled, prepared for analysis, and used to train the aforementioned neural networks. The neural networks trained using MESCAL allow for valuable detection results, such as high correct detection rates and low false-positive rates, and at the same time offer new possibilities for improving the results.

  19. Multiprocessor Neural Network in Healthcare.

    PubMed

    Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes

    2015-01-01

    A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and large number of I/O ports make them well suited for creating such a system. During our research, we wanted to see if it is possible to efficiently create interaction between an artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D transformation, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG, EEG, etc. PMID:26152990

  20. Neural network ultrasound image analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Alexander C.; Brown, David G.; Pastel, Mary S.

    1993-09-01

    Neural network based analysis of ultrasound image data was carried out on liver scans of normal subjects and those diagnosed with diffuse liver disease. In a previous study, ultrasound images from a group of normal volunteers, Gaucher's disease patients, and hepatitis patients were obtained by Garra et al., who used classical statistical methods to distinguish among these three classes. In the present work, neural network classifiers were employed with the same image features found useful in the previous study for this task. Both standard backpropagation neural networks and a recently developed biologically inspired network called Dystal were used. Classification performance, as measured by the area under a receiver operating characteristic curve, was generally excellent for the backpropagation networks and was roughly comparable to that of the classical statistical discriminators tested on the same data set and documented in the earlier study. Performance of the Dystal network was significantly inferior; however, this may be due to the choice of network parameters. Potential methods for enhancing network performance were identified.

  1. Long-term multilayer adherent network (MAN) expansion, maintenance, and characterization, chemical and genetic manipulation, and transplantation of human fetal forebrain neural stem cells.

    PubMed

    Wakeman, Dustin R; Hofmann, Martin R; Redmond, D Eugene; Teng, Yang D; Snyder, Evan Y

    2009-05-01

    Human neural stem/precursor cells (hNSC/hNPC) have been targeted for application in a variety of research models and as prospective candidates for cell-based therapeutic modalities in central nervous system (CNS) disorders. To this end, the successful derivation, expansion, and sustained maintenance of undifferentiated hNSC/hNPC in vitro, as artificial expandable neurogenic micro-niches, promises a diversity of applications as well as future potential for a variety of experimental paradigms modeling early human neurogenesis, neuronal migration, and neurogenetic disorders, and could also serve as a platform for small-molecule drug screening in the CNS. Furthermore, hNPC transplants provide an alternative substrate for cellular regeneration and restoration of damaged tissue in neurodegenerative disorders such as Parkinson's disease and Alzheimer's disease. Human somatic neural stem/progenitor cells (NSC/NPC) have been derived from a variety of cadaveric sources and proven engraftable in a cytoarchitecturally appropriate manner into the developing and adult rodent and monkey brain while maintaining both functional and migratory capabilities in pathological models of disease. In the following unit, we describe a new procedure that we have successfully employed to maintain operationally defined human somatic NSC/NPC from developing fetal, pre-term post-natal, and adult cadaveric forebrain. Specifically, we outline the detailed methodology for in vitro expansion, long-term maintenance, manipulation, and transplantation of these multipotent precursors. PMID:19455542

  2. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  3. Centroid calculation using neural networks

    NASA Astrophysics Data System (ADS)

    Himes, Glenn S.; Inigo, Rafael M.

    1992-01-01

    Centroid calculation provides a means of eliminating translation problems, which is useful for automatic target recognition. A neural network implementation of centroid calculation is described that uses a spatial filter and a Hopfield network to determine the centroid location of an object. Spatial filtering of a segmented window creates a result whose peak value occurs at the centroid of the input data set. A Hopfield network then finds the location of this peak and hence gives the location of the centroid. Hardware implementations of the networks are described and simulation results are provided.
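
    The quantity computed by the network above, the centroid of a segmented window, can be illustrated directly with image moments; the sketch below shows only this underlying calculation, not the spatial-filter and Hopfield peak-finding implementation of the record.

        # Direct moment-based centroid of a segmented (binary) window. This is the
        # quantity the cited network computes; the spatial-filter + Hopfield
        # implementation itself is not reproduced here.
        import numpy as np

        window = np.zeros((8, 8))
        window[2:5, 3:7] = 1.0                     # a small segmented "target"

        total = window.sum()
        rows, cols = np.indices(window.shape)
        centroid_row = (rows * window).sum() / total
        centroid_col = (cols * window).sum() / total
        print(centroid_row, centroid_col)          # -> 3.0 4.5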

  4. Neural Networks for Flight Control

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  5. Using Hybrid Algorithm to Improve Intrusion Detection in Multi Layer Feed Forward Neural Networks

    ERIC Educational Resources Information Center

    Ray, Loye Lynn

    2014-01-01

    The need for detecting malicious behavior on a computer networks continued to be important to maintaining a safe and secure environment. The purpose of this study was to determine the relationship of multilayer feed forward neural network architecture to the ability of detecting abnormal behavior in networks. This involved building, training, and…

  6. Neural networks and applications tutorial

    NASA Astrophysics Data System (ADS)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems, and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances in computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation has spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers) and parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real-world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  7. Artificial neural networks in medicine

    SciTech Connect

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  8. Neural networks for handwriting recognition

    NASA Astrophysics Data System (ADS)

    Kelly, David A.

    1992-09-01

    The market for a product that can read handwritten forms, such as insurance applications, re-order forms, or checks, is enormous. Companies could save millions of dollars each year if they had an effective and efficient way to read handwritten forms into a computer without human intervention. Urged on by the potential gold mine that an adequate solution would yield, a number of companies and researchers have developed, and are developing, neural network-based solutions to this long-standing problem. This paper briefly outlines the current state-of-the-art in neural network-based handwriting recognition research and products. The first section of the paper examines the potential market for this technology. The next section outlines the steps in the recognition process, followed by a number of the basic issues that need to be dealt with to solve the recognition problem in a real-world setting. Next, an overview of current commercial solutions and research projects shows the different ways that neural networks are applied to the problem. This is followed by a breakdown of the current commercial market and the future outlook for neural network-based handwriting recognition technology.

  9. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  10. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of a suitably designed neural network, believed to be related to the spontaneity and creativity of biological neural networks.

  11. Parameter incremental learning algorithm for neural networks.

    PubMed

    Wan, Sheng; Banta, Larry E

    2006-11-01

    In this paper, a novel stochastic (or online) training algorithm for neural networks, named parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all the three benchmark problems used in this paper the PIL algorithm for MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of the convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as the BP algorithm, and as easy to use as the BP algorithm. It, therefore, can be applied, with better performance, to any situations where the standard online BP algorithm is applicable. PMID:17131658
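
    The abstract characterizes the PIL update as a first-order solution to a trade-off between adapting to the newly presented pattern and preserving the previous parameters. One plausible toy reading of such a trade-off, for a single linear neuron, is the damped minimum-norm correction sketched below; this is an illustrative assumption, not the published PIL algorithm.

        # Hedged illustration of an "adapt while preserving" update for one linear
        # neuron: among all weight changes that remove the current pattern's error
        # to first order, take the smallest one (with a damping term lam).
        # Only a plausible reading of the preservation/adaptation trade-off,
        # not the algorithm of the cited paper.
        import numpy as np

        def pil_like_step(w, x, target, lam=0.1):
            y = w @ x                              # current output
            e = target - y                         # error on the newly presented pattern
            # minimal-norm correction: dw proportional to x, scaled to cancel e
            return w + (e / (x @ x + lam)) * x

        rng = np.random.default_rng(4)
        w = rng.normal(size=3)
        for _ in range(100):
            x = rng.normal(size=3)
            t = np.array([1.0, -2.0, 0.5]) @ x     # hypothetical target mapping
            w = pil_like_step(w, x, t)
        print(w)                                   # converges near [1, -2, 0.5]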

  12. Prospecting droughts with stochastic artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ochoa-Rivera, Juan Camilo

    2008-04-01

    A non-linear multivariate model based on an artificial neural network multilayer perceptron is presented, which includes a random component. The developed model is applied to generate monthly streamflows, which are used to obtain synthetic annual droughts. The calibration of the model was undertaken using monthly streamflow records from several geographical sites of a basin. The model calibration consisted of training the neural network with the error back-propagation learning algorithm and adding normally distributed random noise. The model was validated by comparing relevant statistics of the synthetic streamflow series to those of the historical records. Annual droughts were calculated from the generated streamflow series, and then the expected values of length, intensity and magnitude of the droughts were assessed. An exercise on an identical basis was made applying a second-order autoregressive multivariate model, AR(2), to compare its results with those of the developed model. The proposed model outperforms the AR(2) model in reproducing the future drought scenarios.

  13. Neural networks: A versatile tool from artificial intelligence

    SciTech Connect

    Yama, B.R.; Lineberry, G.T.

    1996-12-31

    Artificial Intelligence research has produced several tools for commercial application in recent years. Artificial Neural Networks (ANNs), Fuzzy Logic, and Expert Systems are some of the techniques that are widely used today in various fields of engineering and business. Among these techniques, ANNs are gaining popularity due to their learning and other brain-like capabilities. Within the mining industry, ANN technology is being utilized with large payoffs for real-time process control applications. In this paper, a brief introduction to ANNs and the associated terminology is given. The neural network development process is outlined, followed by the back-propagation learning algorithm. Next, the development of two multi-layer, feed-forward neural networks is described and the results are presented. One network is developed for prediction of the strength of intact rock specimens, and another network is developed for prediction of mineral concentrations. Preliminary results indicate a predictive error of less than 10% using cross-validation on a limited data set. The performance of the neural network for prediction of mineral concentrations was compared with kriging. It was found that the neural network not only performed satisfactorily, but in some cases performed better than the kriging model.

  14. On degree-degree correlations in multilayer networks

    NASA Astrophysics Data System (ADS)

    de Arruda, Guilherme Ferraz; Cozzo, Emanuele; Moreno, Yamir; Rodrigues, Francisco A.

    2016-06-01

    We propose a generalization of the concept of assortativity based on the tensorial representation of multilayer networks, covering the definitions given in terms of Pearson and Spearman coefficients. Our approach can also be applied to weighted networks and provides information about correlations considering pairs of layers. By analyzing the multilayer representation of the airport transportation network, we show that contrasting results are obtained when the layers are analyzed independently or as an interconnected system. Finally, we study the impact of the level of assortativity and heterogeneity between layers on the spreading of diseases. Our results highlight the need to study degree-degree correlations on multilayer systems, instead of on aggregated networks.
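
    A basic ingredient of the correlations discussed above can be illustrated by correlating each node's degrees across two layers; the sketch below uses random placeholder layers and the Pearson and Spearman coefficients, without reproducing the paper's tensorial formulation.

        # Basic ingredient of multilayer assortativity: Pearson and Spearman
        # correlation between a node's degree in layer 1 and its degree in layer 2.
        # The two random layers below are placeholders for a real multilayer data set.
        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        rng = np.random.default_rng(5)
        N = 300
        A1 = rng.random((N, N)) < 0.05
        A2 = rng.random((N, N)) < 0.05
        A1 = np.triu(A1, 1); A1 = A1 | A1.T        # symmetrize layer 1
        A2 = np.triu(A2, 1); A2 = A2 | A2.T        # symmetrize layer 2

        k1 = A1.sum(axis=1)                        # degree of each node in layer 1
        k2 = A2.sum(axis=1)                        # degree of the same node in layer 2
        print("Pearson :", pearsonr(k1, k2)[0])
        print("Spearman:", spearmanr(k1, k2)[0])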

  15. A spiking neural network architecture for nonlinear function approximation.

    PubMed

    Iannella, N; Back, A D

    2001-01-01

    Multilayer perceptrons have received much attention in recent years due to their universal approximation capabilities. Normally, such models use real valued continuous signals, although they are loosely based on biological neuronal networks that encode signals using spike trains. Spiking neural networks are of interest both from a biological point of view and in terms of a method of robust signaling in particularly noisy or difficult environments. It is important to consider networks based on spike trains. A basic question that needs to be considered however, is what type of architecture can be used to provide universal function approximation capabilities in spiking networks? In this paper, we propose a spiking neural network architecture using both integrate-and-fire units as well as delays, that is capable of approximating a real valued function mapping to within a specified degree of accuracy. PMID:11665783
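
    A minimal leaky integrate-and-fire unit, the building block named in the abstract, is sketched below; the parameters are generic textbook values, and the delay lines and full function-approximation architecture are not reproduced.

        # Minimal leaky integrate-and-fire (LIF) unit driven by an input current.
        # Parameters are generic illustrative values; the delay lines and the full
        # function-approximation architecture of the cited paper are not shown.
        import numpy as np

        def lif_spike_train(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
            """Simulate a LIF membrane and return spike times (in steps)."""
            v = v_rest
            spikes = []
            for t, i_in in enumerate(current):
                v += dt / tau * (-(v - v_rest) + i_in)   # leaky integration
                if v >= v_thresh:                        # threshold crossing -> spike
                    spikes.append(t)
                    v = v_reset                          # reset after firing
            return spikes

        current = np.concatenate([np.zeros(50), 1.5 * np.ones(150)])  # step input
        print(lif_spike_train(current))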

  16. Overview of artificial neural networks.

    PubMed

    Zou, Jinming; Han, Yi; So, Sung-Sau

    2008-01-01

    The artificial neural network (ANN), or simply neural network, is a machine learning method evolved from the idea of simulating the human brain. The data explosion in modern drug discovery research requires sophisticated analysis methods to uncover the hidden causal relationships between single or multiple responses and a large set of properties. The ANN is one of many versatile tools to meet the demand in drug discovery modeling. Compared to a traditional regression approach, the ANN is capable of modeling complex nonlinear relationships. The ANN also has excellent fault tolerance and is fast and highly scalable with parallel processing. This chapter introduces the background of ANN development and outlines the basic concepts crucially important for understanding more sophisticated ANNs. Several commonly used learning methods and network setups are discussed briefly at the end of the chapter. PMID:19065803

  17. Neural Networks For Visual Telephony

    NASA Astrophysics Data System (ADS)

    Gottlieb, A. M.; Alspector, J.; Huang, P.; Hsing, T. R.

    1988-10-01

    By considering how an image is processed by the eye and brain, we may find ways to simplify the task of transmitting complex video images over a telecommunication channel. Just as the retina and visual cortex reduce the amount of information sent to other areas of the brain, electronic systems can be designed to compress visual data, encode features, and adapt to new scenes for video transmission. In this talk, we describe a system inspired by models of neural computation that may, in the future, augment standard digital processing techniques for image compression. In the next few years it is expected that a compact low-cost full motion video telephone operating over an ISDN basic access line (144 KBits/sec) will be shown to be feasible. These systems will likely be based on a standard digital signal processing approach. In this talk, we discuss an alternative method that does not use standard digital signal processing but instead uses electronic neural networks to realize the large compression necessary for a low bit-rate video telephone. This neural network approach is not being advocated as a near-term solution for visual telephony. However, low bit-rate visual telephony is an area where neural network technology may, in the future, find a significant application.

  18. Syntactic neural network for character recognition

    NASA Astrophysics Data System (ADS)

    Jaravine, Viktor A.

    1992-08-01

    This article presents a synergism of syntactic 2-D parsing of images and multilayered, feed-forward network techniques. This approach makes it possible to build a written-text reading system with an absolute recognition rate for unambiguous text strings. The Syntactic Neural Network (SNN) is created during the image parsing process by capturing the higher-order statistical structure in the ensemble of input image examples. Acquired knowledge is stored in the form of a hierarchical image-element dictionary and a syntactic network. The number of hidden layers and neuron units is not fixed and is determined by the structural complexity of the teaching set. The proposed syntactic neuron differs from a conventional numerical neuron by its symbolic input/output and its use of the dictionary for determining the output. This approach guarantees exact recognition of an image that is a combinatorial variation of the images from the training set. The system is taught to generalize and to make stochastic parses of distorted and shifted patterns. This generalization enables the system to perform continuous incremental optimization of its work. New image data learned by the SNN doesn't interfere with previously stored knowledge, thus leading to unlimited storage capacity of the network.

  19. An optimization methodology for neural network weights and architectures.

    PubMed

    Ludermir, Teresa B; Yamazaki, Akio; Zanchettin, Cleber

    2006-11-01

    This paper introduces a methodology for neural network global optimization. The aim is the simultaneous optimization of multilayer perceptron (MLP) network weights and architectures, in order to generate topologies with few connections and high classification performance for any data set. The approach combines the advantages of simulated annealing, tabu search and the backpropagation training algorithm in order to generate an automatic process for producing networks with high classification performance and low complexity. Experimental results obtained with four classification problems and one prediction problem have shown the approach to be better than the most commonly used optimization techniques. PMID:17131660
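
    The simulated-annealing ingredient of such a scheme can be sketched as below: the weights and a connectivity mask of a tiny MLP are perturbed jointly and accepted by the Metropolis rule; the tabu list and backpropagation fine-tuning of the cited methodology are omitted, and the data, sizes and cooling schedule are assumptions.

        # Sketch of the simulated-annealing ingredient: jointly perturb the weights
        # and the connectivity mask of a tiny one-hidden-layer MLP, accepting moves
        # by the Metropolis rule. Tabu search and final backpropagation fine-tuning
        # are omitted; data and sizes are invented.
        import numpy as np

        rng = np.random.default_rng(6)
        X = rng.normal(size=(100, 4))
        y = (X[:, 0] + X[:, 1] > 0).astype(float)          # toy binary problem

        def forward(X, W1, W2, mask1, mask2):
            h = np.tanh(X @ (W1 * mask1))
            return 1 / (1 + np.exp(-h @ (W2 * mask2)))

        def cost(W1, W2, mask1, mask2, penalty=0.01):
            err = np.mean((forward(X, W1, W2, mask1, mask2) - y) ** 2)
            return err + penalty * (mask1.sum() + mask2.sum())   # favour few connections

        W1, W2 = rng.normal(size=(4, 6)), rng.normal(size=6)
        mask1, mask2 = np.ones((4, 6)), np.ones(6)
        cur = cost(W1, W2, mask1, mask2)
        T = 1.0
        for step in range(2000):
            nW1 = W1 + rng.normal(scale=0.1, size=W1.shape)
            nW2 = W2 + rng.normal(scale=0.1, size=W2.shape)
            nm1, nm2 = mask1.copy(), mask2.copy()
            if rng.random() < 0.2:                         # occasionally toggle an input-to-hidden link
                i, j = rng.integers(4), rng.integers(6)
                nm1[i, j] = 1 - nm1[i, j]
            c = cost(nW1, nW2, nm1, nm2)
            if c < cur or rng.random() < np.exp((cur - c) / T):  # Metropolis acceptance
                W1, W2, mask1, mask2, cur = nW1, nW2, nm1, nm2, c
            T *= 0.999                                     # cooling schedule
        print("final cost:", cur, "active connections:", int(mask1.sum() + mask2.sum()))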

  20. Reducing the dimensionality of data with neural networks.

    PubMed

    Hinton, G E; Salakhutdinov, R R

    2006-07-28

    High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data. PMID:16873662
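
    A minimal autoencoder with a small central code layer, trained end-to-end on synthetic data, is sketched below in PyTorch; the layer-wise pretraining that the paper uses to initialize the weights is not reproduced, and the layer sizes are arbitrary.

        # Minimal deep autoencoder with a small central code layer, trained by plain
        # gradient descent on synthetic data. The pretraining-based weight
        # initialization described in the record is not reproduced here.
        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        X = torch.randn(512, 2) @ torch.randn(2, 30)      # 30-D data lying near a 2-D subspace

        model = nn.Sequential(
            nn.Linear(30, 16), nn.ReLU(),
            nn.Linear(16, 2),                             # small central code layer
            nn.Linear(2, 16), nn.ReLU(),
            nn.Linear(16, 30),
        )
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for epoch in range(200):
            recon = model(X)
            loss = ((recon - X) ** 2).mean()              # reconstruct the inputs
            opt.zero_grad(); loss.backward(); opt.step()
        print("reconstruction MSE:", loss.item())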

  1. Ranking in interconnected multilayer networks reveals versatile nodes

    NASA Astrophysics Data System (ADS)

    de Domenico, Manlio; Solé-Ribalta, Albert; Omodei, Elisa; Gómez, Sergio; Arenas, Alex

    2015-04-01

    The determination of the most central agents in complex networks is important because they are responsible for a faster propagation of information, epidemics, failures and congestion, among others. A challenging problem is to identify them in networked systems characterized by different types of interactions, forming interconnected multilayer networks. Here we describe a mathematical framework that allows us to calculate centrality in such networks and rank nodes accordingly, finding the ones that play the most central roles in the cohesion of the whole structure, bridging together different types of relations. These nodes are the most versatile in the multilayer network. We investigate empirical interconnected multilayer networks and show that the approaches based on aggregating--or neglecting--the multilayer structure lead to a wrong identification of the most versatile nodes, overestimating the importance of more marginal agents and demonstrating the power of versatility in predicting their role in diffusive and congestion processes.
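
    One common way to rank nodes without aggregating the layers, sketched below, is to build a supra-adjacency matrix with inter-layer couplings and aggregate its leading eigenvector per node; this is a hedged stand-in for, not a reproduction of, the tensorial centrality framework of the paper.

        # Hedged sketch: build a supra-adjacency matrix from two layers of the same
        # node set, couple each node's replicas across layers, and use the leading
        # eigenvector as a crude multilayer centrality, summed over layers per node.
        import numpy as np

        rng = np.random.default_rng(7)
        N, omega = 100, 1.0                               # nodes per layer, inter-layer coupling
        A1 = np.triu(rng.random((N, N)) < 0.06, 1); A1 = (A1 | A1.T).astype(float)
        A2 = np.triu(rng.random((N, N)) < 0.06, 1); A2 = (A2 | A2.T).astype(float)

        supra = np.block([[A1, omega * np.eye(N)],
                          [omega * np.eye(N), A2]])       # 2N x 2N supra-adjacency

        vals, vecs = np.linalg.eigh(supra)
        v = np.abs(vecs[:, -1])                           # leading eigenvector
        versatility = v[:N] + v[N:]                       # aggregate the two replicas of each node
        print("top 5 versatile nodes:", np.argsort(versatility)[-5:][::-1])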

  2. Validation and regulation of medical neural networks.

    PubMed

    Rodvold, D M

    2001-01-01

    Using artificial neural networks (ANNs) in medical applications can be challenging because of the often-experimental nature of ANN construction and the "black box" label that is frequently attached to them. In the US, medical neural networks are regulated by the Food and Drug Administration. This article briefly discusses the documented FDA policy on neural networks and the various levels of formal acceptance that neural network development groups might pursue. To assist medical neural network developers in creating robust and verifiable software, this paper provides a development process model targeted specifically to ANNs for critical applications. PMID:11790274

  3. Standard Cell-Based Implementation of a Digital Optoelectronic Neural-Network Hardware

    NASA Astrophysics Data System (ADS)

    Maier, Klaus D.; Beckstein, Clemens; Blickhan, Reinhard; Erhard, Werner

    2001-03-01

    A standard cell-based implementation of a digital optoelectronic neural-network architecture is presented. The overall structure of the multilayer perceptron network that was used, the optoelectronic interconnection system between the layers, and all components required in each layer are defined. The design process, from VHDL-based modeling through synthesis and partly automatic placing and routing to the final editing of one layer of the multilayer perceptron circuit, is described. A suitable approach for the standard cell-based design of optoelectronic systems is presented, and shortcomings of the design tool that was used are pointed out. The layout for the microelectronic circuit of one layer in a multilayer perceptron neural network, with a performance potential one order of magnitude higher than that of purely electronic neural networks, has been successfully designed.

  4. Standard cell-based implementation of a digital optoelectronic neural-network hardware.

    PubMed

    Maier, K D; Beckstein, C; Blickhan, R; Erhard, W

    2001-03-10

    A standard cell-based implementation of a digital optoelectronic neural-network architecture is presented. The overall structure of the multilayer perceptron network that was used, the optoelectronic interconnection system between the layers, and all components required in each layer are defined. The design process, from VHDL-based modeling through synthesis and partly automatic placing and routing to the final editing of one layer of the multilayer perceptron circuit, is described. A suitable approach for the standard cell-based design of optoelectronic systems is presented, and shortcomings of the design tool that was used are pointed out. The layout for the microelectronic circuit of one layer in a multilayer perceptron neural network, with a performance potential one order of magnitude higher than that of purely electronic neural networks, has been successfully designed. PMID:18357111

  5. A neural networks study of quinone compounds with trypanocidal activity.

    PubMed

    de Molfetta, Fábio Alberto; Angelotti, Wagner Fernando Delfino; Romero, Roseli Aparecida Francelin; Montanari, Carlos Alberto; da Silva, Albérico Borges Ferreira

    2008-10-01

    This work investigates neural network models for predicting the trypanocidal activity of 28 quinone compounds. Artificial neural networks (ANN), such as multilayer perceptrons (MLP) and Kohonen models, were employed with the aim of modeling the nonlinear relationship between quantum and molecular descriptors and trypanocidal activity. The calculated descriptors and the principal components were used as input to train neural network models to verify the behavior of the nets. The best model for both network models (MLP and Kohonen) was obtained with four descriptors as input. The descriptors were T5 (torsion angle), QTS1 (sum of absolute values of the atomic charges), VOLS2 (volume of the substituent at region B) and HOMO-1 (energy of the molecular orbital below HOMO). These descriptors provide information on the kind of interaction that occurs between the compounds and the biological receptor. Both neural network models used here can predict the trypanocidal activity of the quinone compounds with good agreement, with low errors in the testing set and a high correctness rate. Thanks to the nonlinear model obtained from the neural network models, we can conclude that electronic and structural properties are important factors in the interaction between quinone compounds that exhibit trypanocidal activity and their biological receptors. The final ANN models should be useful in the design of novel trypanocidal quinones having improved potency. PMID:18629551

  6. The design and analysis of effective and efficient neural networks and their applications

    SciTech Connect

    Makovoz, W.V.

    1989-01-01

    A complicated design issue of efficient multilayer neural networks is addressed, and the perceptron and similar neural networks are examined. It is shown that a three-layer perceptron neural network with specially designed learning algorithms provides an efficient framework to solve the exclusive-OR problem using only n - 1 processing elements in the second layer. Two efficient, rapidly converging algorithms for any symmetric Boolean function were developed, using only n - 1 processing elements in the perceptron neural network and int(n/2) processing elements in the Adaline and perceptron neural networks with the step-function transfer function. Similar results were obtained for quasi-symmetric Boolean functions using a linear number of processing elements in perceptron neural networks, Adalines, and perceptron neural networks with step-function transfer functions. Generalized Boolean functions are discussed and two rapidly converging algorithms are shown for perceptron neural networks, Adalines, and perceptron neural networks with the step-function transfer function. Many other interesting perceptron neural networks are discussed in the dissertation. Perceptron neural networks are applied to find the largest of n inputs. A new perceptron neural network is designed to find the largest of n inputs with the minimum number of inputs and the minimum number of layers. New perceptron neural networks are developed to sort n inputs. New, effective and efficient back-propagation neural networks are designed to sort n inputs. The sigmoid transfer function is discussed and a generalized sigmoid function is developed to improve neural network performance. A modified back-propagation learning algorithm was developed that builds any n-input symmetric Boolean function using only int(n/2) processing elements in the second layer.

  7. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
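
    As a concrete illustration of the finite-time convergence described above (a standard textbook example rather than a result quoted from the record), consider the one-dimensional dynamics

        \[ \dot{x} = -x^{1/3}, \qquad x(0) = x_0 > 0. \]

    The right-hand side is not Lipschitz at the fixed point $x = 0$, since $\frac{d}{dx}\bigl(-x^{1/3}\bigr) = -\tfrac{1}{3}x^{-2/3} \to -\infty$ as $x \to 0^{+}$. Separating variables gives

        \[ x(t) = \Bigl( x_0^{2/3} - \tfrac{2}{3}\, t \Bigr)^{3/2}, \]

    so the trajectory reaches $x = 0$ at the finite time $t^{*} = \tfrac{3}{2} x_0^{2/3}$ and stays there, whereas an ordinary attractor such as $\dot{x} = -x$ is only approached asymptotically.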

  8. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    PubMed Central

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criterion an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems. This paper aims to avoid the occurrence of overfitting and underfitting problems. The selection of the number of hidden neurons is done employing 102 criteria; these evolved criteria are verified against the various computed error values. The proposed criteria for fixing the hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction considering real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature. PMID:27034973
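
    The ensembling step itself reduces to averaging the member forecasts; the sketch below uses generic scikit-learn regressors as stand-ins for the MLP, Madaline, BPN and PNN members named above, with synthetic data.

        # Sketch of the ensembling step: average the wind-speed forecasts of several
        # independently trained models. Scikit-learn regressors stand in for the
        # MLP/Madaline/BPN/PNN members named in the abstract; data are synthetic.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(8)
        X = rng.uniform(size=(300, 3))                    # e.g. pressure, temperature, humidity
        wind = 5 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.2, size=300)

        members = [
            MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0),
            LinearRegression(),                           # linear (Madaline-like) member
            KNeighborsRegressor(n_neighbors=5),           # neighbourhood-based stand-in for PNN
        ]
        for m in members:
            m.fit(X[:250], wind[:250])

        forecasts = np.column_stack([m.predict(X[250:]) for m in members])
        ensemble = forecasts.mean(axis=1)                 # ensemble-averaging step
        print("ensemble MAE:", np.mean(np.abs(ensemble - wind[250:])))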

  9. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    PubMed

    Ranganayaki, V; Deepa, S N

    2016-01-01

    Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criterion an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems. This paper aims to avoid the occurrence of overfitting and underfitting problems. The selection of the number of hidden neurons is done employing 102 criteria; these evolved criteria are verified against the various computed error values. The proposed criteria for fixing the hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction considering real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature. PMID:27034973

  10. The LILARTI neural network system

    SciTech Connect

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system, in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  11. The hysteretic Hopfield neural network.

    PubMed

    Bharitkar, S; Mendel, J M

    2000-01-01

    A new neuron activation function based on a property found in physical systems--hysteresis--is proposed. We incorporate this neuron activation in a fully connected dynamical system to form the hysteretic Hopfield neural network (HHNN). We then present an analog implementation of this architecture and its associated dynamical equation and energy function. We proceed to prove Lyapunov stability for this new model, and then solve a combinatorial optimization problem (i.e., the N-queens problem) using this network. We demonstrate the advantages of hysteresis by showing an increased frequency of convergence to a solution when the parameters associated with the activation function are varied. PMID:18249816

  12. Epidemic Model with Isolation in Multilayer Networks

    PubMed Central

    Zuzek, L. G. Alvarez; Stanley, H. E.; Braunstein, L. A.

    2015-01-01

    The Susceptible-Infected-Recovered (SIR) model has successfully mimicked the propagation of such airborne diseases as influenza A (H1N1). Although the SIR model has recently been studied in a multilayer network configuration, in almost all the research the isolation of infected individuals is disregarded. Hence we focus our study on an epidemic model in a two-layer network, and we use an isolation parameter w to measure the effect of quarantining infected individuals from both layers during an isolation period tw. We call this process the Susceptible-Infected-Isolated-Recovered (SIIR) model. Using the framework of link percolation we find that isolation increases the critical epidemic threshold of the disease because the time in which infection can spread is reduced. In this scenario we find that this threshold increases with w and tw. When the isolation period is maximum there is a critical threshold for w above which the disease never becomes an epidemic. We simulate the process and find an excellent agreement with the theoretical results. PMID:26173897

  13. Epidemic Model with Isolation in Multilayer Networks

    NASA Astrophysics Data System (ADS)

    Zuzek, L. G. Alvarez; Stanley, H. E.; Braunstein, L. A.

    2015-07-01

    The Susceptible-Infected-Recovered (SIR) model has successfully mimicked the propagation of such airborne diseases as influenza A (H1N1). Although the SIR model has recently been studied in a multilayer network configuration, in almost all the research the isolation of infected individuals is disregarded. Hence we focus our study on an epidemic model in a two-layer network, and we use an isolation parameter w to measure the effect of quarantining infected individuals from both layers during an isolation period tw. We call this process the Susceptible-Infected-Isolated-Recovered (SIIR) model. Using the framework of link percolation we find that isolation increases the critical epidemic threshold of the disease because the time in which infection can spread is reduced. In this scenario we find that this threshold increases with w and tw. When the isolation period is maximum there is a critical threshold for w above which the disease never becomes an epidemic. We simulate the process and find an excellent agreement with the theoretical results.

  14. Load forecasting using artificial neural networks

    SciTech Connect

    Pham, K.D.

    1995-12-31

    Artificial neural networks, modeled after their biological counterpart, have been successfully applied in many diverse areas including speech and pattern recognition, remote sensing, electrical power engineering, robotics and stock market forecasting. The most commonly used neural networks are those that gain knowledge from experience. Experience is presented to the network in the form of training data. Once trained, the neural network can recognize data that it has not seen before. This paper will present a fundamental introduction to the manner in which neural networks work and how to use them in load forecasting.

  15. Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks

    PubMed Central

    Maca, Petr; Pech, Pavel

    2016-01-01

    The presented paper compares forecasts of drought indices based on two different artificial neural network models. The first model is based on a feedforward multilayer perceptron, sANN, and the second is an integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI), derived for the period 1948–2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. The training of both neural network models was performed with the adaptive version of differential evolution, JADE. The comparison of models was based on six model performance measures. The results of the drought index forecasts, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons. PMID:26880875
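
    The evolutionary training step can be illustrated with plain differential evolution optimizing the weights of a small feedforward network on a synthetic one-step-ahead task, as sketched below; the adaptive JADE variant used in the paper and the SPI/SPEI data are not reproduced.

        # Sketch of evolutionary training of a small feedforward network with plain
        # differential evolution (the cited work uses the adaptive JADE variant).
        # The time series and network size are invented.
        import numpy as np

        rng = np.random.default_rng(9)
        series = np.sin(np.arange(300) * 0.2) + 0.1 * rng.normal(size=300)
        X = np.array([series[i:i+3] for i in range(296)])   # 3 lags -> next value
        y = series[3:299]

        def unpack(theta):            # 3 inputs, 5 hidden tanh units, 1 output
            W1 = theta[:15].reshape(3, 5); b1 = theta[15:20]
            W2 = theta[20:25];             b2 = theta[25]
            return W1, b1, W2, b2

        def mse(theta):
            W1, b1, W2, b2 = unpack(theta)
            pred = np.tanh(X @ W1 + b1) @ W2 + b2
            return np.mean((pred - y) ** 2)

        dim, NP, F, CR = 26, 40, 0.6, 0.9
        pop = rng.normal(scale=0.5, size=(NP, dim))
        fit = np.array([mse(p) for p in pop])
        for gen in range(300):
            for i in range(NP):
                a, b, c = pop[rng.choice(NP, 3, replace=False)]
                mutant = a + F * (b - c)                     # DE/rand/1 mutation
                cross = rng.random(dim) < CR
                trial = np.where(cross, mutant, pop[i])      # binomial crossover
                f = mse(trial)
                if f < fit[i]:                               # greedy selection
                    pop[i], fit[i] = trial, f
        print("best MSE:", fit.min())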

  16. Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks.

    PubMed

    Maca, Petr; Pech, Pavel

    2016-01-01

    The presented paper compares forecasts of drought indices based on two different models of artificial neural networks. The first model is based on feedforward multilayer perceptron, sANN, and the second one is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI) and were derived for the period of 1948-2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. The training of both neural network models was performed with the adaptive version of differential evolution, JADE. The comparison of models was based on six model performance measures. The results of the drought index forecasts, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons. PMID:26880875

  17. Neural network modeling of emotion

    NASA Astrophysics Data System (ADS)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  18. Neural networks for aircraft system identification

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.

    1991-01-01

    Artificial neural networks offer some interesting possibilities for use in control. Our current research is on the use of neural networks on an aircraft model. The model can then be used in a nonlinear control scheme. The effectiveness of network training is demonstrated.

  19. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.

  20. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  1. Neural Networks in Nonlinear Aircraft Control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.

    1990-01-01

    Recent research indicates that artificial neural networks offer interesting learning or adaptive capabilities. The current research focuses on the potential for application of neural networks in a nonlinear aircraft control law. The current work has been to determine which networks are suitable for such an application and how they will fit into a nonlinear control law.

  2. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  3. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive Algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad-hoc a-priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
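
    One of the stable perceptron variants named above, the pocket algorithm, is compact enough to sketch: run ordinary perceptron updates on a two-class (possibly non-separable) set and keep "in the pocket" the weight vector that classifies the most training points correctly. The data and iteration budget below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two overlapping Gaussian classes (not linearly separable), labels in {-1, +1}
X = np.vstack([rng.normal(-1.0, 1.2, (100, 2)), rng.normal(+1.0, 1.2, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])
Xb = np.hstack([X, np.ones((200, 1))])          # absorb the bias into the weights

def accuracy(w):
    return np.mean(np.sign(Xb @ w) == y)

w = np.zeros(3)                                  # current perceptron weights
pocket_w, pocket_acc = w.copy(), accuracy(w)

for _ in range(5000):
    i = rng.integers(len(y))                     # visit points in random order
    if np.sign(Xb[i] @ w) != y[i]:               # misclassified -> perceptron update
        w = w + y[i] * Xb[i]
        acc = accuracy(w)
        if acc > pocket_acc:                     # keep the best weights seen so far
            pocket_w, pocket_acc = w.copy(), acc

print("pocket accuracy:", pocket_acc)
```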

  4. Practical application of artificial neural networks in the neurosciences

    NASA Astrophysics Data System (ADS)

    Pinti, Antonio

    1995-04-01

    This article presents a practical application of artificial multi-layer perceptron (MLP) neural networks in neurosciences. The data that are processed are labeled data from the visual analysis of electrical signals of human sleep. The objective of this work is to automatically classify into sleep stages the electrophysiological signals recorded from electrodes placed on a sleeping patient. Two large data bases were designed by experts in order to realize this study. One data base was used to train the network and the other to test its generalization capacity. The classification results obtained with the MLP network were compared to a k-nearest-neighbor (kNN) non-parametric classification method. The MLP network gave a better result in terms of classification than the kNN method. Both classification techniques were implemented on a transputer system. With both networks in their final configuration, the MLP network was 160 times faster than the kNN model in classifying a sleep period.

  5. Inversion of parameters for semiarid regions by a neural network

    NASA Technical Reports Server (NTRS)

    Zurk, Lisa M.; Davis, Daniel; Njoku, Eni G.; Tsang, Leung; Hwang, Jenq-Neng

    1992-01-01

    Microwave brightness temperatures obtained from a passive radiative transfer model are inverted through use of a neural network. The model is applicable to semiarid regions and produces dual-polarized brightness temperatures for 6.6-, 10.7-, and 37-GHz frequencies. A range of temperatures is generated by varying three geophysical parameters over acceptable ranges: soil moisture, vegetation moisture, and soil temperature. A multilayered perceptron (MLP) neural network is trained with a subset of the generated temperatures, and the remaining temperatures are inverted using a backpropagation method. Several synthetic terrains are devised and inverted by the network under local constraints. All the inversions show good agreement with the original geophysical parameters, falling within 5 percent of the actual value of the parameter range.

  6. Diagnosis of hepatitis by use of neural network learning

    NASA Astrophysics Data System (ADS)

    Fan, Hong-Qing; Zhang, Qy-zi

    1994-03-01

    An attempt is made to find a new way to improve the diagnosis of hepatitis through the application of artificial neural network theory. Learning from a given sample set, the neural network is used to establish a nonlinear mapping between various factors, such as symptoms, signs, and laboratory tests, and the diagnosis of hepatitis. It is shown that the trained network and the learned weight values can be used to identify the equivalence class of a new hepatitis pattern. In this paper, the knowledge learning and learning algorithms used in diagnosis are mainly discussed; an optimal generalization algorithm, based on the error-decrease algorithm and used to train multilayer feedforward networks, is presented; meanwhile, the application results and their effectiveness are reported.

  7. Real-time EFIT data reconstruction based on neural network in KSTAR

    NASA Astrophysics Data System (ADS)

    Kwak, Sehyun; Jeon, Youngmu; Ghim, Young-Chul

    2014-10-01

    Real-time EFIT data can be obtained using a neural network method. A non-linear mapping between diagnostic signals and shaping parameters of plasma equilibrium can be established by the neural network, particularly with the multilayer perceptron. The neural network is utilized to attain real-time EFIT data for Korea Superconducting Tokamak for Advanced Research (KSTAR). We collect and process existing datasets of measured data and EFIT data to train and test the neural network. Parameter scans such as the numbers of hidden layers and hidden units were performed in order to find the optimal condition. EFIT data from the neural network were compared with both offline EFIT and real-time EFIT data. Finally, we discuss advantages of using neural-network-reconstructed EFIT data for real-time plasma control.

  8. Adaptive optimization and control using neural networks

    SciTech Connect

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  9. Noise-robust realization of Turing-complete cellular automata by using neural networks with pattern representation

    NASA Astrophysics Data System (ADS)

    Oku, Makito; Aihara, Kazuyuki

    2010-11-01

    A modularly-structured neural network model is considered. Each module, which we call a ‘cell’, consists of two parts: a Hopfield neural network model and a multilayered perceptron. An array of such cells is used to simulate the Rule 110 cellular automaton with high accuracy even when all the units of neural networks are replaced by stochastic binary ones. We also find that noise not only degrades but also facilitates computation if the outputs of multilayered perceptrons are below the threshold required to update the states of the cells, which is a stochastic resonance in computation.
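
    For reference, the target dynamics of this work, the elementary cellular automaton Rule 110, can be written in a few lines; the sketch below implements the automaton itself with a lookup table, not the Hopfield-plus-perceptron realization described in the abstract. The lattice width and number of steps are illustrative.

```python
import numpy as np

def rule110_step(row):
    """One synchronous update of elementary cellular automaton Rule 110."""
    left = np.roll(row, 1)
    right = np.roll(row, -1)
    # Neighbourhood value (left, centre, right) -> 0..7; 110 = 0b01101110
    pattern = 4 * left + 2 * row + right
    table = np.array([0, 1, 1, 1, 0, 1, 1, 0])
    return table[pattern]

width, steps = 64, 32
state = np.zeros(width, dtype=int)
state[-1] = 1                                     # single seed cell
for _ in range(steps):
    print("".join("#" if c else "." for c in state))
    state = rule110_step(state)
```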

  10. Complexity matching in neural networks

    NASA Astrophysics Data System (ADS)

    Usefie Mafahim, Javad; Lambert, David; Zare, Marzieh; Grigolini, Paolo

    2015-01-01

    In the wide literature on the brain and neural network dynamics the notion of criticality is being adopted by an increasing number of researchers, with no general agreement on its theoretical definition, but with consensus that criticality makes the brain very sensitive to external stimuli. We adopt the complexity matching principle that the maximal efficiency of communication between two complex networks is realized when both of them are at criticality. We use this principle to establish the value of the neuronal interaction strength at which criticality occurs, yielding a perfect agreement with the adoption of temporal complexity as criticality indicator. The emergence of a scale-free distribution of avalanche size is proved to occur in a supercritical regime. We use an integrate-and-fire model where the randomness of each neuron is only due to the random choice of a new initial condition after firing. The new model shares with that proposed by Izhikevich the property of generating excessive periodicity, and with it the annihilation of temporal complexity at supercritical values of the interaction strength. We find that the concentration of inhibitory links can be used as a control parameter and that for a sufficiently large concentration of inhibitory links criticality is recovered again. Finally, we show that the response of a neural network at criticality to a harmonic stimulus is very weak, in accordance with the complexity matching principle.
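
    A highly simplified sketch in the spirit of this abstract is given below: integrate-and-fire units with random excitatory and inhibitory couplings are driven slowly, each firing unit restarts from a random initial condition, and avalanche sizes are recorded. The network size, interaction strength and inhibitory fraction are illustrative assumptions, and the dynamics are not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 200            # neurons
K = 0.01           # interaction strength (the control parameter of interest)
p_inh = 0.2        # fraction of inhibitory links
theta = 1.0        # firing threshold

# Random all-to-all couplings: +K excitatory, -K inhibitory, no self-coupling
signs = np.where(rng.random((N, N)) < p_inh, -1.0, 1.0)
W = K * signs
np.fill_diagonal(W, 0.0)

v = rng.random(N)  # membrane potentials start from random initial conditions
avalanches = []
for _ in range(5000):
    v += 0.01 * rng.random(N)                     # slow stochastic external drive
    size = 0
    for _ in range(100):                          # bound avalanche length for safety
        fired = v >= theta
        if not fired.any():
            break
        size += int(fired.sum())
        inp = W[:, fired].sum(axis=1)             # input from neurons that just fired
        v[fired] = rng.random(int(fired.sum()))   # random re-initialisation after firing
        v += inp
    if size:
        avalanches.append(size)

sizes = np.array(avalanches)
print("number of avalanches:", sizes.size)
print("mean avalanche size :", sizes.mean() if sizes.size else 0.0)
```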

  11. Advances in neural networks research: an introduction.

    PubMed

    Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar

    2009-01-01

    The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications. PMID:19632811

  12. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involves the steps of reading into a memory training data, determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs and then providing signals characteristic of an industrial process and comparing the neural network output to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  13. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involves the steps of reading into a memory training data, determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs and then providing signals characteristic of an industrial process and comparing the neural network output to the industrial process signals to evaluate the operating state of the industrial process.

  14. Neural network modeling of distillation columns

    SciTech Connect

    Baratti, R.; Vacca, G.; Servida, A.

    1995-06-01

    Neural network modeling (NNM) was implemented for monitoring and control applications on two actual distillation columns: the butane splitter tower and the gasoline stabilizer. The two distillation columns are in operation at the SARAS refinery. Results show that with proper implementation techniques NNM can significantly improve column operation. The common belief that neural networks can be used as black-box process models is not completely true. Effective implementation always requires a minimum degree of process knowledge to identify the relevant inputs to the net. After background and generalities on neural network modeling, the paper describes efforts on the development of neural networks for the two distillation units.

  15. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in analog custom VLSI is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  16. Aerodynamic Design Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.

    2003-01-01

    The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the most optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable fidelity analysis and reduces the cost of computation by using less-expensive, lower fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design space and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.

  17. Cyclone track forecasting based on satellite images using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Kovordányi, Rita; Roy, Chandan

    Many places around the world are exposed to tropical cyclones and associated storm surges. In spite of massive efforts, a great number of people die each year as a result of cyclone events. To mitigate this damage, improved forecasting techniques must be developed. The technique presented here uses artificial neural networks to interpret NOAA-AVHRR satellite images. A multi-layer neural network, resembling the human visual system, was trained to forecast the movement of cyclones based on satellite images. The trained network produced correct directional forecast for 98% of test images, thus showing a good generalization capability. The results indicate that multi-layer neural networks could be further developed into an effective tool for cyclone track forecasting using various types of remote sensing data. Future work includes extension of the present network to handle a wide range of cyclones and to take into account supplementary information, such as wind speeds, water temperature, humidity, and air pressure.

  18. Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks

    SciTech Connect

    Ziaul Huque

    2007-08-31

    This is the final technical report for the project titled 'Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks'. The aim of the project was to develop an efficient chemistry model for combustion simulations. The reduced chemistry model was developed mathematically without the need of having extensive knowledge of the chemistry involved. To aid in the development of the model, Neural Networks (NN) were used via a new network topology known as Non-linear Principal Components Analysis (NPCA). A commonly used Multilayer Perceptron Neural Network (MLP-NN) was modified to implement NPCA-NN. The training rate of NPCA-NN was improved with the Generalized Regression Neural Network (GRNN) based on kernel smoothing techniques. Kernel smoothing provides a simple way of finding structure in a data set without the imposition of a parametric model. The trajectory data of the reaction mechanism was generated based on the optimization techniques of genetic algorithm (GA). The NPCA-NN algorithm was then used for the reduction of the Dimethyl Ether (DME) mechanism. DME is a recently discovered fuel made from natural gas (and other feedstocks such as coal, biomass, and urban wastes) that can be used in compression ignition engines as a substitute for diesel. An in-house two-dimensional Computational Fluid Dynamics (CFD) code was developed based on a Meshfree technique and a time-marching solution algorithm. The project also provided valuable research experience to two graduate students.

  19. Neural networks for nuclear spectroscopy

    SciTech Connect

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T.

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.

  20. Neural Network Classifies Teleoperation Data

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido

    1994-01-01

    Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.

  1. The Laplacian spectrum of neural networks

    PubMed Central

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286

  2. Ozone Modeling Using Neural Networks.

    NASA Astrophysics Data System (ADS)

    Narasimhan, Ramesh; Keller, Joleen; Subramaniam, Ganesh; Raasch, Eric; Croley, Brandon; Duncan, Kathleen; Potter, William T.

    2000-03-01

    Ozone models for the city of Tulsa were developed using neural network modeling techniques. The neural models were developed using meteorological data from the Oklahoma Mesonet and ozone, nitric oxide, and nitrogen dioxide (NO2) data from Environmental Protection Agency monitoring sites in the Tulsa area. An initial model trained with only eight surface meteorological input variables and NO2 was able to simulate ozone concentrations with a correlation coefficient of 0.77. The trained model was then used to evaluate the sensitivity to the primary variables that affect ozone concentrations. The most important variables (NO2, temperature, solar radiation, and relative humidity) showed response curves with strong nonlinear codependencies. Incorporation of ozone concentrations from the previous 3 days into the model increased the correlation coefficient to 0.82. As expected, the ozone concentrations correlated best with the most recent (1-day previous) values. The model's correlation coefficient was increased to 0.88 by the incorporation of upper-air data from the National Weather Service's Nested Grid Model. Sensitivity analysis for the upper-air variables indicated unusual positive correlations between ozone and the relative humidity from 500 hPa to the tropopause in addition to the other expected correlations with upper-air temperatures, vertical wind velocity, and 1000-500-hPa layer thickness. The neural model results are encouraging for the further use of these systems to evaluate complex parameter cosensitivities, and for the use of these systems in automated ozone forecast systems.

  3. Three dimensional living neural networks

    NASA Astrophysics Data System (ADS)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo, and when exposed to retinoic acid the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSCs) were utilized with the goal of future studies of neural networks fabricated from human iPSC derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  4. A neural network architecture for implementation of expert systems for real time monitoring

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.

    1991-01-01

    Since neural networks have the advantages of massive parallelism and simple architecture, they are good tools for implementing real time expert systems. In a rule based expert system, the antecedents of rules are in the conjunctive or disjunctive form. We constructed a multilayer feedforward type network in which neurons represent AND or OR operations of rules. Further, we developed a translator which can automatically map a given rule base into the network. Also, we proposed a new and powerful yet flexible architecture that combines the advantages of both fuzzy expert systems and neural networks. This architecture uses the fuzzy logic concepts to separate input data domains into several smaller and overlapped regions. Rule-based expert systems for time critical applications using neural networks, the automated implementation of rule-based expert systems with neural nets, and fuzzy expert systems vs. neural nets are covered.

  5. Neural networks for automated classification of ionospheric irregularities in HF radar backscattered signals

    NASA Astrophysics Data System (ADS)

    Wing, S.; Greenwald, R. A.; Meng, C.-I.; Sigillito, V. G.; Hutton, L. V.

    2003-08-01

    The classification of high frequency (HF) radar backscattered signals from the ionospheric irregularities (clutters) into those suitable, or not, for further analysis, is a time-consuming task even for experts in the field. We tested several different feedforward neural networks on this task, investigating the effects of network type (single layer versus multilayer) and number of hidden nodes upon performance. As expected, the multilayer feedforward networks (MLFNs) outperformed the single-layer networks. The MLFNs achieved performance levels of 100% correct on the training set and up to 98% correct on the testing set. Comparable figures for the single-layer networks were 94.5% and 92%, respectively. When measures of sensitivity, specificity, and proportion of variance accounted for by the model are considered, the superiority of the MLFNs over the single-layer networks is much more striking. Our results suggest that such neural networks could aid many HF radar operations such as frequency search, space weather monitoring, etc.

  6. Comparative study of different wavelet based neural network models for rainfall-runoff modeling

    NASA Astrophysics Data System (ADS)

    Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.

    2014-07-01

    The use of wavelet transformation in rainfall-runoff modeling has become popular because of its ability to simultaneously deal with both the spectral and the temporal information contained within time series data. The selection of an appropriate wavelet function plays a crucial role for successful implementation of the wavelet based rainfall-runoff artificial neural network models as it can lead to further enhancement in the model performance. The present study is therefore conducted to evaluate the effects of 23 mother wavelet functions on the performance of the hybrid wavelet based artificial neural network rainfall-runoff models. The hybrid Multilayer Perceptron Neural Network (MLPNN) and the Radial Basis Function Neural Network (RBFNN) models are developed in this study using both the continuous wavelet and the discrete wavelet transformation types. The performances of the 92 developed wavelet based neural network models with all the 23 mother wavelet functions are compared with the neural network models developed without wavelet transformations. It is found that among all the models tested, the discrete wavelet transform multilayer perceptron neural network (DWTMLPNN) and the discrete wavelet transform radial basis function (DWTRBFNN) models at decomposition level nine with the db8 wavelet function have the best performance. The result also shows that the pre-processing of input rainfall data by the wavelet transformation can significantly increase the performance of the MLPNN and the RBFNN rainfall-runoff models.
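
    The wavelet pre-processing step can be illustrated without a wavelet library. The study uses the db8 wavelet at decomposition level nine; the sketch below uses the much simpler Haar wavelet (an assumption made only to keep the example dependency-free) to split a synthetic rainfall series into the approximation and detail sub-series that would then feed the MLPNN or RBFNN inputs.

```python
import numpy as np

def haar_dwt(x):
    """One level of the discrete Haar wavelet transform (low-pass and high-pass)."""
    x = np.asarray(x, dtype=float)
    if x.size % 2:                               # pad to an even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)    # smooth (approximation) coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coefficients
    return approx, detail

def multilevel(x, levels):
    """Return [detail_1, ..., detail_L, approx_L] as wavelet sub-series."""
    out = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        out.append(d)
    out.append(x)
    return out

rng = np.random.default_rng(5)
rain = rng.gamma(0.7, 4.0, 512)                  # synthetic daily rainfall series
subseries = multilevel(rain, levels=3)
for k, s in enumerate(subseries):
    print(f"sub-series {k}: length {len(s)}")
# Each sub-series (or statistics of it) would then become an input to the network.
```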

  7. Artificial neural networks in neurosurgery.

    PubMed

    Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali

    2015-03-01

    Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review the relevant published articles that focus on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANN in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; (4) use in the biomechanical assessment of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery. PMID:24987050

  8. Computational acceleration using neural networks

    NASA Astrophysics Data System (ADS)

    Cadaret, Paul

    2008-04-01

    The author's recent participation in the Small Business Innovation Research (SBIR) program has resulted in the development of a patent-pending technology that enables the construction of very large and fast artificial neural networks. Through the use of UNICON's CogniMax pattern recognition technology we believe that systems can be constructed that exploit the power of "exhaustive learning" for the benefit of certain types of complex and slow computational problems. This paper presents a theoretical study that describes one potentially beneficial application of exhaustive learning. It describes how a very large and fast Radial Basis Function (RBF) artificial Neural Network (NN) can be used to implement a useful computational system. Viewed another way, it presents an unusual method of transforming a complex, always-precise, and slow computational problem into a fuzzy pattern recognition problem where other methods are available to effectively improve computational performance. The method described recognizes that the need for computational precision in a problem domain sometimes varies throughout the domain's Feature Space (FS) and high precision may only be needed in limited areas. These observations can then be exploited to the benefit of overall computational performance. Addressing computational reliability, we describe how existing always-precise computational methods can be used to reliably train the NN to perform the computational interpolation function. The author recognizes that the method described is not applicable to every situation, but over the last 8 months we have been surprised at how often this method can be applied to enable interesting and effective solutions.
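
    The core idea, replacing a slow but precise computation by a fast RBF interpolator trained on exhaustive samples of the feature space, can be sketched in a few lines. The target function, number of centres and RBF width below are illustrative assumptions, not a description of UNICON's CogniMax technology.

```python
import numpy as np

rng = np.random.default_rng(6)

def slow_exact(x):
    """Stand-in for an 'always-precise, slow' computation (illustrative)."""
    return np.sin(3 * x) * np.exp(-0.3 * x ** 2)

# Exhaustively sample the feature space once, offline
centers = np.linspace(-3, 3, 60)
targets = slow_exact(centers)

sigma = 0.25                                     # RBF width (assumption)
def phi(x, c):
    return np.exp(-((x[:, None] - c[None, :]) ** 2) / (2 * sigma ** 2))

# Solve for the output weights by regularised least squares
G = phi(centers, centers)
w = np.linalg.solve(G + 1e-8 * np.eye(len(centers)), targets)

def rbf_fast(x):
    """Fast approximate replacement for slow_exact."""
    return phi(np.atleast_1d(x), centers) @ w

x_test = rng.uniform(-3, 3, 1000)
err = np.max(np.abs(rbf_fast(x_test) - slow_exact(x_test)))
print("max abs error of the RBF surrogate:", err)
```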

  9. A new formulation for feedforward neural networks.

    PubMed

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    Feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, in this paper, two training methods involving a derivative-based (a variation of backpropagation) and a derivative-free optimization algorithms are employed. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization. PMID:21859600

  10. Drift chamber tracking with neural networks

    SciTech Connect

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared offline to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.
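
    The input-output mapping being learned (four drift measurements to track intercept and slope) can be reproduced with a toy geometry. The sketch below generates straight tracks, simulates noisy drift distances and fits a single linear layer by least squares, the simplest network-like regressor for this task; the geometry, noise level and the decision to ignore the left-right drift ambiguity are illustrative assumptions, not the ETANN setup.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy geometry: four drift-chamber layers at heights z, sense wires along x = 0
z = np.array([0.0, 1.0, 2.0, 3.0])

def make_tracks(n):
    """Straight tracks x(z) = intercept + slope*z and noisy drift distances."""
    intercept = rng.uniform(0.5, 1.5, n)
    slope = rng.uniform(-0.15, 0.15, n)
    x_at_layers = intercept[:, None] + slope[:, None] * z[None, :]
    drift = x_at_layers + rng.normal(0, 0.01, (n, z.size))   # ambiguity ignored
    return drift, np.column_stack([intercept, slope])

# "Training": fit one linear layer (drift times -> intercept, slope) by least squares
X_train, y_train = make_tracks(5000)
A = np.hstack([X_train, np.ones((len(X_train), 1))])         # append a bias column
W, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# "Testing": compare the predictions with the generating track parameters
X_test, y_test = make_tracks(500)
pred = np.hstack([X_test, np.ones((len(X_test), 1))]) @ W
rms = np.sqrt(np.mean((pred - y_test) ** 2, axis=0))
print("rms residuals (intercept, slope):", rms)
```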

  11. Coherence resonance in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  12. From Classical Neural Networks to Quantum Neural Networks

    NASA Astrophysics Data System (ADS)

    Tirozzi, B.

    2013-09-01

    First I give a brief description of the classical Hopfield model introducing the fundamental concepts of patterns, retrieval, pattern recognition, neural dynamics, capacity and describe the fundamental results obtained in this field by Amit, Gutfreund and Sompolinsky [1], using the non-rigorous replica method, and the rigorous version given by Pastur, Shcherbina and Tirozzi [2] using the cavity method. Then I give a formulation of the theory of Quantum Neural Networks (QNN) in terms of the XY model with Hebbian interaction. The problem of retrieval and storage is discussed. The retrieval states are the states of the minimum energy. I apply the estimates found by Lieb [3], which give lower and upper bounds on the free energy and expectation of the observables of the quantum model. I also discuss some experiments and the search for the ground state using Monte Carlo Dynamics applied to the equivalent classical two dimensional Ising model constructed by Suzuki et al. [6]. At the end there is a list of open problems.
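
    The classical starting point of the article, Hebbian storage and zero-temperature retrieval in the Hopfield model, is sketched below; the network size, number of stored patterns and corruption level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

N, P = 100, 5                                    # neurons, stored patterns
patterns = rng.choice([-1, 1], (P, N))

# Hebbian coupling matrix of the classical Hopfield model
J = (patterns.T @ patterns) / N
np.fill_diagonal(J, 0.0)

def retrieve(state, sweeps=20):
    """Asynchronous zero-temperature dynamics: flip spins to lower the energy."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1 if J[i] @ state >= 0 else -1
    return state

# Start from a corrupted copy of pattern 0 (30% of the spins flipped)
noisy = patterns[0] * np.where(rng.random(N) < 0.3, -1, 1)
fixed = retrieve(noisy)
overlap = fixed @ patterns[0] / N                # retrieval quality, 1.0 = perfect
print("overlap with the stored pattern:", overlap)
```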

  13. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  14. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.

  15. Radiation Behavior of Analog Neural Network Chip

    NASA Technical Reports Server (NTRS)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1b), launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  16. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. PMID:20713305

  17. Creativity in design and artificial neural networks

    SciTech Connect

    Neocleous, C.C.; Esat, I.I.; Schizas, C.N.

    1996-12-31

    The creativity phase is identified as an integral part of the design phase. The characteristics of creative persons that are relevant to designing artificial neural networks manifesting aspects of creativity are identified. Based on these identifications, a general framework of artificial neural network characteristics to implement such a goal is proposed.

  18. Self-organization of neural networks

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann

    1984-05-01

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (“brainwashing”) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.

  19. Advanced telerobotic control using neural networks

    NASA Technical Reports Server (NTRS)

    Pap, Robert M.; Atkins, Mark; Cox, Chadwick; Glover, Charles; Kissel, Ralph; Saeks, Richard

    1993-01-01

    Accurate Automation is designing and developing adaptive decentralized joint controllers using neural networks. We are then implementing these in hardware for the Marshall Space Flight Center PFMA as well as to be usable for the Remote Manipulator System (RMS) robot arm. Our design is being realized in hardware after completion of the software simulation. This is implemented using a Functional-Link neural network.

  20. Neural network based architectures for aerospace applications

    NASA Technical Reports Server (NTRS)

    Ricart, Richard

    1987-01-01

    A brief history of the field of neural networks research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

  1. Applications of Neural Networks in Finance.

    ERIC Educational Resources Information Center

    Crockett, Henry; Morrison, Ronald

    1994-01-01

    Discusses research with neural networks in the area of finance. Highlights include bond pricing, theoretical exposition of primary bond pricing, bond pricing regression model, and an example that created networks with corporate bonds and NeuralWare Neuralworks Professional H software using the back-propagation technique. (LRW)

  2. A Survey of Neural Network Publications.

    ERIC Educational Resources Information Center

    Vijayaraman, Bindiganavale S.; Osyk, Barbara

    This paper is a survey of publications on artificial neural networks published in business journals for the period ending July 1996. Its purpose is to identify and analyze trends in neural network research during that period. This paper shows which topics have been heavily researched, when these topics were researched, and how that research has…

  3. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  4. Relabeling exchange method (REM) for learning in neural networks

    NASA Astrophysics Data System (ADS)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when 'optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we shall introduce a new heuristic algorithm called the Relabeling Exchange Method (REM) which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation of the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
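
    The exchange idea can be illustrated with a toy relabeling problem: class labels must be matched to fixed target code vectors, and pairs of assignments are swapped whenever the swap lowers a cost. The cost function below (distance between class-mean features and their assigned codes) is an illustrative stand-in for the rms training error used in the paper, and the greedy pairwise exchange is only loosely modeled on REM.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(9)

# Toy setting: 6 classes, each summarised by its mean feature vector,
# to be assigned to 6 fixed target code vectors.
C, D = 6, 4
class_means = rng.normal(0, 1, (C, D))
target_codes = rng.normal(0, 1, (C, D))

def cost(assign):
    """Proxy for training difficulty: distance between class means and their codes."""
    return np.sum((class_means - target_codes[assign]) ** 2)

assign = np.arange(C)                            # arbitrary initial labelling
best = cost(assign)
improved = True
while improved:                                  # pairwise exchanges until no gain
    improved = False
    for i, j in combinations(range(C), 2):
        trial = assign.copy()
        trial[i], trial[j] = trial[j], trial[i]
        c = cost(trial)
        if c < best:
            assign, best, improved = trial, c, True
print("label assignment:", assign, " cost:", best)
```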

  5. Application of Artificial Neural Networks for estimating index floods

    NASA Astrophysics Data System (ADS)

    Šimor, Viliam; Hlavčová, Kamila; Kohnová, Silvia; Szolgay, Ján

    2012-12-01

    This article presents an application of Artificial Neural Networks (ANNs) and multiple regression models for estimating mean annual maximum discharge (index flood) at ungauged sites. Both approaches were tested for 145 small basins in Slovakia with areas ranging from 20 to 300 km². Using the objective clustering method, the catchments were divided into ten homogeneous pooling groups; for each pooling group, mutually independent predictors (catchment characteristics) were selected for both models. The neural network was applied as a simple multilayer perceptron with one hidden layer and with a back propagation learning algorithm. The hyperbolic tangent was used as the activation function in the hidden layer. Estimation of index floods by the multiple regression models was based on deriving relationships between the index floods and catchment predictors. The efficiencies of both approaches were tested by the Nash-Sutcliffe coefficient and the correlation coefficient. The results showed the comparative applicability of both models, with slightly better index-flood estimates achieved using the ANN methodology.
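
    The Nash-Sutcliffe efficiency used to compare the two approaches is simple to compute; the sketch below scores two hypothetical sets of index-flood estimates against observations. The numbers are invented for illustration only.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of the data."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Example: index-flood estimates from two hypothetical models for one pooling group
obs = np.array([12.0, 35.0, 22.0, 50.0, 18.0, 27.0])
model_a = np.array([14.0, 33.0, 25.0, 46.0, 20.0, 24.0])   # e.g. ANN estimates
model_b = np.array([20.0, 28.0, 30.0, 38.0, 25.0, 26.0])   # e.g. regression estimates
print("NSE model A:", round(nash_sutcliffe(obs, model_a), 3))
print("NSE model B:", round(nash_sutcliffe(obs, model_b), 3))
```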

  6. Mammographic mass detection using wavelets as input to neural networks.

    PubMed

    Kilic, Niyazi; Gorgel, Pelin; Ucan, Osman N; Sertbas, Ahmet

    2010-12-01

    The objective of this paper is to demonstrate the utility of artificial neural networks, in combination with wavelet transforms, for the classification of mammogram masses as malignant or benign. A total of 45 patients who had breast masses in their mammography were enrolled in the study. The neural network was trained on the wavelet based feature vectors extracted from the mammogram masses for both benign and malignant data. In this study, a multilayer ANN was trained with the Backpropagation, Conjugate Gradient and Levenberg-Marquardt algorithms, and a ten-fold cross-validation procedure was used. A satisfying sensitivity percentage of 89.2% was achieved with the Levenberg-Marquardt algorithm. This algorithm combines the best features of the Gauss-Newton technique and steepest-descent algorithms and thus reaches the desired results very fast. PMID:20703600

  7. An architecture for designing fuzzy logic controllers using neural networks

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1991-01-01

    Described here is an architecture for designing fuzzy controllers through a hierarchical process of control rule acquisition and by using special classes of neural network learning techniques. A new method for learning to refine a fuzzy logic controller is introduced. A reinforcement learning technique is used in conjunction with a multi-layer neural network model of a fuzzy controller. The model learns by updating its prediction of the plant's behavior and is related to Sutton's Temporal Difference (TD) method. The method proposed here has the advantage of using the control knowledge of an experienced operator and fine-tuning it through the process of learning. The approach is applied to a cart-pole balancing system.

  8. Design of Jetty Piles Using Artificial Neural Networks

    PubMed Central

    2014-01-01

    To overcome the complexity of the jetty pile design process, artificial neural networks (ANN) are adopted. To generate samples for training the ANN, finite element (FE) analysis was performed 50 times for 50 different design cases. The trained ANN was verified with another FE analysis case and then used as a structural analyzer. The multilayer neural network (MBPNN) with two hidden layers was used for the ANN. The MBPNN framework was defined with the lateral forces on the jetty structure and the pile type as inputs and the stress ratio of the piles as the output. The results from the MBPNN agree well with those from FE analysis. Particularly for more complex models with hundreds of different design cases, the MBPNN could substitute for parametric studies with FE analysis, saving design time and cost. PMID:25177724

  9. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. PMID:26700535
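
    The mechanics of injecting noise at the output neurons during training can be sketched on a tiny softmax classifier: noise is added to the one-hot teaching signal while the gradients are computed, and training is run with and without it. This is only a loose illustration under stated assumptions; the paper's EM-based positivity (hyperplane) condition for selecting beneficial noise, and the convolutional architecture, are omitted, and the toy data stand in for MNIST.

```python
import numpy as np

rng = np.random.default_rng(10)

# Tiny two-class toy problem
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
labels = np.hstack([np.zeros(200, int), np.ones(200, int)])
Y = np.eye(2)[labels]                               # one-hot targets

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(noise_std, epochs=300, lr=0.5):
    W = rng.normal(0, 0.1, (2, 2)); b = np.zeros(2)
    for _ in range(epochs):
        p = softmax(X @ W + b)
        # Noise injected at the output layer during training only; the paper's
        # screening of "helpful" noise via the NEM hyperplane is not modeled here.
        t = Y + noise_std * rng.normal(0, 1, Y.shape)
        grad = (p - t) / len(X)                     # cross-entropy-style gradient wrt logits
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    p = softmax(X @ W + b)
    return -np.mean(np.sum(Y * np.log(p + 1e-12), axis=1))

print("final training cross-entropy, no noise  :", round(train(0.0), 4))
print("final training cross-entropy, with noise:", round(train(0.05), 4))
```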

  10. Fast cosmological parameter estimation using neural networks

    NASA Astrophysics Data System (ADS)

    Auld, T.; Bridges, M.; Hobson, M. P.; Gull, S. F.

    2007-03-01

    We present a method for accelerating the calculation of cosmic microwave background (CMB) power spectra, matter power spectra and likelihood functions for use in cosmological parameter estimation. The algorithm, called COSMONET, is based on training a multilayer perceptron neural network and shares all the advantages of the recently released PICO algorithm of Fendt & Wandelt, but has several additional benefits in terms of simplicity, computational speed, memory requirements and ease of training. We demonstrate the capabilities of COSMONET by computing CMB power spectra over a box in the parameter space of flat Λ cold dark matter (ΛCDM) models containing the 3σ WMAP 1-year confidence region. We also use COSMONET to compute the WMAP 3-year (WMAP3) likelihood for flat ΛCDM models and show that marginalized posteriors on derived parameters are very similar to those obtained using CAMB and the WMAP3 code. We find that the average error in the power spectra is typically 2-3 per cent of cosmic variance, and that COSMONET is ~7 × 10⁴ times faster than CAMB (for flat models) and ~6 × 10⁶ times faster than the official WMAP3 likelihood code. COSMONET and an interface to COSMOMC are publicly available at http://www.mrao.cam.ac.uk/software/cosmonet.

  11. Neural Network Classifier Architectures for Phoneme Recognition. CRC Technical Note No. CRC-TN-92-001.

    ERIC Educational Resources Information Center

    Treurniet, William

    A study applied artificial neural networks, trained with the back-propagation learning algorithm, to modelling phonemes extracted from the DARPA TIMIT multi-speaker, continuous speech data base. A number of proposed network architectures were applied to the phoneme classification task, ranging from the simple feedforward multilayer network to more…

  12. Passive microwave relative humidity retrievals using feedforward neural networks

    SciTech Connect

    Cabrera-Mercader, C.R.; Staelin, D.H.

    1995-11-01

    A technique for retrieving atmospheric humidity profiles using passive microwave spectral observations from satellite and Multilayer Feedforward Neural Networks (MFNN) is introduced in this paper. Relative humidity retrievals on a global scale from simulated radiances at fifteen frequencies between 23.8 and 183.3 GHz yielded rms errors in relative humidity of 6-14% over ocean and 6-15% over land at pressure levels ranging from 131 mbar to 1,013 mbar. Comparison with a combined statistical and physical iterative retrieval scheme shows that superior retrievals can be obtained at a lower computational cost using MFNN.
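
    A hedged sketch of this retrieval set-up follows: a multilayer feedforward network is fitted to map simulated brightness temperatures at 15 channels to relative humidity at a handful of pressure levels. The synthetic radiances and humidity profiles, the network size, and the use of scikit-learn's MLPRegressor are assumptions of the sketch, not details of the cited work.

        # Sketch of the retrieval set-up: a multilayer feedforward network maps
        # simulated brightness temperatures at 15 microwave channels to relative
        # humidity at a few pressure levels. The data here are synthetic stand-ins
        # and the use of scikit-learn's MLPRegressor is an assumption of this sketch.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)

        n_samples, n_channels, n_levels = 2000, 15, 6
        A = rng.normal(size=(n_channels, n_levels))

        # Synthetic "radiances" and a smooth nonlinear mapping to humidity profiles.
        Tb = rng.normal(loc=250.0, scale=20.0, size=(n_samples, n_channels))
        rh = 50.0 + 30.0 * np.tanh(((Tb - 250.0) / 20.0) @ A)

        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        net.fit(Tb[:1500], rh[:1500])

        pred = net.predict(Tb[1500:])
        rmse = np.sqrt(np.mean((pred - rh[1500:]) ** 2))
        print("rms error (synthetic, % RH):", round(float(rmse), 2))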

  13. Recognition of chatter type based on improved neural network

    NASA Astrophysics Data System (ADS)

    Xie, Xiaozheng; Xie, Yongpeng; Zhao, Rongzhen; Jin, Wuyin; Yao, Yunping

    2013-03-01

    By studying a chatter dynamic model, this paper discusses the chatter phenomenon between the metal cutting tool and the workpiece during cutting. From the point of view of energy, the phase difference of the chatter marks, the phase difference of the vibration modes, the lagging phase angle, and the rate of change of cutting force with respect to cutting speed are determined, respectively, as the characteristic parameters of the regenerative, mode-coupling, lagging, and frictional types of chatter. With these four input parameters, a multilayer feedforward neural network learning algorithm is used to diagnose the type of cutting chatter, and experiments show that this method is effective, which is essential for taking appropriate vibration suppression measures.
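
    As an illustration of the four-input diagnosis step, the sketch below trains a small multilayer feedforward classifier to separate four chatter types from four characteristic parameters. The clustered synthetic features and the use of scikit-learn's MLPClassifier are assumptions of the example; the paper's measured chatter data are not reproduced.

        # Sketch of a four-input chatter-type classifier: a small multilayer
        # feedforward network maps the four characteristic parameters to one of four
        # chatter modes. The synthetic feature clusters and the use of scikit-learn's
        # MLPClassifier are assumptions of this illustration, not the paper's data.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(4)

        # Four chatter classes, each a cluster in the 4-dimensional feature space
        # (mark phase difference, mode phase difference, lagging angle, dF/dv).
        centers = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], float)
        X = np.vstack([c + 0.2 * rng.normal(size=(100, 4)) for c in centers])
        y = np.repeat(np.arange(4), 100)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))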

  14. Automatic Classification of Subdwarf Spectra using a Neural Network

    NASA Astrophysics Data System (ADS)

    Winter, C.; Jeffery, C. S.; Drilling, J. S.

    2004-06-01

    We apply a multilayer feed-forward back propagation artificial neural network to a sample of 380 subdwarf spectra classified by Drilling et al. (Drilling, J.S., Moehler, S., Jeffery, C.S., Heber, U., and Napiwotzki, R.: in press in: R. Gray (ed.), Probing the Personalities of Stars and Galaxies), showing that it is possible to use this technique on large sets of spectra and obtain classifications in good agreement with the standard. We briefly investigate the impact of training set size, showing that large training sets do not necessarily perform significantly better than small sets.

  15. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information. PMID:21517565

  16. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  17. Neural network and letter recognition

    SciTech Connect

    Lee, Hue Yeon.

    1989-01-01

    Neural net architectures and learning algorithms that recognize 36 handwritten alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system is comprised of two major components, viz. a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlation between the detected local features is then considered. By correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some deformation tolerance and some rotational tolerance in the system. The C-layer then compresses data through the Gabor transform. Pattern-dependent choice of the centers and wavelengths of the Gabor filters is the source of the shift and scale tolerance of the system. Three different learning schemes were investigated in the recognition unit, namely error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing and the perceptron as the recognition unit of the ambiguity resolver. One hundred different persons' handwriting sets were collected; some of these are used as training sets and the remainder as test sets.

  18. Path optimisation of a mobile robot using an artificial neural network controller

    NASA Astrophysics Data System (ADS)

    Singh, M. K.; Parhi, D. R.

    2011-01-01

    This article proposes a novel approach to the design of an intelligent controller for an autonomous mobile robot using a multilayer feedforward neural network, which enables the robot to navigate in a real-world dynamic environment. The inputs to the proposed neural controller consist of the left, right and front obstacle distances with respect to the robot's position, together with the target angle. The output of the neural network is the steering angle. A four-layer neural network has been designed to solve the path and time optimisation problem of mobile robots, which deals with cognitive tasks such as learning, adaptation, generalisation and optimisation. A back-propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results and show very good agreement. The training of the neural nets and the control performance analysis have been done in a real experimental setup.

  19. VoIP attacks detection engine based on neural network

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Slachta, Jiri

    2015-05-01

    Security is crucial for any system nowadays, especially communications. One of the most successful protocols in the field of communication over IP networks is the Session Initiation Protocol (SIP). It is an open standard used by different kinds of applications, both open-source and proprietary. High penetration and its text-based design have made SIP the number-one target in IP telephony infrastructure, so the security of SIP servers is essential. To keep up with hackers and to detect potential malicious attacks, a security administrator needs to monitor and evaluate SIP traffic in the network. But monitoring and the subsequent evaluation can easily overwhelm the security administrator, typically in networks with a number of SIP servers and users or with logically or geographically separated segments. The proposed solution lies in automatic attack detection systems. The article covers the detection of VoIP attacks through a distributed network of nodes; the gathered data are then analyzed by an aggregation server with an artificial neural network. The artificial neural network is a multilayer perceptron trained with a set of collected attacks. Attack data can also be preprocessed and verified with a self-organizing map. The source data are gathered by the distributed network of detection nodes, each of which contains a honeypot application and a traffic monitoring mechanism. Aggregation of the data from each node creates the input for the neural network. Automatic classification on a centralized server with low false-positive detection reduces the cost of attack detection resources. The detection system uses a modular design for easy deployment in the final infrastructure. The centralized server collects and processes the detected traffic, and it also maintains all detection nodes.

  20. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data; other systems use raw data as the input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.

  1. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials are not able to fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. PMID:26547244

  2. Sunspot prediction using neural networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Baffes, Paul

    1990-01-01

    The earliest systematic observation of sunspot activity is known to have been made by the Chinese in 1382 during the Ming Dynasty (1368 to 1644), when spots on the sun were noticed by looking at the sun through thick forest fire smoke. Not until after the 18th century did sunspot levels become more than a source of wonderment and curiosity. Since 1834 reliable sunspot data has been collected by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Naval Observatory. Recently, considerable effort has been placed upon the study of the effects of sunspots on the ecosystem and the space environment. The efforts of the Artificial Intelligence Section of the Mission Planning and Analysis Division of the Johnson Space Center involving the prediction of sunspot activity using neural network technologies are described.

  3. Block-based neural networks.

    PubMed

    Moon, S W; Kong, S G

    2001-01-01

    This paper presents a novel block-based neural network (BBNN) model and the optimization of its structure and weights based on a genetic algorithm. The architecture of the BBNN consists of a 2D array of fundamental blocks with four variable input/output nodes and connection weights. Each block can have one of four different internal configurations depending on the structure settings. The BBNN model includes some restrictions, such as the 2D array and integer weights, in order to allow easier implementation with reconfigurable hardware such as field-programmable gate arrays (FPGAs). The structure and weights of the BBNN are encoded with bit strings which correspond to the configuration bits of the FPGA. The configuration bits are optimized globally using a genetic algorithm with 2D encoding and modified genetic operators. Simulations show that the optimized BBNN can solve engineering problems such as pattern classification and mobile robot control. PMID:18244385
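
    The sketch below illustrates the kind of bit-string genetic algorithm used to optimize the BBNN configuration bits, under the assumption of a trivial one-max fitness function standing in for decoding the bits into a block configuration and measuring task performance; the population size and the crossover and mutation settings are also assumptions.

        # Minimal sketch of a genetic algorithm over bit strings, the kind of search
        # used above to optimize the BBNN configuration bits. The one-max fitness
        # function stands in for evaluating a decoded block-based network; population
        # size, rates, and encoding length are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(5)

        n_bits, pop_size, n_gen, p_mut = 40, 30, 60, 0.02

        def fitness(bits):
            """Stand-in objective: in a real BBNN this would decode the bits into a
            block configuration plus integer weights and measure task performance."""
            return bits.sum()

        pop = rng.integers(0, 2, size=(pop_size, n_bits))
        for gen in range(n_gen):
            fit = np.array([fitness(ind) for ind in pop])
            # Tournament selection of parents.
            def pick():
                i, j = rng.integers(0, pop_size, size=2)
                return pop[i] if fit[i] >= fit[j] else pop[j]
            children = []
            for _ in range(pop_size):
                a, b = pick(), pick()
                cut = rng.integers(1, n_bits)                 # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = rng.random(n_bits) < p_mut             # bit-flip mutation
                child = np.where(flip, 1 - child, child)
                children.append(child)
            pop = np.array(children)

        print("best fitness:", max(fitness(ind) for ind in pop), "of", n_bits)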

  4. Wavelet differential neural network observer.

    PubMed

    Chairez, Isaac

    2009-09-01

    State estimation for uncertain systems affected by external noises is an important problem in control theory. This paper deals with a state observation problem when the dynamic model of a plant contains uncertainties or is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weights dynamics as well as for the mean squared estimation error. Two numeric examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations and their parameters are unknown. PMID:19674951

  5. Introduction to artificial neural networks.

    PubMed

    Grossi, Enzo; Buscema, Massimo

    2007-12-01

    The coupling of computer science and theoretical bases such as nonlinear dynamics and chaos theory allows the creation of 'intelligent' agents, such as artificial neural networks (ANNs), able to adapt themselves dynamically to problems of high complexity. ANNs are able to reproduce the dynamic interaction of multiple factors simultaneously, allowing the study of complexity; they can also draw conclusions on an individual basis and not as average trends. These tools can offer specific advantages with respect to classical statistical techniques. This article is designed to acquaint gastroenterologists with concepts and paradigms related to ANNs. The family of ANNs, when appropriately selected and used, permits the maximization of what can be derived from available data and from complex, dynamic, and multidimensional phenomena, which are often poorly predictable in the traditional 'cause and effect' philosophy. PMID:17998827

  6. LavaNet—Neural network development environment in a general mine planning package

    NASA Astrophysics Data System (ADS)

    Kapageridis, Ioannis Konstantinou; Triantafyllou, A. G.

    2011-04-01

    LavaNet is a series of scripts written in Perl that gives access to a neural network simulation environment inside a general mine planning package. A well-known and very popular neural network development environment, the Stuttgart Neural Network Simulator, is used as the base for the development of neural networks. LavaNet runs inside VULCAN™—a complete mine planning package with advanced database, modelling and visualisation capabilities. LavaNet takes advantage of VULCAN's Perl-based scripting environment, Lava, to bring all the benefits of neural network development and application to geologists, mining engineers and other users of the specific mine planning package. LavaNet enables easy development of neural network training data sets using information from any of the data and model structures available, such as block models and drillhole databases. Neural networks can be trained inside VULCAN™ and the results used to generate new models that can be visualised in 3D. Direct comparison of developed neural network models with conventional and geostatistical techniques is now possible within the same mine planning software package. LavaNet supports Radial Basis Function networks, Multi-Layer Perceptrons and Self-Organised Maps.

  7. Neural networks for damage identification

    SciTech Connect

    Paez, T.L.; Klenke, S.E.

    1997-11-01

    Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
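
    A minimal sketch of the kernel-density idea behind the second PNN described above follows: a Gaussian Parzen estimate of the undamaged-system feature density is built from training measurements, and new measurements whose density falls below a threshold are flagged. The two-dimensional synthetic features, the bandwidth, and the 5th-percentile threshold are assumptions of the illustration.

        # Sketch of the kernel-density idea behind the second PNN above: estimate the
        # density of response features from the undamaged system with a Gaussian
        # Parzen window, then flag new measurements whose density falls below a
        # threshold. The feature dimension, bandwidth, and threshold are assumptions.
        import numpy as np

        rng = np.random.default_rng(6)

        def parzen_density(x, samples, h=0.5):
            """Gaussian kernel density estimate at point x from training samples."""
            d = samples.shape[1]
            diff = (samples - x) / h
            k = np.exp(-0.5 * np.sum(diff**2, axis=1)) / ((2 * np.pi) ** (d / 2) * h**d)
            return k.mean()

        # Features measured on the undamaged system (synthetic stand-ins).
        undamaged = rng.normal(loc=0.0, scale=1.0, size=(300, 2))
        threshold = np.quantile([parzen_density(x, undamaged) for x in undamaged], 0.05)

        new_ok = rng.normal(0.0, 1.0, size=2)          # looks like the undamaged class
        new_damaged = np.array([4.0, 4.0])             # far from the training cloud

        for name, x in [("nominal", new_ok), ("shifted", new_damaged)]:
            flag = parzen_density(x, undamaged) < threshold
            print(name, "-> damage suspected" if flag else "-> consistent with undamaged")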

  8. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  9. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  10. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  11. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2002-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, online, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  12. Fuzzy neural networks for classification and detection of anomalies.

    PubMed

    Meneganti, M; Saviello, F S; Tagliaferri, R

    1998-01-01

    In this paper, a new learning algorithm for Simpson's fuzzy min-max neural network is presented. It overcomes some undesired properties of Simpson's model: specifically, the new model has neither thresholds that bound the dimensions of the hyperboxes nor sensitivity parameters. Our new algorithm improves the network performance: in fact, the classification result does not depend on the presentation order of the patterns in the training set, and at each step, the classification error in the training set cannot increase. The new neural model is particularly useful in classification problems, as shown by comparison with several fuzzy neural nets cited in the literature (Simpson's min-max model, fuzzy ARTMAP proposed by Carpenter, Grossberg et al. in 1992, adaptive fuzzy systems as introduced by Wang in his book) and the classical multilayer perceptron neural network with the backpropagation learning algorithm. The tests were executed on three different classification problems: the first one with two-dimensional synthetic data, the second one with realistic data generated by a simulator to find anomalies in the cooling system of a blast furnace, and the third one with real data for industrial diagnosis. The experiments were performed following recent evaluation criteria known in the literature, using the Microsoft Visual C++ development environment on personal computers. PMID:18255771
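
    For readers unfamiliar with the underlying model, the sketch below evaluates a simplified hyperbox membership function of the kind used in Simpson's fuzzy min-max network, the baseline that the new algorithm modifies; the particular functional form and the sensitivity parameter gamma are assumptions of this illustration, and the new algorithm described above in fact removes such sensitivity parameters.

        # Sketch of the hyperbox membership used in Simpson's fuzzy min-max model that
        # the new algorithm builds on: full membership inside the box spanned by the
        # min point v and max point w, decreasing with distance outside it. This is a
        # simplified rendering with an assumed sensitivity parameter gamma.
        import numpy as np

        def hyperbox_membership(x, v, w, gamma=4.0):
            """Membership of pattern x in the hyperbox [v, w], in [0, 1]."""
            above = np.maximum(0.0, x - w)      # how far x exceeds the max point
            below = np.maximum(0.0, v - x)      # how far x falls short of the min point
            per_dim = 1.0 - np.minimum(1.0, gamma * (above + below))
            return float(np.clip(per_dim, 0.0, 1.0).mean())

        v, w = np.array([0.2, 0.3]), np.array([0.6, 0.7])   # one hyperbox in the unit square
        for x in [np.array([0.4, 0.5]), np.array([0.65, 0.72]), np.array([0.95, 0.95])]:
            print(x, "->", round(hyperbox_membership(x, v, w), 3))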

  13. A Neural Network Model for the Tomographic Analysis of Irradiated Nuclear Fuel Rods

    SciTech Connect

    Craciunescu, Teddy

    2004-04-15

    A tomographic method based on a multilayer feed-forward artificial neural network is proposed for the reconstruction of gamma-radioactive fission product distribution in irradiated nuclear fuel rods. The quality of the method is investigated as compared to a conventional technique on experimental results concerning a Canada deuterium uranium reactor (CANDU)-type fuel rod irradiated in a TRIGA reactor.

  14. Combinatorial evolution of regression nodes in feedforward neural networks.

    PubMed

    Schmitz, Gregor P.J.; Aldrich, Chris

    1999-01-01

    A number of techniques exist with which neural network architectures such as multilayer perceptrons and radial basis function networks can be trained. These include backpropagation, k-means clustering and evolutionary algorithms. The latter method is particularly useful as it is able to avoid local optima in the search space and can optimise parameters for which no gradient information exists. Unfortunately, only moderately sized networks can be trained by this method, owing to the fact that evolutionary optimisation is very computationally intensive. In this paper a novel algorithm (CERN) is therefore proposed which uses a special form of combinatorial search to optimise groups of neural nodes. Oriented, ellipsoidal basis nodes optimised with CERN achieved significantly better accuracy with fewer nodes than spherical basis nodes optimised by k-means clustering. Multilayer perceptrons optimised by CERN were found to be as accurate as those trained by advanced gradient descent techniques. CERN was also found to be significantly more efficient than a conventional evolutionary algorithm that does not use a combinatorial search. PMID:12662726

  15. VLSI Cells Placement Using the Neural Networks

    SciTech Connect

    Azizi, Hacene; Zouaoui, Lamri; Mokhnache, Salah

    2008-06-12

    Artificial neural networks have been studied for several years. Their effectiveness makes it possible to expect high performance. The privileged fields of these techniques remain recognition and classification. Various optimization applications are also studied from the perspective of artificial neural networks, since they make it possible to apply distributed heuristic algorithms. In this article, a solution to the problem of placing the various cells during the realization of an integrated circuit is proposed using the Kohonen network.
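
    A rough sketch of Kohonen self-organizing map training of the kind referred to above is given below; the circuit netlist, connectivity costs, and placement constraints of the actual problem are not modelled, and the grid size, learning-rate schedule, and neighbourhood schedule are assumptions of the example.

        # Sketch of Kohonen self-organizing map training of the kind used above; the
        # actual circuit netlist and placement cost are not modelled here, the map is
        # simply fitted to random 2D points standing in for cell positions. Grid size,
        # learning-rate and neighbourhood schedules are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(7)

        grid = 8                                     # 8 x 8 map of neurons
        coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid)), -1).reshape(-1, 2)
        weights = rng.uniform(size=(grid * grid, 2)) # each neuron holds a 2D position

        data = rng.uniform(size=(500, 2))            # stand-in "cell" locations
        n_iter = 2000
        for t in range(n_iter):
            x = data[rng.integers(len(data))]
            frac = t / n_iter
            lr = 0.5 * (1 - frac)                    # decaying learning rate
            sigma = grid / 2 * (1 - frac) + 0.5      # shrinking neighbourhood radius
            winner = np.argmin(np.sum((weights - x) ** 2, axis=1))
            dist2 = np.sum((coords - coords[winner]) ** 2, axis=1)
            h = np.exp(-dist2 / (2 * sigma**2))      # neighbourhood function on the grid
            weights += lr * h[:, None] * (x - weights)

        print("map spans x in [%.2f, %.2f]" % (weights[:, 0].min(), weights[:, 0].max()))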

  16. Neural networks and orbit control in accelerators

    SciTech Connect

    Bozoki, E.; Friedman, A.

    1994-07-01

    An overview of the architecture, workings and training of Neural Networks is given. We stress the aspects which are important for the use of Neural Networks for orbit control in accelerators and storage rings, especially their ability to cope with the nonlinear behavior of the orbit response to 'kicks' and the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given.

  17. Stochastic cellular automata model of neural networks.

    PubMed

    Goltsev, A V; de Abreu, F V; Dorogovtsev, S N; Mendes, J F F

    2010-06-01

    We propose a stochastic dynamical model of noisy neural networks with complex architectures and discuss activation of neural networks by a stimulus, pacemakers, and spontaneous activity. This model has a complex phase diagram with self-organized active neural states, hybrid phase transitions, and a rich array of behaviors. We show that if spontaneous activity (noise) reaches a threshold level then global neural oscillations emerge. Stochastic resonance is a precursor of this dynamical phase transition. These oscillations are an intrinsic property of even small groups of 50 neurons. PMID:20866454

  18. Regional TEC mapping using neural networks

    NASA Astrophysics Data System (ADS)

    Yilmaz, A.; Akdogan, K. E.; Gurun, M.

    2009-06-01

    Characterization and modeling of ionospheric variability in space and time is very important for communications and navigation. To characterize the variations, the ionosphere should be monitored, and the sparsity of the measurements has to be compensated by interpolation algorithms. The total electron content (TEC) is a major parameter that can be used to obtain regional ionospheric maps. In this study, neural networks (NNs), specifically multilayer perceptrons (MLPs) and radial basis function networks (RBFN), are investigated for the merits of their nonlinear modeling capability. In order to assess the performance of MLP and RBFN structures with respect to mapping and ionospheric parameters, these algorithms are applied to synthetically generated TEC surfaces representing various ionospheric states. Synthetic TEC data are sampled homogenously and randomly for a varying number of data points. The reconstruction errors show that the performance improves significantly when homogenous sampling is preferred to random station distribution. The best MLP and RBFN structures for any possible realistic scenario are determined by examining the performance parameters for all possible cases. It is also observed that RBFN with local receptive fields relies more on the number of training data points. In contrast to RBFN, MLP as a global approximator depends strongly on ionospheric trends. Finally, chosen MLP and RBFN models are applied to a set of real GPS-TEC values obtained from central Europe, and their performances are compared with well known Global Ionospheric Maps produced by the International GNSS Service. The resolution and interpolation quality of the generated maps indicate that NNs offer a powerful and reliable alternative to the conventional TEC mapping algorithms.
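
    The sketch below illustrates the radial basis function side of the comparison under simplifying assumptions: Gaussian RBF output weights are fitted by least squares to scattered samples of a smooth synthetic surface standing in for a regional TEC map, and the map is then reconstructed on a grid. Centre placement, kernel width, and the synthetic surface are assumptions of the example, not the study's configuration.

        # Sketch of a radial basis function network of the kind compared above: fit
        # Gaussian RBF weights by least squares to scattered samples of a smooth
        # synthetic "TEC" surface, then evaluate on a regular grid. Centre placement,
        # kernel width, and the synthetic surface are assumptions of the sketch.
        import numpy as np

        rng = np.random.default_rng(8)

        def true_tec(lonlat):
            """Smooth synthetic surface standing in for a regional TEC map."""
            x, y = lonlat[:, 0], lonlat[:, 1]
            return 10 + 5 * np.sin(np.pi * x) * np.cos(np.pi * y)

        def rbf_design(X, centres, width):
            d2 = np.sum((X[:, None, :] - centres[None, :, :]) ** 2, axis=2)
            return np.exp(-d2 / (2 * width**2))

        # Scattered "station" samples over the unit square.
        X_obs = rng.uniform(size=(80, 2))
        y_obs = true_tec(X_obs)

        centres = rng.uniform(size=(25, 2))
        Phi = rbf_design(X_obs, centres, width=0.25)
        w, *_ = np.linalg.lstsq(Phi, y_obs, rcond=None)   # linear output weights

        # Reconstruct the map on a grid and report the error against the synthetic truth.
        g = np.linspace(0, 1, 30)
        grid = np.array([(a, b) for a in g for b in g])
        y_hat = rbf_design(grid, centres, width=0.25) @ w
        rms = float(np.sqrt(np.mean((y_hat - true_tec(grid)) ** 2)))
        print("grid rms error:", round(rms, 3))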

  19. Optical implementation of a feature-based neural network with application to automatic target recognition

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1993-01-01

    An optical neural network based on the neocognitron paradigm is introduced. A novel aspect of the architecture design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by feeding back the output of the feature correlator iteratively to the input spatial light modulator and by updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intraclass fault tolerance and interclass discrimination is achieved. A detailed system description is provided. An experimental demonstration of a two-layer neural network for space-object discrimination is also presented.

  20. Automatic target recognition using a feature-based optical neural network

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1992-01-01

    An optical neural network based upon the Neocognitron paradigm (K. Fukushima et al. 1983) is introduced. A novel aspect of the architectural design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intra-class fault tolerance and inter-class discrimination is achieved. A detailed system description is provided. Experimental demonstration of a two-layer neural network for space objects discrimination is also presented.

  1. Experimental investigation of active vibration control using neural networks and piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    Jha, Ratneshwar; Rower, Jacob

    2002-02-01

    The use of neural networks for identification and control of smart structures is investigated experimentally. Piezoelectric actuators are employed to suppress the vibrations of a cantilevered plate subject to impulse, sine wave and band-limited white noise disturbances. The neural networks used are multilayer perceptrons trained with error backpropagation. Validation studies show that the identifier predicts the system dynamics accurately. The controller is trained adaptively with the help of the neural identifier. Experimental results demonstrate excellent closed-loop performance and robustness of the neurocontroller.

  2. Neural network fusion capabilities for efficient implementation of tracking algorithms

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Amoozegar, Farid

    1996-05-01

    The ability to efficiently fuse information of different forms for facilitating intelligent decision-making is one of the major capabilities of trained multilayer neural networks that has come to be recognized in recent times. While development of innovative adaptive control algorithms for nonlinear dynamical plants which attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. In this paper we describe the capabilities and functionality of neural network algorithms for data fusion and implementation of nonlinear tracking filters. For a discussion of details and for serving as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. Such an approach results in an overall nonlinear tracking filter which has several advantages over the popular efforts at designing nonlinear estimation algorithms for tracking applications, the principal one being the reduction of mathematical and computational complexities. A system architecture that efficiently integrates the processing capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described in this paper.

  3. Neural network regulation driven by autonomous neural firings

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  4. Data compression using artificial neural networks

    SciTech Connect

    Watkins, B.E.

    1991-09-01

    This thesis investigates the application of artificial neural networks for the compression of image data. An algorithm is developed using the competitive learning paradigm which takes advantage of the parallel processing and classification capability of neural networks to produce an efficient implementation of vector quantization. Multi-stage, tree-searched, and classification vector quantization codebook designs are adapted to the neural network design to reduce the computational cost and hardware requirements. The results show that the new algorithm provides a substantial reduction in computational costs and an improvement in performance.
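
    A minimal sketch of competitive-learning vector quantization, the core idea described above, follows; the synthetic 2 x 2 image blocks, the 16-entry codebook, and the learning-rate schedule are assumptions of the illustration rather than the thesis's multi-stage or tree-searched designs.

        # Sketch of competitive-learning vector quantization as described above: a
        # codebook of vectors is adapted by moving the winning code vector toward each
        # training block. The synthetic image blocks, codebook size, and learning-rate
        # schedule are assumptions of the sketch.
        import numpy as np

        rng = np.random.default_rng(9)

        # Synthetic 4-pixel "image blocks" (2 x 2 patches flattened), roughly smooth.
        base = rng.uniform(size=(5000, 1))
        blocks = np.clip(base + 0.05 * rng.normal(size=(5000, 4)), 0, 1)

        codebook = rng.uniform(size=(16, 4))        # 16-entry codebook -> 4 bits per block
        n_iter = 20000
        for t in range(n_iter):
            x = blocks[rng.integers(len(blocks))]
            winner = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            lr = 0.2 * (1 - t / n_iter)
            codebook[winner] += lr * (x - codebook[winner])   # move winner toward input

        # Encode/decode and report distortion: 4 bits per block vs 4 x 8 bits raw.
        idx = np.argmin(((blocks[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
        mse = float(np.mean((codebook[idx] - blocks) ** 2))
        print("codebook MSE per pixel:", round(mse, 5))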

  5. Description of interatomic interactions with neural networks

    NASA Astrophysics Data System (ADS)

    Hajinazar, Samad; Shao, Junping; Kolmogorov, Aleksey N.

    Neural networks are a promising alternative to traditional classical potentials for describing interatomic interactions. Recent research in the field has demonstrated how arbitrary atomic environments can be represented with sets of general functions which serve as an input for the machine learning tool. We have implemented a neural network formalism in the MAISE package and developed a protocol for automated generation of accurate models for multi-component systems. Our tests illustrate the performance of neural networks and known classical potentials for a range of chemical compositions and atomic configurations. Supported by NSF Grant DMR-1410514.

  6. Neural network with formed dynamics of activity

    SciTech Connect

    Dunin-Barkovskii, V.L.; Osovets, N.B.

    1995-03-01

    The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results obtained for interpretation of neurophysiological data and in neuroinformatics systems are discussed.

  7. Multispectral-image fusion using neural networks

    NASA Astrophysics Data System (ADS)

    Kagel, Joseph H.; Platt, C. A.; Donaven, T. W.; Samstad, Eric A.

    1990-08-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulation results and a description of the prototype system are presented.

  8. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  9. Stock market index prediction using neural networks

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index have been used as a benchmark in our experiments, where Radial Basis Function based neural networks were designed to model the index over the period from January 1988 to December 1992. Notable success has been achieved, with the proposed model producing over 90% prediction accuracy for monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrated that the Radial Basis Function neural network represents an excellent candidate for predicting stock market indices.

  10. Detection of systolic ejection click using time growing neural network.

    PubMed

    Gharehbaghi, Arash; Dutoit, Thierry; Ask, Per; Sörnmo, Leif

    2014-04-01

    In this paper, we present a novel neural network for classification of short-duration heart sounds: the time growing neural network (TGNN). The input to the network is the spectral power in adjacent frequency bands as computed in time windows of growing length. Children with heart systolic ejection click (SEC) and normal children are the two groups subjected to analysis. The performance of the TGNN is compared to that of a time delay neural network (TDNN) and a multi-layer perceptron (MLP), using training and test datasets of similar sizes with a total of 614 normal and abnormal cardiac cycles. From the test dataset, the classification rate/sensitivity is found to be 97.0%/98.1% for the TGNN, 85.1%/76.4% for the TDNN, and 92.7%/85.7% for the MLP. The results show that the TGNN performs better than the TDNN and the MLP when frequency band power is used as the classifier input. The TGNN is also found to exhibit better immunity to noise. PMID:24613501
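
    As a hedged illustration of the "time growing" input representation described above, the sketch below computes band powers over windows that all start at the sound onset and grow in length; the synthetic signal, sampling rate, band edges, and window schedule are assumptions of the example, not the paper's settings.

        # Sketch of the "time growing" input representation described above: spectral
        # power in a few adjacent frequency bands is computed over windows that all
        # start at the sound onset and grow in length. The synthetic signal, band
        # edges, and window schedule are assumptions of this illustration.
        import numpy as np

        rng = np.random.default_rng(10)

        fs = 4000                                        # sampling rate (Hz), assumed
        t = np.arange(0, 0.2, 1 / fs)                    # a 200 ms cardiac-sound-like segment
        signal = np.sin(2 * np.pi * 120 * t) * np.exp(-20 * t) + 0.05 * rng.normal(size=t.size)

        band_edges = [(20, 100), (100, 200), (200, 400)]  # adjacent frequency bands (Hz)
        window_ends = [0.05, 0.10, 0.15, 0.20]            # growing windows from the onset (s)

        features = []
        for end in window_ends:
            seg = signal[: int(end * fs)]
            spectrum = np.abs(np.fft.rfft(seg)) ** 2
            freqs = np.fft.rfftfreq(seg.size, d=1 / fs)
            for lo, hi in band_edges:
                features.append(spectrum[(freqs >= lo) & (freqs < hi)].mean())

        print("feature vector length:", len(features))    # bands x growing windows -> network input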

  11. A neural network prototyping package within IRAF

    NASA Technical Reports Server (NTRS)

    Bazell, D.; Bankman, I.

    1992-01-01

    We outline our plans for incorporating a Neural Network Prototyping Package into the IRAF environment. The package we are developing will allow the user to choose between different types of networks and to specify the details of the particular architecture chosen. Neural networks consist of a highly interconnected set of simple processing units. The strengths of the connections between units are determined by weights which are adaptively set as the network 'learns'. In some cases, learning can be a separate phase of the user cycle of the network while in other cases the network learns continuously. Neural networks have been found to be very useful in pattern recognition and image processing applications. They can form very general 'decision boundaries' to differentiate between objects in pattern space and they can be used for associative recall of patterns based on partial cues and for adaptive filtering. We discuss the different architectures we plan to use and give examples of what they can do.

  12. Evaluation of the efficiency of artificial neural networks for genetic value prediction.

    PubMed

    Silva, G N; Tomaz, R S; Sant'Anna, I C; Carneiro, V Q; Cruz, C D; Nascimento, M

    2016-01-01

    Artificial neural networks have shown great potential when applied to breeding programs. In this study, we propose the use of artificial neural networks as a viable alternative to conventional prediction methods. We conduct a thorough evaluation of the efficiency of these networks with respect to the prediction of breeding values. Therefore, we considered eight simulated scenarios, and for the purpose of genetic value prediction, seven statistical parameters in addition to the phenotypic mean in a network designed as a multilayer perceptron. After an evaluation of different network configurations, the results demonstrated the superiority of neural networks compared to estimation procedures based on linear models, and indicated high predictive accuracy and network efficiency. PMID:27051007

  13. Nonequilibrium landscape theory of neural networks

    PubMed Central

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451

  14. Results of the neural network investigation

    NASA Astrophysics Data System (ADS)

    Uvanni, Lee A.

    1992-04-01

    Rome Laboratory has designed and implemented a neural network based automatic target recognition (ATR) system under contract F30602-89-C-0079 with Booz, Allen & Hamilton (BAH), Inc., of Arlington, Virginia. The system utilizes a combination of neural network paradigms and conventional image processing techniques in a parallel environment on the IE-2000 SUN 4 workstation at Rome Laboratory. The IE-2000 workstation was designed to assist the Air Force and Department of Defense to derive the needs for image exploitation and image exploitation support for the late 1990s - year 2000 time frame. The IE-2000 consists of a developmental testbed and an applications testbed, both with the goal of solving real world problems on real-world facilities for image exploitation. To fully exploit the parallel nature of neural networks, 18 Inmos T800 transputers were utilized, in an attempt to provide a near-linear speed-up for each subsystem component implemented on them. The initial design contained three well-known neural network paradigms, each modified by BAH to some extent: the Selective Attention Neocognitron (SAN), the Binary Contour System/Feature Contour System (BCS/FCS), and Adaptive Resonance Theory 2 (ART-2), and one neural network designed by BAH called the Image Variance Exploitation Network (IVEN). Through rapid prototyping, the initial system evolved into a completely different final design, called the Neural Network Image Exploitation System (NNIES), where the final system consists of two basic components: the Double Variance (DV) layer and the Multiple Object Detection And Location System (MODALS). A rapid prototyping neural network CAD Tool, designed by Booz, Allen & Hamilton, was used to rapidly build and emulate the neural network paradigms. Evaluation of the completed ATR system included probability of detections and probability of false alarms among other measures.

  15. Parameter extraction with neural networks

    NASA Astrophysics Data System (ADS)

    Cazzanti, Luca; Khan, Mumit; Cerrina, Franco

    1998-06-01

    In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This problem is particularly severe, because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs

  16. Neural networks for the optimization of crude oil blending.

    PubMed

    Yu, Wen; Morales, América

    2005-10-01

    Crude oil blending is an important unit in the petroleum refining industry. Many blend automation systems use a real-time optimizer (RTO), which applies current process information to update the model and predict the optimal operating policy. The key units of the conventional RTO are on-line analyzers. Sometimes oil fields cannot use these analyzers. In this paper, we propose an off-line optimization technique to overcome the main drawback of RTO. We use historical data to approximate the output of the on-line analyzers; the desired optimal inlet flow rates are then calculated by the optimization technique. After this off-line optimization, the inlet flow rates are used for on-line control, for example PID control, which forces the flow rates to follow the desired inlet flow rates. Neural networks are applied to model the blending process from the historical data. The new optimization is carried out via the neural model. The contributions of this paper are: (1) Stable learning for the discrete-time multilayer neural network is proposed. (2) Sensitivity analysis of the neural optimization is given. (3) Real data from an oil field are used to show the effectiveness of the proposed method. PMID:16278942
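
    As a rough illustration of the off-line idea (not the authors' implementation), the sketch below trains a neural model of blend quality on made-up historical data and then optimizes the inlet flow rates against that model under a total-throughput constraint; the quality relation, target, and constraint values are invented.

        # Hedged sketch of off-line optimization over a neural blend model.
        # Historical data, the quality target, and the constraints are illustrative.
        import numpy as np
        from scipy.optimize import minimize
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Pretend history: inlet flow rates of three crudes -> blend quality index.
        flows_hist = rng.uniform(10.0, 100.0, size=(500, 3))
        quality_hist = 100.0 * (flows_hist @ np.array([0.2, 0.5, 0.3])) / flows_hist.sum(axis=1)

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=1)
        model.fit(flows_hist, quality_hist)

        # Off-line step: find flow rates whose predicted quality hits the target.
        target_quality, total_rate = 38.0, 150.0
        objective = lambda f: (model.predict(f.reshape(1, -1))[0] - target_quality) ** 2
        total = {"type": "eq", "fun": lambda f: f.sum() - total_rate}
        res = minimize(objective, x0=np.full(3, 50.0), bounds=[(10, 100)] * 3,
                       constraints=[total], method="SLSQP")
        print("desired inlet flow rates (setpoints for the on-line PID loops):", res.x)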

  17. Imbibition well stimulation via neural network design

    DOEpatents

    Weiss, William

    2007-08-14

    A method for stimulation of hydrocarbon production via imbibition by utilization of surfactants. The method includes use of fuzzy logic and neural network architecture constructs to determine surfactant use.

  18. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
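
    The auto-associative approach can be sketched in a few lines: a network with a narrow bottleneck is trained to reproduce a redundant sensor set, and its reconstruction is then read off as an estimate for a failed channel. The sensor relationships below are fabricated, and a plain scikit-learn MLP stands in for the network used in the paper.

        # Hedged sketch: auto-associative (bottleneck) network over redundant sensors,
        # whose reconstruction is used to estimate a single failed sensor.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)

        # Four redundant sensors driven by one underlying engine state (made up).
        state = rng.uniform(0.0, 1.0, size=(1000, 1))
        sensors = np.hstack([state, 2.0 * state, state**2, np.sqrt(state)])
        sensors += 0.01 * rng.standard_normal(sensors.shape)

        # The 1-unit bottleneck forces a nonlinear principal-component-like encoding.
        auto = MLPRegressor(hidden_layer_sizes=(8, 1, 8), max_iter=5000, random_state=2)
        auto.fit(sensors, sensors)

        # Simulate a failed sensor and read its value off the reconstruction
        # (in practice the failed input is usually substituted or iterated on).
        reading = sensors[0].copy()
        reading[2] = 0.0                                  # stuck/failed channel
        estimate = auto.predict(reading.reshape(1, -1))[0][2]
        print("true value:", sensors[0][2], "network estimate:", estimate)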

  19. Constructive Autoassociative Neural Network for Facial Recognition

    PubMed Central

    Fernandes, Bruno J. T.; Cavalcanti, George D. C.; Ren, Tsang I.

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature. PMID:25542018

  20. Radar signal categorization using a neural network

    NASA Technical Reports Server (NTRS)

    Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.

    1991-01-01

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.

  1. Sign Language Recognition System using Neural Network for Digital Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Vargas, Lorena P.; Barba, Leiner; Torres, C. O.; Mattos, L.

    2011-01-01

    This work presents an image pattern recognition system using a neural network for the identification of sign language for deaf people. The system has several stored images that show the specific symbols in this kind of language, which are employed to teach a multilayer neural network using a back propagation algorithm. Initially, the images are processed to adapt them and to improve the discrimination performance of the network; this preprocessing includes filtering, noise reduction and elimination algorithms as well as edge detection. The system is evaluated using the signs without including movement in their representation.

  2. Using neural networks in software repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  3. Limitations of opto-electronic neural networks

    NASA Technical Reports Server (NTRS)

    Yu, Jeffrey; Johnston, Alan; Psaltis, Demetri; Brady, David

    1989-01-01

    Consideration is given to the limitations of implementing neurons, weights, and connections in neural networks for electronics and optics. It is shown that the advantages of each technology are utilized when electronically fabricated neurons are included and a combination of optics and electronics is employed for the weights and connections. The relationship between the types of neural networks being constructed and the choice of technologies to implement the weights and connections is examined.

  4. Neural network simulations of the nervous system.

    PubMed

    van Leeuwen, J L

    1990-01-01

    Present knowledge of brain mechanisms is mainly based on anatomical and physiological studies. Such studies are however insufficient to understand the information processing of the brain. The present new focus on neural network studies is the most likely candidate to fill this gap. The present paper reviews some of the history and current status of neural network studies. It signals some of the essential problems for which answers have to be found before substantial progress in the field can be made. PMID:2245130

  5. Neural-Network Controller For Vibration Suppression

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Wang, Shyh Jong

    1995-01-01

    Neural-network-based adaptive-control system proposed for vibration suppression of flexible space structures. Controller features three-layer neural network and utilizes output feedback. Measurements generated by various sensors on structure. Feed forward path also included to speed up response in case plant exhibits predominantly linear dynamic behavior. System applicable to single-input single-output systems. Work extended to multiple-input multiple-output systems as well.

  6. Optimization neural network for solving flow problems.

    PubMed

    Perfetti, R

    1995-01-01

    This paper describes a neural network for solving flow problems, which are of interest in many areas of application, such as fuel, hydro, and electric power scheduling. The neural network consists of two layers: a hidden layer and an output layer. The hidden units correspond to the nodes of the flow graph. The output units represent the branch variables. The network has a linear order of complexity, is easily programmable, and is suited for analog very large scale integration (VLSI) realization. The functionality of the proposed network is illustrated by a simulation example concerning the maximal flow problem. PMID:18263420

  7. A neural network simulation package in CLIPS

    NASA Technical Reports Server (NTRS)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique of using rule based systems in conjunction with neural networks to solve complex problems. The system provides a toolkit for an integrated use of the two techniques and is also extendible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  8. A unified neural-network-based speaker localization technique.

    PubMed

    Arslan, G; Sakarya, F A

    2000-01-01

    Locating and tracking a speaker in real time using microphone arrays is important in many applications such as hands-free video conferencing, speech processing in large rooms, and acoustic echo cancellation. A speaker can be moving from the far field to the near field of the array, or vice versa. Many neural-network-based localization techniques exist, but they are applicable to either far-field or near-field sources, and are computationally intensive for real-time speaker localization applications because of the wide-band nature of the speech. We propose a unified neural-network-based source localization technique, which is simultaneously applicable to wide-band and narrow-band signal sources that are in the far field or near field of a microphone array. The technique exploits a multilayer perceptron feedforward neural network structure and forms the feature vectors by computing the normalized instantaneous cross-power spectrum samples between adjacent pairs of sensors. Simulation results indicate that our technique is able to locate a source with an absolute error of less than 3.5 degrees at a signal-to-noise ratio of 20 dB and a sampling rate of 8000 Hz at each sensor. PMID:18249826
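
    The feature construction described above (normalized instantaneous cross-power spectra between adjacent sensor pairs) can be sketched as follows; the array geometry, source bearing, and sampling choices are simulated stand-ins, and the resulting vector would be fed to the multilayer perceptron mentioned in the abstract.

        # Hedged sketch: normalized cross-power spectrum features for a 4-microphone array.
        import numpy as np

        fs, c, n = 8000, 343.0, 1024                      # sample rate, speed of sound, FFT length
        mic_x = np.array([0.0, 0.05, 0.10, 0.15])         # linear array positions (m), invented
        bearing = np.deg2rad(30.0)                        # far-field source direction, invented

        t = np.arange(n) / fs
        source = np.random.default_rng(3).standard_normal(n)
        delays = mic_x * np.sin(bearing) / c
        mics = [np.interp(t - d, t, source) for d in delays]   # delayed copy at each microphone

        features = []
        for a, b in zip(mics[:-1], mics[1:]):             # adjacent sensor pairs
            A, B = np.fft.rfft(a), np.fft.rfft(b)
            cross = A * np.conj(B)                        # instantaneous cross-power spectrum
            cross /= np.abs(cross) + 1e-12                # normalize out the magnitude
            features.extend([cross.real, cross.imag])
        feature_vector = np.concatenate(features)          # input vector for the MLP localizer
        print(feature_vector.shape)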

  10. Neural network identification of power system transfer functions

    SciTech Connect

    Gillard, D.M.; Bollinger, K.E.

    1996-03-01

    This paper describes an investigation into the use of a multilayered neural network for measuring the transfer function of a power system for use in power system stabilizer (PSS) tuning and assessing PSS damping. The objectives are to quickly and accurately measure the transfer function relating the electric power output to the AVR PSS reference voltage input of a system with the plant operating under normal conditions. In addition, the excitation signal used in the identification procedure is such that it will not adversely affect the terminal voltage or the system frequency. This research emphasized the development of a neural network that is easily trained and robust to changing system conditions. Performance studies of the trained neural network are described. Simulation studies suggest the practical feasibility of the algorithm as a stand-alone identification package and as a portion of a self-tuning algorithm requiring identification in the strategy. The same technique applied to a forward modeling scheme can be used to test the damping contribution from different control strategies.

  11. Neural network application to aircraft control system design

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Garg, Sanjay; Merrill, Walter C.

    1991-01-01

    The feasibility of using artificial neural networks as control systems for modern, complex aerospace vehicles is investigated via an example aircraft control design study. The problem considered is that of designing a controller for an integrated airframe/propulsion longitudinal dynamics model of a modern fighter aircraft to provide independent control of pitch rate and airspeed responses to pilot command inputs. An explicit model following controller using H infinity control design techniques is first designed to gain insight into the control problem as well as to provide a baseline for evaluation of the neurocontroller. Using the model of the desired dynamics as a command generator, a multilayer feedforward neural network is trained to control the vehicle model within the physical limitations of the actuator dynamics. This is achieved by minimizing an objective function which is a weighted sum of tracking errors and control input commands and rates. To gain insight in the neurocontrol, linearized representations of the nonlinear neurocontroller are analyzed along a commanded trajectory. Linear robustness analysis tools are then applied to the linearized neurocontroller models and to the baseline H infinity based controller. Future areas of research are identified to enhance the practical applicability of neural networks to flight control design.

  12. Neural Network Model For Fast Learning And Retrieval

    NASA Astrophysics Data System (ADS)

    Arsenault, Henri H.; Macukow, Bohdan

    1989-05-01

    An approach to learning in a multilayer neural network is presented. The proposed network learns by creating interconnections between the input layer and the intermediate layer. In one of the new storage prescriptions proposed, interconnections are excitatory (positive) only and the weights depend on the stored patterns. In the intermediate layer each mother cell is responsible for one stored pattern. Mutually interconnected neurons in the intermediate layer perform a winner-take-all operation, taking into account correlations between stored vectors. The performance of networks using this interconnection prescription is compared with two previously proposed schemes, one using inhibitory connections at the output and one using all-or-nothing interconnections. The network can be used as a content-addressable memory or as a symbolic substitution system that yields an arbitrarily defined output for any input. The training of a model to perform Boolean logical operations is also described. Computer simulations using the network as an autoassociative content-addressable memory show the model to be efficient. Content-addressable associative memories and neural logic modules can be combined to perform logic operations on highly corrupted data.

  13. Speech synthesis with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Weijters, Ton; Thole, Johan

    1992-10-01

    The application of neural nets to speech synthesis is considered. In speech synthesis, the main efforts so far have been to master the grapheme to phoneme conversion. During this conversion symbols (graphemes) are converted into other symbols (phonemes). Neural networks, however, are especially competitive for tasks in which complex nonlinear transformations are needed and sufficient domain specific knowledge is not available. The conversion of text into speech parameters appropriate as input for a speech generator seems such a task. Results of a pilot study in which an attempt is made to train a neural network for this conversion are presented.

  14. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  15. Evaluation of convolutional neural networks for visual recognition.

    PubMed

    Nebauer, C

    1998-01-01

    Convolutional neural networks provide an efficient method to constrain the complexity of feedforward neural networks by weight sharing and restriction to local connections. This network topology has been applied in particular to image classification when sophisticated preprocessing is to be avoided and raw images are to be classified directly. In this paper two variations of convolutional networks--the neocognitron and a modification of the neocognitron--are compared with classifiers based on fully connected feedforward layers (i.e., multilayer perceptron, nearest neighbor classifier, auto-encoding network) with respect to their visual recognition performance. Besides the original neocognitron, a modification of the neocognitron is proposed which combines perceptron-type neurons with the localized network structure of the neocognitron. Instead of training convolutional networks by time-consuming error backpropagation, in this work a modular procedure is applied whereby layers are trained sequentially from the input to the output layer in order to recognize features of increasing complexity. For a quantitative experimental comparison with standard classifiers two very different recognition tasks have been chosen: handwritten digit recognition and face recognition. In the first example on handwritten digit recognition the generalization of convolutional networks is compared to that of fully connected networks. In several experiments the influence of variations of position, size, and orientation of digits is determined and the relation between training sample size and validation error is observed. In the second example recognition of human faces is investigated under constrained and variable conditions with respect to face orientation and illumination, and the limitations of convolutional networks are discussed. PMID:18252491

  16. Applying backpropagation neural network in the control of medullary reflex pattern

    NASA Astrophysics Data System (ADS)

    Dalcin, Bruno Luiz Galluzzi; Cruz, Frederico Alan de Oliveira; Cortez, Célia Martins; Passos, Emmanuel Lopes

    2015-12-01

    We fed into an artificial neural network (ANN) the values of a data matrix built from the results of simulations performed with the model for the control circuit of the spinal reflex presented by Dalcin et al. (2005). A standard multi-layered feed-forward backpropagation network was used to train the ANNs. Results showed that the backpropagation ANN architecture supported the specific classificatory requirements of the study.

  17. The H1 neural network trigger project

    NASA Astrophysics Data System (ADS)

    Kiesling, C.; Denby, B.; Fent, J.; Fröchtenicht, W.; Garda, P.; Granado, B.; Grindhammer, G.; Haberer, W.; Janauschek, L.; Kobler, T.; Koblitz, B.; Nellen, G.; Prevotet, J.-C.; Schmidt, S.; Tzamariudaki, E.; Udluft, S.

    2001-08-01

    We present a short overview of neuromorphic hardware and some of the physics projects making use of such devices. As a concrete example we describe an innovative project within the H1-Experiment at the electron-proton collider HERA, instrumenting hardwired neural networks as pattern recognition machines to discriminate between wanted physics and uninteresting background at the trigger level. The decision time of the system is less than 20 microseconds, typical for a modern second level trigger. The neural trigger has been successfully running for the past four years and has produced new physics results from H1 that were unobtainable so far with other triggering schemes. We describe the concepts and the technical realization of the neural network trigger system, present the most important physics results, and motivate an upgrade of the system for the future high luminosity running at HERA. The upgrade concentrates on "intelligent preprocessing" of the neural inputs, which helps to strongly improve the networks' discrimination power.

  18. Optical neural stimulation modeling on degenerative neocortical neural networks

    NASA Astrophysics Data System (ADS)

    Zverev, M.; Fanjul-Vélez, F.; Salas-García, I.; Arce-Diego, J. L.

    2015-07-01

    Neurodegenerative diseases usually appear at advanced age. Medical advances make people live longer and, as a consequence, the number of neurodegenerative diseases continuously grows. There is still no cure for these diseases, but several brain stimulation techniques have been proposed to improve patients' condition. One of them is Optical Neural Stimulation (ONS), which is based on the application of optical radiation over specific brain regions. The outer cerebral zones can be noninvasively stimulated, without the common drawbacks associated with surgical procedures. This work focuses on the analysis of ONS effects in stimulated neurons to determine their influence on neuronal activity. For this purpose a neural network model has been employed. The results show the neural network behavior when the stimulation is provided by means of different optical radiation sources and constitute a first approach to adjusting the optical light source parameters to stimulate specific neocortical areas.

  19. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  1. Neural networks forecast in small catchments with transfer of network parameters

    NASA Astrophysics Data System (ADS)

    Maca, P.; Havlicek, V.; Hermanovsky, M.; Horacek, S.; Pech, P.

    2009-04-01

    This contribution deals with a neural network approach for short-term forecasts on small catchments. The applied methodology is based on the theory of the multilayer perceptron (MLP); a feed-forward neural network with a back-propagation optimization procedure was tested in order to explore the possibilities of transferring parameters between different catchments. Supervised optimization of network parameters and structure was investigated. A software tool was created for these research and operative purposes. The hourly discharges and rainfall data of real flood events served as input to the MLP. Seven catchments with areas ranging from 10 to 250 square kilometres, situated in the east part of the Czech Republic, were selected. The input data were normalized by a parametric method. Variable configurations of the neural network were tested in a number of modes represented by different combinations of learning and testing data sets. The analysis focuses on the ability of the model to forecast flood events with different peak discharge magnitudes. This should be achieved in both application steps - MLP learning and testing within a given catchment, and in the step of parameter transfer of a well-learned network to another catchment. The length of prediction ranged from one hour to six hours ahead. The results showed that the model is capable of providing satisfying short-term discharge forecasts for most of the studied cases, including successful parameter transfer among different catchments. This was accomplished by optimization of the parameters which determine not only the structure and behaviour of the applied network but also the transformation of the input data.

  2. Fuzzy logic and neural networks

    SciTech Connect

    Loos, J.R.

    1994-11-01

    Combine fuzzy logic's fuzzy sets, fuzzy operators, fuzzy inference, and fuzzy rules - like defuzzification - with neural networks and you can arrive at very unfuzzy real-time control. Fuzzy logic, cursed with a very whimsical title, simply means multivalued logic, which includes not only the conventional two-valued (true/false) crisp logic, but also the logic of three or more values. This means one can assign logic values of true, false, and somewhere in between. This is where fuzziness comes in. Multi-valued logic avoids the black-and-white, all-or-nothing assignment of true or false to an assertion. Instead, it permits the assignment of shades of gray. When assigning a value of true or false to an assertion, the numbers typically used are "1" or "0". This is the case for programmed systems. If "0" means "false" and "1" means "true," then "shades of gray" are any numbers between 0 and 1. Therefore, "nearly true" may be represented by 0.8 or 0.9, "nearly false" may be represented by 0.1 or 0.2, and "your guess is as good as mine" may be represented by 0.5. The flexibility available to one is limitless. One can associate any meaning, such as "nearly true", to any value of any granularity, such as 0.9999. 2 figs.

  3. Cardiac Arrhythmias Classification Method Based on MUSIC, Morphological Descriptors, and Neural Network

    NASA Astrophysics Data System (ADS)

    Naghsh-Nilchi, Ahmad R.; Kadkhodamohammadi, A. Rahim

    2009-12-01

    An electrocardiogram (ECG) beat classification scheme based on the multiple signal classification (MUSIC) algorithm, morphological descriptors, and neural networks is proposed for discriminating nine ECG beat types. These are normal, fusion of ventricular and normal, fusion of paced and normal, left bundle branch block, right bundle branch block, premature ventricular contraction, atrial premature contraction, paced beat, and ventricular flutter. ECG signal samples from the MIT-BIH arrhythmia database are used to evaluate the scheme. The MUSIC algorithm is used to calculate the pseudospectrum of the ECG signals. The low-frequency samples are picked because they carry the most valuable heartbeat information. These samples, along with two morphological descriptors, which deliver the characteristics and features of all parts of the heart, form an input feature vector. This vector is used for the initial training of a classifier neural network. The neural network is designed to have nine outputs corresponding to the nine beat types. Two neural network schemes, namely a multilayered perceptron (MLP) neural network and a probabilistic neural network (PNN), are employed. The experimental results achieved a promising accuracy of 99.03% for classifying the beat types using the MLP neural network. In addition, our scheme recognizes the NORMAL class with 100% accuracy and never misclassifies any other classes as NORMAL.
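
    For readers unfamiliar with MUSIC, the pseudospectrum computation can be sketched directly from a signal segment: form a covariance matrix from overlapping snapshots, split off the noise subspace, and scan candidate frequencies. The model order, subspace dimension, and toy beat below are illustrative choices, not those of the paper.

        # Hedged sketch of a MUSIC pseudospectrum whose low-frequency samples could
        # serve as classifier features; all sizes here are illustrative.
        import numpy as np

        def music_pseudospectrum(x, order=10, signal_dim=4, n_freqs=128):
            # Covariance matrix from overlapping snapshots of the segment.
            snapshots = np.lib.stride_tricks.sliding_window_view(x, order)
            R = snapshots.T @ snapshots / snapshots.shape[0]
            eigvals, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
            noise_subspace = eigvecs[:, : order - signal_dim]
            freqs = np.linspace(0.0, 0.5, n_freqs)           # normalized frequency
            spectrum = np.empty(n_freqs)
            for i, f in enumerate(freqs):
                steering = np.exp(-2j * np.pi * f * np.arange(order))
                denom = np.linalg.norm(noise_subspace.conj().T @ steering) ** 2
                spectrum[i] = 1.0 / (denom + 1e-12)
            return freqs, spectrum

        beat = np.sin(2 * np.pi * 0.05 * np.arange(300)) + 0.1 * np.random.randn(300)
        freqs, pseudo = music_pseudospectrum(beat)
        features = pseudo[:16]   # low-frequency samples, e.g. part of an MLP/PNN input vector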

  4. Combining neural networks and genetic algorithms for hydrological flow forecasting

    NASA Astrophysics Data System (ADS)

    Neruda, Roman; Srejber, Jan; Neruda, Martin; Pascenko, Petr

    2010-05-01

    We present a neural network approach to rainfall-runoff modeling for small river basins based on several time series of hourly measured data. Different neural networks are considered for short-term runoff predictions (from one to six hours lead time) based on runoff and rainfall data observed in previous time steps. Correlation analysis shows that runoff data, short-time rainfall history, and aggregated API values are the most significant data for the prediction. Neural models of multilayer perceptron and radial basis function networks with different numbers of units are used and compared with more traditional linear time series predictors. Out of a possible 48 hours of relevant history of all the input variables, the most important ones are selected by means of input filters created by a genetic algorithm. The genetic algorithm works with a population of binary encoded vectors defining input selection patterns. Standard genetic operators of two-point crossover, random bit-flipping mutation, and tournament selection were used. The evaluation of the objective function of each individual consists of several rounds of building and testing a particular neural network model. The whole procedure is rather computationally demanding (taking hours to days on a desktop PC), thus a high-performance mainframe computer has been used for our experiments. Results based on two years' worth of data from the Ploucnice river in Northern Bohemia suggest that the main problems connected with this approach to modeling are overtraining, which can lead to poor generalization, and the relatively small number of extreme events, which makes it difficult for a model to predict the amplitude of the event. Thus, experiments with both absolute and relative runoff predictions were carried out. In general it can be concluded that the neural models show about 5 per cent improvement in terms of efficiency coefficient over linear models. Multilayer perceptrons with one hidden layer trained by back propagation algorithm and
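
    The genetic input-selection loop described above can be sketched as follows. The data are synthetic, the fitness model is a small scikit-learn MLP rather than the authors' forecaster, and the population size and rates are arbitrary; only the operators (binary masks, tournament selection, two-point crossover, bit-flip mutation) follow the abstract.

        # Hedged sketch of GA-driven selection of lagged inputs for a neural forecaster.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)
        X = rng.standard_normal((400, 48))                 # 48 candidate lagged inputs
        y = X[:, 3] - 0.5 * X[:, 10] + 0.1 * rng.standard_normal(400)

        def fitness(mask):
            if mask.sum() == 0:
                return -np.inf
            Xtr, Xte, ytr, yte = train_test_split(X[:, mask.astype(bool)], y, random_state=0)
            model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
            return model.fit(Xtr, ytr).score(Xte, yte)     # held-out R^2

        def tournament(pop, fits, k=3):
            idx = rng.choice(len(pop), size=k, replace=False)
            return pop[idx[np.argmax(fits[idx])]].copy()

        pop = rng.integers(0, 2, size=(20, 48))            # binary input-selection patterns
        for generation in range(10):
            fits = np.array([fitness(m) for m in pop])
            children = []
            while len(children) < len(pop):
                p1, p2 = tournament(pop, fits), tournament(pop, fits)
                a, b = sorted(rng.choice(48, size=2, replace=False))   # two-point crossover
                child = np.concatenate([p1[:a], p2[a:b], p1[b:]])
                child[rng.random(48) < 0.02] ^= 1                      # bit-flip mutation
                children.append(child)
            pop = np.array(children)

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected input lags:", np.flatnonzero(best))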

  5. On sparsely connected optimal neural networks

    SciTech Connect

    Beiu, V.; Draghici, S.

    1997-10-01

    This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-ins. In order to estimate the area (A) and the delay (T) of such networks, the following cost functions will be used: (i) the connectivity and the number-of-bits for representing the weights and thresholds--for good estimates of the area; and (ii) the fan-ins and the length of the wires--for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon's decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in 2 neural networks. They will generalize the result presented there to arbitrary fan-in, and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-ins will be suggested for F_{n,m} functions.

  6. Artificial Neural Networks and Instructional Technology.

    ERIC Educational Resources Information Center

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  7. Neural-Network Modeling Of Arc Welding

    NASA Technical Reports Server (NTRS)

    Anderson, Kristinn; Barnett, Robert J.; Springfield, James F.; Cook, George E.; Strauss, Alvin M.; Bjorgvinsson, Jon B.

    1994-01-01

    Artificial neural networks considered for use in monitoring and controlling gas/tungsten arc-welding processes. Relatively simple network, using 4 welding equipment parameters as inputs, estimates 2 critical weld-bead parameters within 5 percent. Advantage is computational efficiency.

  8. Higher-Order Neural Networks Recognize Patterns

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly; Ochoa, Ellen

    1996-01-01

    Networks of higher order have enhanced capabilities to distinguish between different two-dimensional patterns and to recognize those patterns. Also enhanced capabilities to "learn" patterns to be recognized: "trained" with far fewer examples and, therefore, in less time than necessary to train comparable first-order neural networks.

  9. Orthogonal Patterns In A Binary Neural Network

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1991-01-01

    Report presents some recent developments in theory of binary neural networks. Subject matter relevant to associative (content-addressable) memories and to recognition of patterns - both of considerable importance in advancement of robotics and artificial intelligence. When probed by any pattern, network converges to one of stored patterns.

  10. Comparing artificial and biological dynamical neural networks

    NASA Astrophysics Data System (ADS)

    McAulay, Alastair D.

    2006-05-01

    Modern computers can be made more friendly and otherwise improved by making them behave more like humans. Perhaps we can learn how to do this from biology in which human brains evolved over a long period of time. Therefore, we first explain a commonly used biological neural network (BNN) model, the Wilson-Cowan neural oscillator, that has cross-coupled excitatory (positive) and inhibitory (negative) neurons. The two types of neurons are used for frequency modulation communication between neurons which provides immunity to electromagnetic interference. We then evolve, for the first time, an artificial neural network (ANN) to perform the same task. Two dynamical feed-forward artificial neural networks use cross-coupling feedback (like that in a flip-flop) to form an ANN nonlinear dynamic neural oscillator with the same equations as the Wilson-Cowan neural oscillator. Finally we show, through simulation, that the equations perform the basic neural threshold function, switching between stable zero output and a stable oscillation, that is a stable limit cycle. Optical implementation with an injected laser diode and future research are discussed.
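
    A minimal numerical sketch of the Wilson-Cowan excitatory/inhibitory pair is given below. The coupling constants and sigmoid parameters are illustrative values, not taken from the paper; the point is only the threshold-like behaviour, where increasing the external drive P moves the pair from a quiet rest state toward sustained oscillation for suitable parameters.

        # Hedged sketch of a Wilson-Cowan excitatory (E) / inhibitory (I) oscillator,
        # integrated with simple Euler steps. Parameter values are illustrative.
        import numpy as np

        def sigmoid(x, a=1.3, theta=3.0):
            return 1.0 / (1.0 + np.exp(-a * (x - theta)))

        def simulate(P, c1=16.0, c2=12.0, c3=15.0, c4=3.0, tau=1.0, dt=0.01, steps=5000):
            E, I = 0.1, 0.1
            trace = []
            for _ in range(steps):
                dE = (-E + sigmoid(c1 * E - c2 * I + P)) / tau
                dI = (-I + sigmoid(c3 * E - c4 * I)) / tau
                E, I = E + dt * dE, I + dt * dI
                trace.append(E)
            return np.array(trace)

        # Sweep the external drive and inspect the steady-state range of E:
        # a narrow range indicates a fixed point, a wide range a limit cycle.
        for drive in (0.0, 1.5, 3.0):
            tail = simulate(drive)[-2000:]
            print(f"P={drive}: E in [{tail.min():.3f}, {tail.max():.3f}]")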

  11. Electronic device aspects of neural network memories

    NASA Technical Reports Server (NTRS)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  12. Drug-like and non drug-like pattern classification based on simple topology descriptor using hybrid neural network.

    PubMed

    Wan-Mamat, Wan Mohd Fahmi; Isa, Nor Ashidi Mat; Wahab, Habibah A; Wan-Mamat, Wan Mohd Fairuz

    2009-01-01

    An intelligent prediction system has been developed to discriminate drug-like and non drug-like molecular patterns. The system is constructed using an advanced version of the standard multilayer perceptron (MLP) neural network called the Hybrid Multilayer Perceptron (HMLP) neural network and trained using the Modified Recursive Prediction Error (MRPE) training algorithm. In this work, the well-understood and easily accessible Rule of Five + Veber filter properties are selected as the topological descriptor. The main idea behind the selection of this simple descriptor is to ensure that the system can be used widely and beneficially at all user levels within a drug discovery organization. PMID:19964424

  13. Recursive least-squares learning algorithms for neural networks

    SciTech Connect

    Lewis, P.S. ); Hwang, Jenq-Neng . Dept. of Electrical Engineering)

    1990-01-01

    This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second order information about the training error surface in order to achieve faster learning rates than are possible using first order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is on the order of O(N^2), where N is the number of network parameters. This is due to the estimation of the N × N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6331). 14 refs., 3 figs.
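
    The RLS idea in the abstract - linearize the network output about the current weights and apply a recursive least-squares update - can be sketched for a single-output network approximating a two-dimensional Gaussian bump. The network size, forgetting factor, and initialization below are assumptions made for illustration, not the paper's settings.

        # Hedged sketch of RLS training via linearization about the current weights.
        import numpy as np

        rng = np.random.default_rng(5)
        n_in, n_hid = 2, 8
        W1 = 0.5 * rng.standard_normal((n_hid, n_in + 1))     # hidden weights (+ bias column)
        w2 = 0.5 * rng.standard_normal(n_hid + 1)             # output weights (+ bias)

        def forward(x):
            h = np.tanh(W1 @ np.append(x, 1.0))
            return w2 @ np.append(h, 1.0), h

        def output_gradient(x, h):
            # d(output)/d(parameters), packed in the same order as theta below.
            dh = 1.0 - h**2                                    # tanh derivative
            dW1 = np.outer(w2[:-1] * dh, np.append(x, 1.0))    # chain rule through hidden layer
            return np.concatenate([dW1.ravel(), np.append(h, 1.0)])

        n_par = W1.size + w2.size
        P = 100.0 * np.eye(n_par)                              # running inverse-Hessian estimate
        lam = 0.999                                            # forgetting factor

        for step in range(3000):
            x = rng.uniform(-1.0, 1.0, size=2)
            target = np.exp(-4.0 * np.sum(x**2))               # 2-D Gaussian bump
            y, h = forward(x)
            g = output_gradient(x, h)
            Pg = P @ g
            k = Pg / (lam + g @ Pg)                            # RLS gain
            theta = np.concatenate([W1.ravel(), w2]) + k * (target - y)
            P = (P - np.outer(k, Pg)) / lam
            W1, w2 = theta[:W1.size].reshape(W1.shape), theta[W1.size:]

        probe = np.array([0.2, -0.3])
        print("prediction:", forward(probe)[0], "target:", np.exp(-4.0 * np.sum(probe**2)))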

  14. Improving neural network performance on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The usage of SIMD extensions, available on a number of modern CPUs, is a way to speed up neural network processing. In our experiments, we use ARM NEON as the SIMD architecture example. The first method deals with the half float data type for matrix computations. The second method describes a fixed-point data type for the same purpose. The third method considers a vectorized activation function implementation. For each method we set up a series of experiments for convolutional and fully connected networks designed for the image recognition task.
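
    The fixed-point and half-float ideas can be illustrated without any SIMD intrinsics by emulating them in NumPy: quantize weights and activations to a Q8.8 integer format (or cast to float16) and compare a matrix-vector product against the float32 reference. The format and layer size are illustrative assumptions; actual NEON vectorization is not shown.

        # Hedged sketch: emulate the fixed-point (Q8.8) and half-float data types
        # from the abstract for one fully connected layer's matrix-vector product.
        import numpy as np

        FRAC_BITS = 8                                      # Q8.8: 8 fractional bits

        def to_fixed(x):
            return np.round(x * (1 << FRAC_BITS)).astype(np.int32)

        def fixed_matvec(Wq, xq):
            acc = Wq.astype(np.int64) @ xq.astype(np.int64)   # wide integer accumulator
            return (acc >> FRAC_BITS).astype(np.int32)        # rescale product back to Q8.8

        rng = np.random.default_rng(6)
        W = rng.uniform(-1, 1, size=(64, 128)).astype(np.float32)
        x = rng.uniform(-1, 1, size=128).astype(np.float32)

        y_ref = W @ x
        y_fixed = fixed_matvec(to_fixed(W), to_fixed(x)) / (1 << FRAC_BITS)
        y_half = (W.astype(np.float16) @ x.astype(np.float16)).astype(np.float32)

        print("max abs error, fixed-point:", np.max(np.abs(y_ref - y_fixed)))
        print("max abs error, half float :", np.max(np.abs(y_ref - y_half)))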

  15. Segmentation of anatomical structures in x-ray computed tomography images using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Valentino, Daniel J.

    2002-05-01

    Hierarchies of artificial neural networks (ANN's) were trained to segment regularly-shaped and constantly-located anatomical structures in x-ray computed tomography (CT) images. These neural networks learned to associate a point in an image with the anatomical structure containing the point, using the image pixel intensity values located in a pattern around the point. A single-layer ANN and bilayer and multi-layer hierarchies of neural networks were developed and evaluated. The hierarchical artificial neural networks (HANN's) consisted of a high-level ANN that identified large-scale anatomical structures (e.g., the head or chest), whose result was passed to a group of neural networks that identified smaller structures (e.g., the brain, sinus, soft tissue, skull, bone, or lung) within the large-scale structures. The ANN's were trained to segment and classify images based on different numbers of training images, numbers of sampling points per image, pixel intensity sampling patterns, and hidden layer configurations. The experimental results indicate that a multi-layer hierarchy of ANN's trained with data collected from multiple image series accurately classified anatomical structures in unknown chest and head CT images.

  16. Neural networks for control of NOx emissions in fossil plants

    SciTech Connect

    Reifman, J.; Feldman, E.E.

    1997-04-01

    We discuss the use of two classes of artificial neural networks, multilayer feedforward networks and fully-recurrent networks, in the development of a closed-loop controller for discrete-time dynamical systems. We apply the neural system to the control of oxides of nitrogen (NOx) emissions for a simplified representation of a furnace of a coal-fired fossil plant. Plant data from one of Commonwealth Edison's fossil power plants were used to build a recurrent neural model of NOx formation which is then used in the training of the feedforward neural controller. Preliminary simulation results demonstrate the feasibility of the approach and additional tests with increasingly realistic models should be pursued.

  17. Artificial Neural Network Analysis in Preclinical Breast Cancer

    PubMed Central

    Motalleb, Gholamreza

    2014-01-01

    Objective: In this study, artificial neural network (ANN) analysis of virotherapy in preclinical breast cancer was investigated. Materials and Methods: In this research article, a multilayer feed-forward neural network trained with an error back-propagation algorithm was incorporated in order to develop a predictive model. The input parameters of the model were virus dose, week and tamoxifen citrate, while tumor weight was the output parameter. Two different training algorithms, namely quick propagation (QP) and Levenberg-Marquardt (LM), were used to train the ANN. Results: The results showed that the LM algorithm, with a 3-9-1 arrangement, is more efficient compared to QP. Using the LM algorithm, the coefficient of determination (R2) between the actual and predicted values was determined as 0.897118 for all data. Conclusion: It can be concluded that this ANN model may provide a good ability to predict the biometry information of the tumor in preclinical breast cancer virotherapy. The results showed that the LM algorithm employed by the Neural Power software gave better performance compared with QP, and that virus dose is a more important factor than tamoxifen and time (week). PMID:24381857

  18. Rainfall-runoff modelling using artificial neural networks: comparison of network types

    NASA Astrophysics Data System (ADS)

    Senthil Kumar, A. R.; Sudheer, K. P.; Jain, S. K.; Agarwal, P. K.

    2005-04-01

    Growing interest in the use of artificial neural networks (ANNs) in rainfall-runoff modelling has raised certain issues that are still not addressed properly. One such concern is the choice of network type, as theoretical studies on the multi-layer perceptron (MLP) with a sigmoid transfer function highlight certain limitations of its use. Alternatively, there is a strong belief in the general ANN user community that a radial basis function (RBF) network performs better than an MLP, as the former bases its nonlinearities on the training data set. This argument is not yet substantiated by applications in hydrology. This paper presents a comprehensive evaluation of the performance of MLP- and RBF-type neural network models developed for rainfall-runoff modelling of two Indian river basins. The performance of both the MLP and RBF network models was comprehensively evaluated in terms of their generalization properties, predicted hydrograph characteristics, and predictive uncertainty. The results of the study indicate that the choice of the network type certainly has an impact on the model prediction accuracy. The study suggests that both networks have merits and limitations. For instance, the MLP requires a long trial-and-error procedure to fix the optimal number of hidden nodes, whereas for an RBF the structure of the network can be fixed using an appropriate training algorithm. However, a judgment on which is superior is not clearly possible from this study.

  19. The backpropagation algorithm in J, a fast prototyping tool for researching neural networks.

    PubMed

    Brouwer, R K

    1999-08-01

    This paper illustrates the use of a powerful language, called J, that is ideal for simulating neural networks. The use of J is demonstrated by its application to a gradient descent method for training a multilayer perceptron. It is also shown how the back-propagation algorithm can be easily generalized to multilayer networks without any increase in complexity and that the algorithm can be completely expressed in an array notation which is directly executable through J. J is a general purpose language, which means that its user is given a flexibility not available in neural network simulators or in software packages such as MATLAB. Yet, because of its numerous operators, J allows a very succinct code to be used, leading to a tremendous decrease in development time. PMID:10586987

  20. Design, stability and robustness analyses of neural networks in control systems

    NASA Astrophysics Data System (ADS)

    Shen, Jie

    1998-12-01

    Artificial Neural Network (ANN), also known as connectionist learning and parallel distributed processing, is finding applications in diverse fields: many branches of engineering, health sciences, cognitive science, archaeology, finance, etc. This research emphasizes "design" methodology in ANN and explores the structures by which ANN can solve difficult problems by identifying the proper ANN architecture. Two classes of ANN--multi-layer neural networks and recurrent networks--are investigated in the context of control of systems and estimation of unknown parameters. The multi-layer neural networks converge to optimal solutions by satisfying mathematical formulations associated with the Hamilton approach and the dynamic programming approach. A benchmark aerospace application is used for illustration. A variant of the Hopfield network, called the Modified Hopfield Neural Network (MHNN), is proposed to show the design approach to the determination of weights in recurrent networks. It is shown how the equilibrium point of this network helps with inversion operations arising in optimal gain determination. Control of dynamic systems using recurrent neural networks is presented. The robustness of the recurrent networks to parameter variation is considered in the context of weights. Analyses are carried out in the frequency domain and the time domain.

  1. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  2. A neural network approach to cloud classification

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.

    1990-01-01

    It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy. Rather, its main effect is in the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A notable finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.

  3. Neural network technologies for image classification

    NASA Astrophysics Data System (ADS)

    Korikov, A. M.; Tungusova, A. V.

    2015-11-01

    We analyze the classes of problems with an objective necessity to use neural network technologies, i.e. representation and resolution problems in the neural network logical basis. Among these problems, image recognition takes an important place, in particular the classification of multi-dimensional data based on information about textural characteristics. These problems occur in aerospace and seismic monitoring, materials science, medicine and other fields. We reviewed different approaches to texture description: statistical, structural, and spectral. We developed a neural network technology for resolving a practical problem of cloud image classification for satellite snapshots from the spectroradiometer MODIS. The cloud texture is described by the statistical characteristics of the GLCM (Gray Level Co-Occurrence Matrix) method. From the range of neural network models that might be applied for image classification, we chose the probabilistic neural network model (PNN) and developed an implementation which performs the classification of the main types and subtypes of clouds. We also experimentally chose the optimal architecture and parameters for the PNN model used for image classification.
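
    The GLCM statistics mentioned above are easy to compute directly; the sketch below builds a co-occurrence matrix for one pixel offset and derives three common texture features (contrast, energy, homogeneity) from it. The image patch, gray-level count, and offset are toy choices, and the resulting features would then feed a classifier such as the PNN from the abstract.

        # Hedged sketch: GLCM texture features of the kind used as classifier inputs.
        import numpy as np

        def glcm(image, dx=1, dy=0, levels=8):
            # Co-occurrence counts of gray-level pairs at the given pixel offset.
            counts = np.zeros((levels, levels))
            h, w = image.shape
            for i in range(h - dy):
                for j in range(w - dx):
                    counts[image[i, j], image[i + dy, j + dx]] += 1
            return counts / counts.sum()

        def glcm_features(p):
            i, j = np.indices(p.shape)
            return {
                "contrast": np.sum(p * (i - j) ** 2),
                "energy": np.sum(p ** 2),
                "homogeneity": np.sum(p / (1.0 + np.abs(i - j))),
            }

        patch = np.random.default_rng(7).integers(0, 8, size=(32, 32))   # toy 8-level patch
        print(glcm_features(glcm(patch)))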

  4. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.
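
    A minimal version of the regression described above - an MLP mapping (latitude, pressure, time of year, CH4) to N2O - can be sketched with scikit-learn. The training data below are synthetic stand-ins, not HALOE or other satellite measurements, and standard backpropagation-style training is used in place of Quickprop.

        # Hedged sketch: MLP fit of a tracer-tracer correlation on synthetic data.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(8)
        n = 5000
        lat = rng.uniform(-90.0, 90.0, n)                  # degrees
        pressure = rng.uniform(1.0, 100.0, n)              # hPa
        doy = rng.uniform(0.0, 365.0, n)                   # day of year
        ch4 = rng.uniform(0.2, 1.8, n)                     # ppmv

        # Invented compact relation standing in for the real CH4-N2O correlation.
        n2o = 310.0 * (ch4 / 1.8) ** 1.5 + 2.0 * np.cos(np.deg2rad(lat))

        X = np.column_stack([lat, pressure, doy, ch4])
        net = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
        net.fit(X, n2o)
        print("training R^2:", net.score(X, n2o))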

  5. Using neural networks for process planning

    NASA Astrophysics Data System (ADS)

    Huang, Samuel H.; Zhang, HongChao

    1995-08-01

    Process planning has been recognized as an interface between computer-aided design and computer-aided manufacturing. Since the late 1960s, computer techniques have been used to automate process planning activities. AI-based techniques are designed for capturing, representing, organizing, and utilizing knowledge by computers, and are extremely useful for automated process planning. To date, most of the AI-based approaches used in automated process planning are variations of knowledge-based expert systems. Due to their knowledge acquisition bottleneck, expert systems are not sufficient for solving process planning problems. Fortunately, AI has developed other techniques that are useful for knowledge acquisition, e.g., neural networks. Neural networks have several advantages over expert systems that are desired in today's manufacturing practice. However, very few neural network applications in process planning have been reported. We present this paper in order to stimulate research on using neural networks for process planning. This paper also identifies the problems with neural networks and suggests some possible solutions, which will provide some guidelines for research and implementation.

  6. Neural network training with global optimization techniques.

    PubMed

    Yamazaki, Akio; Ludermir, Teresa B

    2003-04-01

    This paper presents an approach of using Simulated Annealing and Tabu Search for the simultaneous optimization of neural network architectures and weights. The problem considered is odor recognition in an artificial nose. Both methods have produced networks with high classification performance and low complexity. Generalization has been improved by using the backpropagation algorithm for fine tuning. The combination of simple and traditional search methods has been shown to be very suitable for generating compact and efficient networks. PMID:12923920
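
    The sketch below illustrates only the weight-optimization half of the idea: simulated annealing over the flattened weight vector of a tiny fixed-architecture network, which could then be fine-tuned with backpropagation. The architecture, cooling schedule, perturbation scale, and XOR-style toy data are all illustrative assumptions, not the paper's setup for the artificial nose.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny fixed architecture: 2 inputs -> 3 hidden (tanh) -> 1 output (sigmoid).
SHAPES = [(3, 2), (3,), (1, 3), (1,)]
SIZES = [np.prod(s) for s in SHAPES]

def unpack(theta):
    parts, k = [], 0
    for s, n in zip(SHAPES, SIZES):
        parts.append(theta[k:k + n].reshape(s))
        k += n
    return parts

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1.T + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2.T + b2)))

def loss(theta, X, y):
    return np.mean((forward(theta, X).ravel() - y) ** 2)

# Toy two-class problem (XOR), standing in for the odor-recognition data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

theta = rng.normal(0, 0.5, sum(SIZES))
best, best_loss = theta.copy(), loss(theta, X, y)
T = 1.0
for step in range(5000):
    cand = theta + rng.normal(0, 0.1, theta.size)   # random perturbation
    d = loss(cand, X, y) - loss(theta, X, y)
    if d < 0 or rng.random() < np.exp(-d / T):      # Metropolis acceptance
        theta = cand
        if loss(theta, X, y) < best_loss:
            best, best_loss = theta.copy(), loss(theta, X, y)
    T *= 0.999                                      # geometric cooling
print("final MSE:", best_loss)
```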

  7. Stability of Stochastic Neutral Cellular Neural Networks

    NASA Astrophysics Data System (ADS)

    Chen, Ling; Zhao, Hongyong

    In this paper, we study a class of stochastic neutral cellular neural networks. By constructing a suitable Lyapunov functional and employing the nonnegative semi-martingale convergence theorem, we give some sufficient conditions ensuring the almost sure exponential stability of the networks. The results obtained are helpful for designing stable networks when stochastic noise is taken into consideration. Finally, two examples are provided to show the correctness of our analysis.

  8. Flexible body control using neural networks

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction (CSI) suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems, and should be further evaluated.

  9. Application of Artificial Neural Networks in Differential Thermal Analysis of Inorganic Compounds

    NASA Astrophysics Data System (ADS)

    Ilgun, Ozlem; Beken, Murat; Alekberov, Vilayet; Ozcanli, Yesim

    2010-01-01

    The thermal decomposition of inorganic compounds has been analyzed by the simultaneous differential thermal analysis (DTA) method. Phase transitions and critical points have also been investigated. Additionally, a computer model based on backpropagation multilayer feed-forward artificial neural networks (ANNs) has been used for the simulation and prediction of critical points and phase transitions of inorganic compounds. Experimental data and the output values of the artificial neural networks have been compared; despite some unjustified data values, the ANN predictions showed considerably good results and concurred with the experimental data.

  10. Can neural networks compete with process calculations

    SciTech Connect

    Blaesi, J.; Jensen, B.

    1992-12-01

    Neural networks have been called a real alternative to rigorous theoretical models. A theoretical model for the calculation of refinery coker naphtha end point and coker furnace oil 90% point already was in place on the combination tower of a coking unit. Considerable data had been collected on the theoretical model during the commissioning phase and benefit analysis of the project. A neural net developed for the coker fractionator has equalled the accuracy of the theoretical models and shown the capability to handle normal operating conditions. One disadvantage of a neural network is the amount of data needed to create a good model; anywhere from 100 to thousands of cases are needed to create a neural network model. Overall, the correlation between the theoretical and neural net models for both the coker naphtha end point and the coker furnace oil 90% point was about 0.80, and the average deviation was about 4 degrees. This indicates that the neural net model was at least as capable as the theoretical model in calculating inferred properties. 3 figs.

  11. Development of Artificial Neural Network Model for Diesel Fuel Properties Prediction using Vibrational Spectroscopy.

    PubMed

    Bolanča, Tomislav; Marinović, Slavica; Ukić, Sime; Jukić, Ante; Rukavina, Vinko

    2012-06-01

    This paper describes the development of artificial neural network models which can be used to correlate and predict diesel fuel properties from several FTIR-ATR absorbances and Raman intensities as input variables. Multilayer feed-forward and radial basis function neural networks have been used for the rapid and simultaneous prediction of cetane number, cetane index, density, viscosity, distillation temperatures at 10% (T10), 50% (T50) and 90% (T90) recovery, and contents of total aromatics and polycyclic aromatic hydrocarbons of commercial diesel fuels. In this study, two-phase training procedures for multilayer feed-forward networks were applied. While the first-phase training algorithm was always backpropagation, two second-phase training algorithms were compared, namely conjugate gradient and quasi-Newton. In the case of the radial basis function network, the radial layer was trained using the K-means radial assignment algorithm and three different radial spread algorithms: explicit, isotropic and K-nearest neighbour. The number of hidden layer neurons and the number of experimental data points used for the training set have been optimized for both neural networks in order to ensure good predictive ability while reducing unnecessary experimental work. This work shows that the developed artificial neural network models can determine the main properties of diesel fuels simultaneously based on a single and fast IR or Raman measurement. PMID:24061237
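
    A minimal sketch of a two-phase training procedure of this kind, assuming NumPy and SciPy are available: a one-hidden-layer network is first trained with plain backpropagation (gradient descent) and then refined with a quasi-Newton (BFGS) optimizer. The synthetic descriptors and property values are placeholders for the FTIR-ATR/Raman inputs and fuel properties.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Placeholder data: rows are (reduced) spectral descriptors, target is e.g. cetane number.
X = rng.normal(size=(80, 6))
y = X @ rng.normal(size=6) + 0.3 * np.sin(X[:, 0])   # synthetic property values

n_in, n_hid = X.shape[1], 5

def unpack(theta):
    W1 = theta[:n_hid * n_in].reshape(n_hid, n_in)
    b1 = theta[n_hid * n_in:n_hid * n_in + n_hid]
    w2 = theta[n_hid * n_in + n_hid:n_hid * n_in + 2 * n_hid]
    b2 = theta[-1]
    return W1, b1, w2, b2

def mse_and_grad(theta):
    """Mean squared error and its backpropagated gradient for a 1-hidden-layer net."""
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ W1.T + b1)
    pred = h @ w2 + b2
    err = pred - y
    mse = np.mean(err ** 2)
    g_pred = 2.0 * err / len(y)
    g_w2 = h.T @ g_pred
    g_b2 = g_pred.sum()
    g_h = np.outer(g_pred, w2) * (1.0 - h ** 2)       # tanh derivative
    g_W1 = g_h.T @ X
    g_b1 = g_h.sum(axis=0)
    return mse, np.concatenate([g_W1.ravel(), g_b1, g_w2, [g_b2]])

theta = rng.normal(0, 0.3, n_hid * n_in + 2 * n_hid + 1)

# Phase 1: plain backpropagation (gradient descent).
for _ in range(3000):
    _, g = mse_and_grad(theta)
    theta -= 0.02 * g

# Phase 2: quasi-Newton (BFGS) refinement starting from the backprop solution.
res = minimize(mse_and_grad, theta, jac=True, method="BFGS")
print("MSE after phase 1 + phase 2:", res.fun)
```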

  12. Artificial neural networks for small dataset analysis.

    PubMed

    Pasini, Antonello

    2015-05-01

    Artificial neural networks (ANNs) are usually considered as tools which can help to analyze cause-effect relationships in complex systems within a big-data framework. On the other hand, health sciences undergo complexity more than any other scientific discipline, and in this field large datasets are seldom available. In this situation, I show how a particular neural network tool, which is able to handle small datasets of experimental or observational data, can help in identifying the main causal factors leading to changes in some variable which summarizes the behaviour of a complex system, for instance the onset of a disease. A detailed description of the neural network tool is given and its application to a specific case study is shown. Recommendations for a correct use of this tool are also supplied. PMID:26101654

  13. Kannada character recognition system using neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing, and conversion of any handwritten document into structured text form. There is not yet a sufficient body of work on Indian language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters are calculated and compared. The results show that the proposed system yields good recognition accuracy rates comparable to those of other handwritten character recognition systems.

  14. Critical and resonance phenomena in neural networks

    NASA Astrophysics Data System (ADS)

    Goltsev, A. V.; Lopes, M. A.; Lee, K.-E.; Mendes, J. F. F.

    2013-01-01

    Brain rhythms contribute to every aspect of brain function. Here, we study critical and resonance phenomena that precede the emergence of brain rhythms. Using an analytical approach and simulations of a cortical circuit model of neural networks with stochastic neurons in the presence of noise, we show that spontaneous appearance of network oscillations occurs as a dynamical (non-equilibrium) phase transition at a critical point determined by the noise level, network structure, the balance between excitatory and inhibitory neurons, and other parameters. We find that the relaxation time of neural activity to a steady state, response to periodic stimuli at the frequency of the oscillations, amplitude of damped oscillations, and stochastic fluctuations of neural activity are dramatically increased when approaching the critical point of the transition.

  15. Artificial neural networks for small dataset analysis

    PubMed Central

    2015-01-01

    Artificial neural networks (ANNs) are usually considered as tools which can help to analyze cause-effect relationships in complex systems within a big-data framework. On the other hand, health sciences undergo complexity more than any other scientific discipline, and in this field large datasets are seldom available. In this situation, I show how a particular neural network tool, which is able to handle small datasets of experimental or observational data, can help in identifying the main causal factors leading to changes in some variable which summarizes the behaviour of a complex system, for instance the onset of a disease. A detailed description of the neural network tool is given and its application to a specific case study is shown. Recommendations for a correct use of this tool are also supplied. PMID:26101654

  16. Competitive dynamics of lexical innovations in multi-layer networks

    NASA Astrophysics Data System (ADS)

    Javarone, Marco Alberto

    2014-04-01

    We study the introduction of lexical innovations into a community of language users. Lexical innovations, i.e. new terms added to people's vocabulary, play an important role in the process of language evolution. Nowadays, information is spread through a variety of networks, including, among others, online and offline social networks and the World Wide Web. The entire system, comprising networks of different nature, can be represented as a multi-layer network. In this context, the diffusion of lexical innovations occurs in a peculiar fashion. In particular, a lexical innovation can undergo three different processes: its original meaning is accepted; its meaning can be changed or misunderstood (e.g. when not properly explained), so that more than one meaning emerges in the population; or, in the case of a loan word, it can be translated into the population's language (i.e. defining a new lexical innovation or using a synonym) or into a dialect spoken by part of the population. Therefore, lexical innovations cannot be considered simply as information. We develop a model for analyzing this scenario using a multi-layer network comprising a social network and a media network. The latter represents the set of all information systems of a society, e.g. television, the World Wide Web and radio. Furthermore, we identify temporal directed edges between the nodes of these two networks. In particular, at each time-step, nodes of the media network can be connected to randomly chosen nodes of the social network and vice versa. In doing so, information spreads through the whole system and people can share a lexical innovation with their neighbors or, in the event they work as reporters, by using media nodes. Lastly, we use the concept of "linguistic sign" to model lexical innovations, showing its fundamental role in the study of these dynamics. Many numerical simulations have been performed to analyze the proposed model and its outcomes.

  17. Pattern classification and associative recall by neural networks

    SciTech Connect

    Chiueh, Tzi-Dar.

    1989-01-01

    The first part of this dissertation discusses a new classifier based on a multilayer feed-forward network architecture. The main idea is to map irregularly-distributed prototypes in a classification problem to codewords that are organized in some way. Then the pattern classification problem is transformed into a threshold decoding problem, which is easily solved using simple hard-limiter neurons. The author first proposes the new model and introduces two families of good internal representation codes, and then presents analyses and software simulations concerning the storage capacity of the new model. The results show that the new classifier is much better than the classifier based on the Hopfield model in terms of both storage capacity and the ability to classify correlated prototypes. A general model for neural network associative memories with a feedback structure is proposed. Many existing neural network associative memories can be expressed as special cases of this general model. Among these models, there is a class of associative memories, called correlation associative memories, that are capable of storing a large number of memory patterns. If the function used in the evolution equation is monotonically nondecreasing, then a correlation associative memory can be proved to be asymptotically stable in both the synchronous and asynchronous updating modes. Of these correlation associative memories, one stands out because of its VLSI implementation feasibility and large storage capacity. This memory uses the exponentiation function in its evolution equation; hence it is called the exponential correlation associative memory (ECAM).
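
    A minimal sketch of the ECAM recall rule described above: each stored bipolar pattern is weighted by an exponential of its correlation with the current state, and the weighted sum is thresholded until a stable state is reached. The base of the exponentiation, pattern dimensions, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ecam_recall(probe, memories, a=2.0, max_iter=20):
    """Exponential correlation associative memory (ECAM) recall.

    Each update weights every stored bipolar pattern by a**<x, m_i> and
    thresholds the weighted sum; the base `a` is an illustrative choice.
    """
    x = probe.copy()
    for _ in range(max_iter):
        weights = a ** (memories @ x)          # exponential of the correlations
        x_new = np.sign(weights @ memories)
        x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):           # converged to a stable state
            break
        x = x_new
    return x

# Store a few random bipolar patterns and recall from a noisy probe.
N, M = 64, 10
memories = rng.choice([-1, 1], size=(M, N)).astype(float)
probe = memories[0].copy()
flip = rng.choice(N, size=8, replace=False)    # corrupt 8 of 64 bits
probe[flip] *= -1
recovered = ecam_recall(probe, memories)
print("recovered pattern 0:", np.array_equal(recovered, memories[0]))
```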

  18. Classification of Images Acquired with Colposcopy Using Artificial Neural Networks

    PubMed Central

    Simões, Priscyla W; Izumi, Narjara B; Casagrande, Ramon S; Venson, Ramon; Veronezi, Carlos D; Moretti, Gustavo P; da Rocha, Edroaldo L; Cechinel, Cristian; Ceretta, Luciane B; Comunello, Eros; Martins, Paulo J; Casagrande, Rogério A; Snoeyer, Maria L; Manenti, Sandra A

    2014-01-01

    OBJECTIVE To explore the advantages of using artificial neural networks (ANNs) to recognize patterns in colposcopy images and classify them. PURPOSE Transversal, descriptive, and analytical study of a quantitative approach with an emphasis on diagnosis. The training, test, and validation sets were composed of images collected from patients who underwent colposcopy. These images were provided by a gynecology clinic located in the city of Criciúma (Brazil). The image database (n = 170) was divided; 48 images were used for the training process, 58 images were used for the tests, and 64 images were used for the validation. A hybrid neural network based on Kohonen self-organizing maps and multilayer perceptron (MLP) networks was used. RESULTS After 126 cycles, the validation was performed. The best results reached an accuracy of 72.15%, a sensitivity of 69.78%, and a specificity of 68%. CONCLUSION Although the preliminary results still exhibit an average efficiency, the present approach is an innovative and promising technique that should be deeply explored in the context of the present study. PMID:25374454

  19. Determination of the elastic constants of a composite plate using wavelet transforms and neural networks

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Cheng, Jianchun; Berthelot, Yves H.

    2002-03-01

    An inverse method based on a combination of the wavelet transform and artificial neural networks is presented. The method is used to recover the elastic constants of a fiber-reinforced composite plate from experimental measurements of ultrasonic Lamb waves generated and detected with lasers. In this method, the elastic constants are not recovered from the dispersion curves but rather directly from the measured waveforms. Transient waveforms obtained by numerical simulations for different elastic constants are used as input to train the neural network. The wavelet transform is used to extract the eigenvectors from the Lamb wave signals to simplify the structure of the neural network. The eigenvectors are then introduced into a multilayer internally recurrent neural network with a back-propagation algorithm. Finally, experimental waveforms recorded on a titanium-graphite composite plate are used as input to recover the elastic constants of the material.

  20. Learning and coordinating in a multilayer network.

    PubMed

    Lugo, Haydée; San Miguel, Maxi

    2015-01-01

    We introduce a two layer network model for social coordination incorporating two relevant ingredients: a) different networks of interaction to learn and to obtain a pay-off, and b) decision making processes based both on social and strategic motivations. Two populations of agents are distributed in two layers with intralayer learning processes and playing interlayer a coordination game. We find that the skepticism about the wisdom of crowd and the local connectivity are the driving forces to accomplish full coordination of the two populations, while polarized coordinated layers are only possible for all-to-all interactions. Local interactions also allow for full coordination in the socially efficient Pareto-dominant strategy in spite of being the riskier one. PMID:25585934

  1. Learning and coordinating in a multilayer network

    PubMed Central

    Lugo, Haydée; Miguel, Maxi San

    2015-01-01

    We introduce a two layer network model for social coordination incorporating two relevant ingredients: a) different networks of interaction to learn and to obtain a pay-off, and b) decision making processes based both on social and strategic motivations. Two populations of agents are distributed in two layers with intralayer learning processes and playing interlayer a coordination game. We find that the skepticism about the wisdom of crowd and the local connectivity are the driving forces to accomplish full coordination of the two populations, while polarized coordinated layers are only possible for all-to-all interactions. Local interactions also allow for full coordination in the socially efficient Pareto-dominant strategy in spite of being the riskier one. PMID:25585934

  2. Signal dispersion within a hippocampal neural network

    NASA Technical Reports Server (NTRS)

    Horowitz, J. M.; Mates, J. W. B.

    1975-01-01

    A model network is described, representing two neural populations coupled so that one population is inhibited by activity it excites in the other. Parameters and operations within the model represent EPSPs, IPSPs, neural thresholds, conduction delays, background activity and spatial and temporal dispersion of signals passing from one population to the other. Simulations of single-shock and pulse-train driving of the network are presented for various parameter values. Neuronal events from 100 to 300 msec following stimulation are given special consideration in model calculations.

  3. The identification of pitting and crevice corrosion spectra in electrochemical noise using an artificial neural network

    SciTech Connect

    Barton, T.F.; Tuck, D.L.; Wells, D.B.

    1996-12-31

    An artificial neural network has been developed to identify the onset and classify the type of localized corrosion from electrochemical noise spectra. The multilayer feedforward (MLF) network was trained by classical back-propagation to identify corrosion from the characteristics of the initial current ramp. Using 50 training files and 39 test files taken from measurements on Type 304 stainless steel in a dilute chloride electrolyte, the network accurately detected and classified 96% of the data and reported no misclassifications. Experiments with high levels of adventitious noise superimposed on the original data have been carried out to examine the noise tolerance of the network.

  4. Synchronous machine steady-state stability analysis using an artificial neural network

    SciTech Connect

    Chen, C.R.; Hsu, Y.Y. . Dept. of Electrical Engineering)

    1991-03-01

    A new type of artificial neural network is proposed for the steady-state stability analysis of a synchronous generator. In the developed artificial neural network, those system variables which play an important role in steady-state stability, such as generator outputs and power system stabilizer parameters, are employed as the inputs. The output of the neural net provides information on steady-state stability. Once the connection weights of the neural network have been learned using a set of training data derived off-line, the neural net can be applied to analyze the steady-state stability of the system in real time. To demonstrate the effectiveness of the proposed neural net, steady-state stability analysis is performed on a synchronous generator connected to a large power system. It is found that the proposed neural net requires much less training time than the multilayer feedforward network with the backpropagation-momentum learning algorithm. It is also concluded from the test results that correct stability assessment can be achieved by the neural network.

  5. Autonomous robot behavior based on neural networks

    NASA Astrophysics Data System (ADS)

    Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo

    1997-04-01

    The purpose of an autonomous robot is to solve various tasks while adapting its behavior to a variable environment; it is expected to navigate much like a human would, including handling uncertain and unexpected obstacles. To achieve this the robot has to be able to find solutions to unknown situations, to learn from experience (that is, to acquire action procedures together with corresponding knowledge of the workspace structure), and to recognize its working environment. The planning of intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding solutions, and a neural network approach for memorizing and recognizing the workspace structure (the structural assignment problem). Some of the well known neural networks based on unsupervised learning are considered with regard to the structural assignment problem. An adaptive fuzzy shadowed neural network is developed; it has an additional shadowed hidden layer, a specific learning rule, and an initialization phase. The developed neural network combines advantages of networks based on the Adaptive Resonance Theory and, using the shadowed hidden layer, provides the ability to recognize obstacles that are slightly translated or rotated in any direction.

  6. On-line learning algorithms for locally recurrent neural networks.

    PubMed

    Campolucci, P; Uncini, A; Piazza, F; Rao, B D

    1999-01-01

    This paper focuses on on-line learning procedures for locally recurrent neural networks, with emphasis on the multilayer perceptron (MLP) with infinite impulse response (IIR) synapses and its variations, which include generalized output and activation feedback multilayer networks (MLNs). We propose a new gradient-based procedure called recursive backpropagation (RBP) whose on-line version, causal recursive backpropagation (CRBP), presents some advantages with respect to the other on-line training methods. The new CRBP algorithm includes as particular cases backpropagation (BP), temporal backpropagation (TBP), backpropagation for sequences (BPS), and the Back-Tsoi algorithm, among others, thereby providing a unifying view on gradient calculation techniques for recurrent networks with local feedback. The only learning method that has been proposed for locally recurrent networks with no architectural restriction is the one by Back and Tsoi. The proposed algorithm has better stability and a higher speed of convergence with respect to the Back-Tsoi algorithm, which is supported by the theoretical development and confirmed by simulations. The computational complexity of CRBP is comparable with that of the Back-Tsoi algorithm, e.g., less than a factor of 1.5 for usual architectures and parameter settings. The superior performance of the new algorithm, however, easily justifies this small increase in computational burden. In addition, the general paradigms of truncated BPTT and RTRL are applied to networks with local feedback and compared with the new CRBP method. The simulations show that CRBP exhibits similar performance, and the detailed analysis of complexity reveals that CRBP is much simpler and easier to implement; e.g., CRBP is local in space and in time while RTRL is not local in space. PMID:18252525

  7. Experimental fault characterization of a neural network

    NASA Technical Reports Server (NTRS)

    Tan, Chang-Huong

    1990-01-01

    The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.

  8. A neural network with modular hierarchical learning

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)

    1994-01-01

    This invention provides a new hierarchical approach for supervised neural learning of time dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that the learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules each having a pre-established performance capability, wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.

  9. Inflow forecasting using Artificial Neural Networks for reservoir operation

    NASA Astrophysics Data System (ADS)

    Chiamsathit, Chuthamat; Adeloye, Adebayo J.; Bankaru-Swamy, Soundharajan

    2016-05-01

    In this study, multi-layer perceptron (MLP) artificial neural networks have been applied to forecast one-month-ahead inflow for the Ubonratana reservoir, Thailand. To assess how well the forecast inflows have performed in the operation of the reservoir, simulations were carried out guided by the system's rule curves. As a basis of comparison, four inflow situations were considered: (1) inflow known and assumed to be the historic (Type A); (2) inflow known and assumed to be the forecast (Type F); (3) inflow known and assumed to be the historic mean for the month (Type M); and (4) inflow unknown, with the release decision conditioned only on the starting reservoir storage (Type N). Reservoir performance was summarised in terms of reliability, resilience, vulnerability and sustainability. It was found that the Type F inflow situation produced the best performance while Type N was the worst performing. This clearly demonstrates the importance of good inflow information for effective reservoir operation.
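
    For reference, the sketch below computes reliability, resilience, vulnerability, and a composite sustainability index from a simulated supply-versus-demand series, using the commonly cited Hashimoto-style definitions; the paper's exact formulations may differ, and the toy series is a placeholder.

```python
import numpy as np

def rrv(supply, demand):
    """Time-based reliability, resilience, vulnerability, and a sustainability index.

    Follows commonly used Hashimoto-type definitions; the paper's exact
    formulations may differ in detail.
    """
    supply = np.asarray(supply, dtype=float)
    demand = np.asarray(demand, dtype=float)
    failure = supply < demand                       # periods where demand is not met
    reliability = 1.0 - failure.mean()
    # Resilience: probability that a failure period is followed by a success.
    recoveries = np.sum(failure[:-1] & ~failure[1:])
    resilience = recoveries / failure.sum() if failure.any() else 1.0
    # Vulnerability: average relative shortfall during failure periods.
    shortfall = (demand - supply) / demand
    vulnerability = shortfall[failure].mean() if failure.any() else 0.0
    sustainability = (reliability * resilience * (1.0 - vulnerability)) ** (1 / 3)
    return reliability, resilience, vulnerability, sustainability

# Toy monthly series (arbitrary units) standing in for simulated releases vs. demand.
supply = np.array([95, 100, 80, 100, 100, 60, 100, 100, 90, 100, 100, 100.0])
demand = np.full(12, 100.0)
print(rrv(supply, demand))
```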

  10. APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION

    SciTech Connect

    Musson, John C.; Seaton, Chad; Spata, Mike F.; Yan, Jianxun

    2012-11-01

    Stripline BPM sensors contain inherent non-linearities as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an activation layer, is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.

  11. Development of programmable artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J.

    1993-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.

  12. Neural networks for feedback feedforward nonlinear control systems.

    PubMed

    Parisini, T; Zoppoli, R

    1994-01-01

    This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method. PMID:18267810

  13. Prediction of friction factor of pure water flowing inside vertical smooth and microfin tubes by using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Çebi, A.; Akdoğan, E.; Celen, A.; Dalkilic, A. S.

    2016-06-01

    An artificial neural network (ANN) model of the friction factor in smooth and microfin tubes under heating, cooling and isothermal conditions was developed in this study. The data used in the ANN were taken from a vertically positioned heat exchanger experimental setup. A multi-layered feed-forward neural network with the backpropagation algorithm, radial basis function networks and a hybrid PSO-neural network algorithm were applied to the database. The inputs were the ratio of cross-sectional flow area to hydraulic diameter, an experimental condition number depending on isothermal, heating, or cooling conditions, and the mass flow rate, while the friction factor was the output of the constructed system. It was observed that such a neural-network-based system could effectively predict the friction factor values of the flows regardless of their tube types. A dependency analysis to determine the strongest parameter affecting the network and database was also performed, and tube geometry was found to be the strongest parameter of all.

  14. Auto-associative nanoelectronic neural network

    SciTech Connect

    Nogueira, C. P. S. M.; Guimarães, J. G.

    2014-05-15

    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state, previously stored during training. The recognition of the pattern involves decreasing the energy of the input state until it achieves a point of local minimum energy, which corresponds to one of the stored patterns.

  15. Constructive approximate interpolation by neural networks

    NASA Astrophysics Data System (ADS)

    Llanas, B.; Sainz, F. J.

    2006-04-01

    We present a type of single-hidden layer feedforward neural networks with sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.

  16. Digital Neural Networks for New Media

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Malki, Suleyman

    Neural Networks perform computationally intensive tasks offering smart solutions for many new media applications. A number of analog and mixed digital/analog implementations have been proposed to smooth the algorithmic gap. But gradually, the digital implementation has become feasible, and the dedicated neural processor is on the horizon. A notable example is the Cellular Neural Network (CNN). The analog direction has matured for low-power, smart vision sensors; the digital direction is gradually being shaped into an IP-core for algorithm acceleration, especially for use in FPGA-based high-performance systems. The chapter discusses the next step towards a flexible and scalable multi-core engine using Application-Specific Integrated Processors (ASIP). This topographic engine can serve many new media tasks, as illustrated by novel applications in Homeland Security. We conclude with a view on the CNN kaleidoscope for the year 2020.

  17. Immunization strategy for epidemic spreading on multilayer networks

    NASA Astrophysics Data System (ADS)

    Buono, C.; Braunstein, L. A.

    2015-01-01

    In many real-world complex systems, individuals have many kinds of interactions among them, suggesting that it is necessary to consider a layered-structure framework to model systems such as social interactions. This structure can be captured by multilayer networks and can have major effects on the spreading processes that occur over them, such as epidemics. In this letter we study a targeted immunization strategy for epidemic spreading over a multilayer network. We apply the strategy in one of the layers and study its effect in all layers of the network, disregarding degree-degree correlations among layers. We found that the targeted strategy is not as efficient as in isolated networks, due to the fact that in order to stop the spreading of the disease it is necessary to immunize more than 80% of the individuals. However, the size of the epidemic is drastically reduced in the layer where the immunization strategy is applied, compared to the case with no mitigation strategy. Thus, the immunization strategy has a major effect on the layer where it is applied, but does not efficiently protect the individuals of other layers.

  18. Optoelectronic Integrated Circuits For Neural Networks

    NASA Technical Reports Server (NTRS)

    Psaltis, D.; Katz, J.; Kim, Jae-Hoon; Lin, S. H.; Nouhi, A.

    1990-01-01

    Many threshold devices placed on single substrate. Integrated circuits containing optoelectronic threshold elements developed for use as planar arrays of artificial neurons in research on neural-network computers. Mounted with volume holograms recorded in photorefractive crystals serving as dense arrays of variable interconnections between neurons.

  19. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  20. Active Sampling in Evolving Neural Networks.

    ERIC Educational Resources Information Center

    Parisi, Domenico

    1997-01-01

    Comments on Raftopoulos article (PS 528 649) on facilitative effect of cognitive limitation in development and connectionist models. Argues that the use of neural networks within an "Artificial Life" perspective can more effectively contribute to the study of the role of cognitive limitations in development and their genetic basis than can using…

  1. Localizing Tortoise Nests by Neural Networks

    PubMed Central

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660

  2. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for detection and identification of faults in complex microprocessor controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problems successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem, the diagnosis of faults in newly manufactured engines, and the utility of neural networks for process control.

  3. Localizing Tortoise Nests by Neural Networks.

    PubMed

    Barbuti, Roberto; Chessa, Stefano; Micheli, Alessio; Pucci, Rita

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660

  4. Nonlinear Time Series Analysis via Neural Networks

    NASA Astrophysics Data System (ADS)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to perform effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history and to adapt our trading system behaviour based on them.

  5. Negative transfer problem in neural networks

    NASA Astrophysics Data System (ADS)

    Abunawass, Adel M.

    1992-07-01

    Harlow, 1949, observed that when human subjects were trained to perform simple discrimination tasks over a sequence of successive training sessions (trials), their performance improved as a function of the successive sessions. Harlow called this phenomenon `learning-to-learn.' The subjects acquired knowledge and improved their ability to learn in future training sessions. It seems that previous training sessions contribute positively to the current one. Abunawass & Maki, 1989, observed that when a neural network (using the back-propagation model) is trained over successive sessions, the performance and learning ability of the network degrade as a function of the training sessions. In some cases this leads to a complete paralysis of the network. Abunawass & Maki called this phenomenon the `negative transfer' problem, since previous training sessions contribute negatively to the current one. The effect of the negative transfer problem is in clear contradiction to that reported by Harlow on human subjects. Since the ability to model human cognition and learning is one of the most important goals (and claims) of neural networks, the negative transfer problem represents a clear limitation to this ability. This paper describes a new neural network sequential learning model known as Adaptive Memory Consolidation, in which the network uses its past learning experience to enhance its future learning ability. Adaptive Memory Consolidation has led to the elimination and reversal of the effect of the negative transfer problem, thus producing a `positive transfer' effect similar to Harlow's learning-to-learn phenomenon.

  6. Foetal ECG recovery using dynamic neural networks.

    PubMed

    Camps-Valls, Gustavo; Martínez-Sober, Marcelino; Soria-Olivas, Emilio; Magdalena-Benedito, Rafael; Calpe-Maravilla, Javier; Guerrero-Martínez, Juan

    2004-07-01

    Non-invasive electrocardiography has proven to be a very interesting method for obtaining information about the foetal state and thus to assure its well-being during pregnancy. One of the main applications in this field is foetal electrocardiogram (ECG) recovery by means of automatic methods. Evident problems found in the literature are the limited number of available registers, the lack of performance indicators, and the limited use of non-linear adaptive methods. In order to circumvent these problems, we first introduce the generation of synthetic registers and discuss the influence of different kinds of noise on the modelling. Second, a method which is based on numerical (correlation coefficient) and statistical (analysis of variance, ANOVA) measures allows us to select the best recovery model. Finally, finite impulse response (FIR) and gamma neural networks are included in the adaptive noise cancellation (ANC) scheme in order to provide highly non-linear, dynamic capabilities to the recovery model. Neural networks are benchmarked with classical adaptive methods such as the least mean squares (LMS) and the normalized LMS (NLMS) algorithms in simulated and real registers and some conclusions are drawn. For synthetic registers, the most determinant factor in the identification of the models is the foetal-maternal signal-to-noise ratio (SNR). In addition, as the electromyogram contribution becomes more relevant, neural networks clearly outperform the LMS-based algorithm. From the ANOVA test, we found statistical differences between LMS-based models and neural models when complex situations (high foetal-maternal and foetal-noise SNRs) were present. These conclusions were confirmed after doing robustness tests on synthetic registers, visual inspection of the recovered signals and calculation of the recognition rates of foetal R-peaks for real situations. Finally, the best compromise between model complexity and outcomes was provided by the FIR neural network. Both

  7. Comparing various artificial neural network types for water temperature prediction in rivers

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Maciej J.; Napiorkowski, Jaroslaw J.; Osuch, Marzena

    2015-10-01

    A number of methods have been proposed for the prediction of streamwater temperature based on various meteorological and hydrological variables. The present study shows a comparison of several types of data-driven neural networks (multi-layer perceptron, product-units, adaptive-network-based fuzzy inference systems and wavelet neural networks) and a nearest neighbour approach for short-term streamwater temperature predictions in two natural catchments (mountainous and lowland) located in the temperate climate zone, with snowy winters and hot summers. To allow wide applicability of such models, autoregressive inputs are not used and only easily available measurements are considered. Each neural network type is calibrated independently 100 times and the mean, median and standard deviation of the results are used for the comparison. Finally, an ensemble aggregation approach is tested. The results show that the simple and popular multi-layer perceptron neural networks are in most cases not outperformed by more complex and advanced models. The choice of neural network is dependent on the way the models are compared. This may be a warning for anyone who wishes to promote their own models: their superiority should be verified in different ways. The best results are obtained when the mean, maximum and minimum daily air temperatures from the previous days are used as inputs, together with the current runoff and the declination of the Sun from two recent days. The ensemble aggregation approach allows reducing the mean square error by up to several percent, depending on the case, and noticeably diminishes differences in modelling performance obtained by various neural network types.

  8. A comparison between wavelet based static and dynamic neural network approaches for runoff prediction

    NASA Astrophysics Data System (ADS)

    Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.; Khan, Mudasser Muneer

    2016-04-01

    In order to predict runoff accurately from a rainfall event, multilayer perceptron neural network models are commonly used in hydrology. Furthermore, wavelet-coupled multilayer perceptron neural network (MLPNN) models have also been found superior relative to simple neural network models which are not coupled with wavelets. However, MLPNN models are considered static and memoryless networks and lack the ability to examine the temporal dimension of data. Recurrent neural network models, on the other hand, have the ability to learn from the preceding conditions of the system and are hence considered dynamic models. This study for the first time explores the potential of wavelet-coupled time lagged recurrent neural network (TLRNN) models for runoff prediction using rainfall data. The Discrete Wavelet Transformation (DWT) is employed in this study to decompose the input rainfall data using six of the most commonly used wavelet functions. The performance of the simple and the wavelet-coupled static MLPNN models is compared with their counterpart dynamic TLRNN models. The study found that the dynamic wavelet-coupled TLRNN models can be considered as an alternative to the static wavelet MLPNN models. The study also investigated the effect of memory depth on the performance of static and dynamic neural network models. The memory depth refers to how much past information (lagged data) is required, as it is not known a priori. The db8 wavelet function is found to yield the best results with the static MLPNN models and with the TLRNN models having small memory depths. The performance of the wavelet-coupled TLRNN models with large memory depths is found to be insensitive to the selection of the wavelet function, as all wavelet functions give similar performance.
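
    A minimal sketch of the DWT pre-processing step, assuming the PyWavelets package is available: the rainfall series is decomposed with the db8 wavelet and one sub-series per decomposition level is reconstructed for use as a separate network input. The decomposition level and the way sub-series are formed are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

rng = np.random.default_rng(0)
rainfall = rng.gamma(shape=0.4, scale=8.0, size=512)   # placeholder daily rainfall series

# Multilevel discrete wavelet decomposition with the db8 wavelet, as in the
# best-performing configuration quoted in the abstract (level is illustrative).
coeffs = pywt.wavedec(rainfall, "db8", level=3)        # [cA3, cD3, cD2, cD1]

# A common wavelet-coupled setup reconstructs one sub-series per coefficient set
# and uses them (approximation plus details) as separate network inputs.
subseries = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(kept, "db8")[: len(rainfall)])

inputs = np.column_stack(subseries)                    # candidate features for an MLPNN/TLRNN
print(inputs.shape)
```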

  9. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.

  10. [Application of artificial neural networks in infectious diseases].

    PubMed

    Xu, Jun-fang; Zhou, Xiao-nong

    2011-02-28

    With the development of information technology, artificial neural networks have been applied to many research fields. Due to special features such as nonlinearity, self-adaptation, and parallel processing, artificial neural networks are applied in medicine and biology. This review summarizes the application of artificial neural networks to the relative factors, prediction and diagnosis of infectious diseases in recent years. PMID:21823326

  11. Algorithm For A Self-Growing Neural Network

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.

    1996-01-01

    CID3 algorithm simulates self-growing neural network. Constructs decision trees equivalent to hidden layers of neural network. Based on ID3 algorithm, which dynamically generates decision tree while minimizing entropy of information. CID3 algorithm generates feedforward neural network by use of either crisp or fuzzy measure of entropy.

  12. An artificial neural network based matching metric for iris identification

    NASA Astrophysics Data System (ADS)

    Broussard, Randy P.; Kennell, Lauren R.; Ives, Robert W.; Rakvic, Ryan N.

    2008-02-01

    The iris is currently believed to be the most accurate biometric for human identification. The majority of fielded iris identification systems are based on the highly accurate wavelet-based Daugman algorithm. Another promising recognition algorithm by Ives et al uses Directional Energy features to create the iris template. Both algorithms use Hamming distance to compare a new template to a stored database. Hamming distance is an extremely fast computation, but weights all regions of the iris equally. Work from multiple authors has shown that different regions of the iris contain varying levels of discriminatory information. This research evaluates four post-processing similarity metrics for accuracy impacts on the Directional Energy and wavelets based algorithms. Each metric builds on the Hamming distance method in an attempt to use the template information in a more salient manner. A similarity metric extracted from the output stage of a feed-forward multi-layer perceptron artificial neural network demonstrated the most promise. Accuracy tables and ROC curves of tests performed on the publicly available Chinese Academy of Sciences Institute of Automation database show that the neural network based distance achieves greater accuracy than Hamming distance at every operating point, while adding less than one percent computational overhead.

  13. Automatic localization of vertebrae based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Yang, Feng; Mu, Wei; Yang, Caiyun; Yang, Xin; Tian, Jie

    2015-03-01

    Localization of the vertebrae is of importance in many medical applications. For example, the vertebrae can serve as landmarks in image registration. They can also provide a reference coordinate system to facilitate the localization of other organs in the chest. In this paper, we propose a new vertebrae localization method using convolutional neural networks (CNN). The main advantage of the proposed method is the removal of hand-crafted features. We construct two training sets to train two CNNs that share the same architecture. One is used to distinguish the vertebrae from other tissues in the chest, and the other is aimed at detecting the centers of the vertebrae. The architecture contains two convolutional layers, both of which are followed by a max-pooling layer. The output feature vector from the max-pooling layer is then fed into a multilayer perceptron (MLP) classifier which has one hidden layer. Experiments were performed on ten chest CT images. We used a leave-one-out strategy to train and test the proposed method. Quantitative comparison between the predicted centers and the ground truth shows that our convolutional neural networks can achieve promising localization accuracy without hand-crafted features.
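
    A minimal PyTorch sketch of the architecture as described above (two convolutional layers, each followed by max-pooling, feeding an MLP with one hidden layer). Channel counts, kernel sizes, the 32x32 patch size, and the two-class output are assumptions not given in the abstract.

```python
import torch
import torch.nn as nn

class VertebraNet(nn.Module):
    """Two conv + max-pool stages followed by a one-hidden-layer MLP classifier.

    Channel counts, kernel sizes, and the 32x32 patch size are illustrative
    assumptions; the abstract only fixes the overall layer structure.
    """
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 64), nn.ReLU(),   # the single hidden layer of the MLP
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One 32x32 patch: 32 -> 28 (conv5) -> 14 (pool) -> 10 (conv5) -> 5 (pool).
net = VertebraNet()
patch = torch.randn(1, 1, 32, 32)
print(net(patch).shape)   # torch.Size([1, 2])
```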

  14. Yield Stress Modeling of Electrorheological Fluids Using Neural Network

    NASA Astrophysics Data System (ADS)

    Wei, Kexiang; Meng, Guang

    Electrorheological (ER) fluids are a kind of smart material whose rheological properties can be rapidly changed by applied electric fields. Many potential industrial applications of ER technology have been proposed. In order to formulate better ER fluids and design ER devices, it is important to predict the yield stress of ER fluids based on the ER fluid components and the operating conditions. This paper proposes a new method for predicting the yield stress of ER fluids with a neural network (NN). A multilayer perceptron with a single hidden layer of neurons is used to model the ER effect. The data for training and testing were produced from simulations of previously proposed mathematical models. The Levenberg-Marquardt back propagation algorithm was selected for fast learning. The results show that the neural network model approximates the previous theoretical models well, and the predicted NN outputs agree closely with the theoretical values for the same inputs, all of which demonstrates that it is possible to generate a robust NN model for rapidly predicting the yield stress of ER fluids under different input parameters.

  15. Statistical process control using optimized neural networks: a case study.

    PubMed

    Addeh, Jalil; Ebrahimzadeh, Ata; Azarbad, Milad; Ranaee, Vahid

    2014-09-01

    The most common statistical process control (SPC) tools employed for monitoring process changes are control charts. A control chart demonstrates that the process has altered by generating an out-of-control signal. This study investigates the design of an accurate system for control chart pattern (CCP) recognition in two respects. First, an efficient system is introduced that includes two main modules: a feature extraction module and a classifier module. In the feature extraction module, a proper set of shape and statistical features is proposed as efficient characteristics of the patterns. In the classifier module, several neural networks, such as the multilayer perceptron, the probabilistic neural network, and the radial basis function network, are investigated. Based on an experimental study, the best classifier is chosen in order to recognize the CCPs. Second, a hybrid heuristic recognition system is introduced based on the cuckoo optimization algorithm (COA) to improve the generalization performance of the classifier. The simulation results show that the proposed algorithm has high recognition accuracy. PMID:24210290

  16. Unfolding the neutron spectrum of a NE213 scintillator using artificial neural networks.

    PubMed

    Sharghi Ido, A; Bonyadi, M R; Etaati, G R; Shahriari, M

    2009-10-01

    Artificial neural network technology has been applied to unfold neutron spectra from the pulse height distribution measured with an NE213 liquid scintillator. Here, both single and multi-layer perceptron neural network models have been implemented to unfold the neutron spectrum from an Am-Be neutron source. The activation function and the connectivity of the neurons have been investigated and the results analyzed in terms of the network's performance. The simulation results show that the neural network that utilizes the Satlins transfer function has the best performance. In addition, omitting the bias connection of the neurons improves the performance of the network. Also, the SCINFUL code is used for generating the response functions in the training phase of the process. Finally, the results of the neural network simulation have been compared with those of the FORIST unfolding code for both (241)Am-Be and (252)Cf neutron sources. The results of the neural network are in good agreement with the FORIST code. PMID:19586776

  17. Neural network computation with DNA strand displacement cascades.

    PubMed

    Qian, Lulu; Winfree, Erik; Bruck, Jehoshua

    2011-07-21

    The impressive capabilities of the mammalian brain--ranging from perception, pattern recognition and memory formation to decision making and motor activity control--have inspired their re-creation in a wide range of artificial intelligence systems for applications such as face recognition, anomaly detection, medical diagnosis and robotic vehicle control. Yet before neuron-based brains evolved, complex biomolecular circuits provided individual cells with the 'intelligent' behaviour required for survival. However, the study of how molecules can 'think' has not produced an equal variety of computational models and applications of artificial chemical systems. Although biomolecular systems have been hypothesized to carry out neural-network-like computations in vivo and the synthesis of artificial chemical analogues has been proposed theoretically, experimental work has so far fallen short of fully implementing even a single neuron. Here, building on the richness of DNA computing and strand displacement circuitry, we show how molecular systems can exhibit autonomous brain-like behaviours. Using a simple DNA gate architecture that allows experimental scale-up of multilayer digital circuits, we systematically transform arbitrary linear threshold circuits (an artificial neural network model) into DNA strand displacement cascades that function as small neural networks. Our approach even allows us to implement a Hopfield associative memory with four fully connected artificial neurons that, after training in silico, remembers four single-stranded DNA patterns and recalls the most similar one when presented with an incomplete pattern. Our results suggest that DNA strand displacement cascades could be used to endow autonomous chemical systems with the capability of recognizing patterns of molecular events, making decisions and responding to the environment. PMID:21776082
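
    The four-neuron Hopfield associative memory mentioned in this record can be illustrated in conventional software before any chemical implementation. The sketch below is a generic in-silico analogue, not the authors' DNA circuit: it stores binary patterns with the standard Hebbian outer-product rule and recalls the stored pattern closest to an incomplete cue; the patterns and cue are invented:

        # Generic in-silico Hopfield associative memory (illustration only; the paper
        # implements the equivalent computation with DNA strand displacement cascades).
        import numpy as np

        def train_hopfield(patterns):
            """Hebbian outer-product rule; patterns are +/-1 vectors."""
            P = np.array(patterns, dtype=float)
            W = P.T @ P / P.shape[1]
            np.fill_diagonal(W, 0.0)          # no self-connections
            return W

        def recall(W, cue, steps=10):
            s = cue.astype(float).copy()
            for _ in range(steps):            # synchronous threshold updates
                s = np.where(W @ s >= 0, 1.0, -1.0)
            return s

        # Four fully connected "neurons" storing two 4-bit patterns.
        patterns = [np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]
        W = train_hopfield(patterns)

        cue = np.array([1, 0, 1, -1])          # incomplete pattern: 0 marks an unknown bit
        print(recall(W, cue))                  # -> [ 1. -1.  1. -1.]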

  18. Adaptive neural network nonlinear control for BTT missile based on the differential geometry method

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Wang, Yongji; Xu, Jiangsheng

    2007-11-01

    A new nonlinear control strategy incorporating the differential geometry method with adaptive neural networks is presented for the nonlinear coupled system of a Bank-to-Turn missile in the reentry phase. The basic control law is designed using the differential geometry feedback linearization method, and online-learning neural networks are used to compensate for the system errors due to aerodynamic parameter errors and external disturbances, in view of the arbitrary nonlinear mapping and rapid online learning ability of multi-layer neural networks. The online weight and threshold tuning rules are deduced from the tracking error performance functions by the Levenberg-Marquardt algorithm, which makes the learning process faster and more stable. The six-degree-of-freedom simulation results show that the attitude angles track the desired trajectory precisely, which means that the proposed strategy effectively enhances the stability, tracking performance, and robustness of the control system.

  19. Response of the parameters of a neural network to pseudoperiodic time series

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Weng, Tongfeng; Small, Michael

    2014-02-01

    We propose a representation plane constructed from the parameters of a multilayer neural network, with the aim of characterizing the dynamical character of a learned time series. We find that fluctuation of this plane reveals distinct features of the time series. Specifically, a periodic representation plane corresponds to a periodic time series, even when contaminated with strong observational noise or dynamical noise. We present a theoretical explanation for how the neural network training algorithm adjusts the parameters of this representation plane and thereby encodes the specific characteristics of the underlying system. This ability, which is intrinsic to the architecture of the neural network, can be employed to distinguish chaotic time series from periodic counterparts. It provides a new path toward identifying the dynamics of pseudoperiodic time series. Furthermore, we extract statistics from the representation plane to quantify its character. We then validate this idea with numerical data generated by known periodic and chaotic dynamics and with experimentally recorded human electrocardiogram data.

  20. Neural network controller development for a magnetically suspended flywheel energy storage system

    NASA Technical Reports Server (NTRS)

    Fittro, Roger L.; Pang, Da-Chen; Anand, Davinder K.

    1994-01-01

    A neural network controller has been developed to accommodate disturbances and nonlinearities and improve the robustness of a magnetically suspended flywheel energy storage system. The controller is trained using the backpropagation-through-time technique incorporated with a time-averaging scheme. The resulting nonlinear neural network controller improves system performance by adapting flywheel stiffness and damping based on operating speed. In addition, a hybrid multi-layered neural network controller is developed off-line which is capable of improving system performance even further. All of the research presented in this paper was implemented via a magnetic bearing computer simulation. However, careful attention was paid to developing a practical methodology which will make future application to the actual bearing system fairly straightforward.

  1. Control chart pattern recognition using K-MICA clustering and neural networks.

    PubMed

    Ebrahimzadeh, Ataollah; Addeh, Jalil; Rahmani, Zahra

    2012-01-01

    Automatic recognition of abnormal patterns in control charts is in increasing demand in manufacturing processes. This paper presents a novel hybrid intelligent method (HIM) for recognition of the common types of control chart pattern (CCP). The proposed method includes two main modules: a clustering module and a classifier module. In the clustering module, the input data are first clustered by a new technique, a suitable combination of the modified imperialist competitive algorithm (MICA) and the K-means algorithm. Then the Euclidean distance of each pattern from the determined clusters is computed. The classifier module determines the membership of the patterns using the computed distance. In this module, several neural networks, such as the multilayer perceptron, the probabilistic neural network, and the radial basis function neural network, are investigated. Based on an experimental study, the best classifier is chosen in order to recognize the CCPs. Simulation results show that a high recognition accuracy, about 99.65%, is achieved. PMID:22035774

  2. Estimation of Resonant Frequency of a Circular Microstrip Antenna Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Singh, Jagtar; Singh, A. P.; Kamal, T. S.

    2012-03-01

    In recent years, the use of artificial neural networks by wireless communication engineers has been gaining momentum. In this paper, a general procedure is suggested for estimating the resonant frequency of a circular microstrip patch antenna using artificial neural networks. The method of moments (MOM) based IE3D software was used to generate the data dictionary for the training and validation sets of the ANN. The proposed technique uses a multilayer feed-forward back-propagation artificial neural network with one hidden layer for estimating the resonant frequency of a circular microstrip antenna. A comparison of the performance of different training algorithms for estimating the resonant frequency is carried out, with particular attention paid to the speed of computation and the accuracy achieved. This type of performance comparison has not been attempted so far.
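
    As a hedged illustration of the workflow described above (simulator-generated training data and a feed-forward back-propagation network with one hidden layer mapping antenna geometry to resonant frequency), the sketch below substitutes a closed-form TM11 cavity-model formula for the IE3D data dictionary and scikit-learn's MLPRegressor for the authors' network; the parameter ranges and network size are assumptions:

        # Sketch: train a one-hidden-layer feed-forward network to estimate the
        # resonant frequency of a circular microstrip patch from (radius, eps_r).
        # A closed-form TM11 cavity-model formula stands in for the IE3D-generated
        # data dictionary used in the paper; ranges and sizes are assumptions.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        C = 3.0e8                                    # speed of light, m/s
        rng = np.random.default_rng(1)

        radius = rng.uniform(5e-3, 50e-3, 500)       # patch radius, m
        eps_r  = rng.uniform(2.2, 10.2, 500)         # substrate permittivity

        # Dominant TM11 mode: f = 1.8412 * c / (2 * pi * a * sqrt(eps_r))
        f_ghz = 1.8412 * C / (2 * np.pi * radius * np.sqrt(eps_r)) / 1e9

        X = np.column_stack([radius * 1e3, eps_r])   # features: radius in mm, eps_r
        X_tr, X_te, y_tr, y_te = train_test_split(X, f_ghz, random_state=0)

        net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
        net.fit(X_tr, y_tr)
        print(f"validation R^2: {net.score(X_te, y_te):.3f}")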

  3. Intrinsic adaptation in autonomous recurrent neural networks.

    PubMed

    Marković, Dimitrije; Gros, Claudius

    2012-02-01

    A massively recurrent neural network responds on one side to input stimuli and is autonomously active, on the other side, in the absence of sensory inputs. Stimuli and information processing depend crucially on the quality of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flow, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics. PMID:22091667

  4. Nonlinear V1 responses to natural scenes revealed by neural network analysis.

    PubMed

    Prenger, Ryan; Wu, Michael C-K; David, Stephen V; Gallant, Jack L

    2004-01-01

    A key goal in the study of visual processing is to obtain a comprehensive description of the relationship between visual stimuli and neuronal responses. One way to guide the search for models is to use a general nonparametric regression algorithm, such as a neural network. We have developed a multilayer feed-forward network algorithm that can be used to characterize nonlinear stimulus-response mapping functions of neurons in primary visual cortex (area V1) using natural image stimuli. The network is capable of extracting several known V1 response properties such as orientation and spatial frequency tuning, the spatial phase invariance of complex cells, and direction selectivity. We present details of a method for training networks and visualizing their properties. We also compare how well conventional explicit models and those developed using neural networks can predict novel responses to natural scenes. PMID:15288891

  5. Reinforcement and backpropagation training for an optical neural network using self-lensing effects.

    PubMed

    Cruz-Cabrera, A A; Yang, M; Cui, G; Behrman, E C; Steck, J E; Skinner, S R

    2000-01-01

    The optical bench training of an optical feedforward neural network, developed by the authors, is presented. The network uses an optical nonlinear material for neuron processing and a trainable applied optical pattern as the network weights. The nonlinear material, with the applied weight pattern, modulates the phase front of a forward propagating information beam by dynamically altering the index of refraction profile of the material. To verify that the network can be trained in real time, six logic gates were trained using a reinforcement training paradigm. More importantly, to demonstrate optical backpropagation, three gates were trained via optical error backpropagation. The output error is optically backpropagated, detected with a CCD camera, and the weight pattern is updated and stored on a computer. The results obtained lay the groundwork for the implementation of multilayer neural networks that are trained using optical error backpropagation and are able to solve more complex problems. PMID:18249868

  6. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) are classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.

  7. Proposal of a multi-layer network architecture for OBS/GMPLS network interworking

    NASA Astrophysics Data System (ADS)

    Guo, Hongxiang; Tsuritani, Takehiro; Yin, Yawei; Otani, Tomohiro; Wu, Jian

    2007-11-01

    In order to enable the existing optical circuit switching (OCS) network to support both wavelength and subwavelength granularities, this paper proposes an overlay-based multi-layer network architecture for interworking the generalized multi-protocol label switching (GMPLS) controlled OCS network with optical burst switching (OBS) networks. A dedicated GMPLS border controller with the necessary GMPLS extensions, including group label switched path (LSP) provisioning, node capability advertisement, and standard wavelength label as well as wavelength availability advertisement, is introduced in this multi-layer network to enable a simple but flexible interworking operation. The feasibility of this proposal is experimentally confirmed by demonstrating an OBS/GMPLS testbed, in which the extended node capability advertisement and group LSP functions successfully enabled the burst header packet (BHP) and data burst (DB) to be transmitted over a GMPLS-controlled transparent OCS network.

  8. Exploring the Combination of Dempster-Shafer Theory and Neural Network for Predicting Trust and Distrust.

    PubMed

    Wang, Xin; Wang, Ying; Sun, Hongbin

    2016-01-01

    In social media, trust and distrust among users are important factors in helping users make decisions, dissect information, and receive recommendations. However, the sparsity and imbalance of social relations bring great difficulties and challenges to predicting trust and distrust. Meanwhile, there are numerous inducing factors that determine trust and distrust relations, and the relationships among these inducing factors may be dependent, independent, or conflicting. Dempster-Shafer theory and neural networks are effective and efficient strategies to deal with these difficulties and challenges. In this paper, we study trust and distrust prediction based on the combination of Dempster-Shafer theory and a neural network. We first analyze the inducing factors of trust and distrust, namely homophily, status theory, and emotion tendency. Then, we quantify the inducing factors of trust and distrust, take these features as evidence, and construct evidence prototypes as the input nodes of a multilayer neural network. Finally, we propose a framework for predicting trust and distrust which uses a multilayer neural network to model the implementation of Dempster-Shafer theory in different hidden layers, aiming to overcome the drawback that Dempster-Shafer theory lacks an optimization method. Experimental results on a real-world dataset demonstrate the effectiveness of the proposed framework. PMID:27034651
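
    The evidential step at the core of the approach described above is Dempster's rule of combination; the sketch below applies the textbook rule to two invented mass functions over the frame {trust, distrust}. It illustrates the rule only and is not the authors' layered neural implementation of it:

        # Dempster's rule of combination for two mass functions over the frame
        # {trust, distrust}; focal elements are frozensets, THETA is the full frame.
        # Illustrative values only; not taken from the paper.
        from itertools import product

        THETA = frozenset({"trust", "distrust"})

        def combine(m1, m2):
            """Combine two basic probability assignments with Dempster's rule."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb            # mass assigned to the empty set
            if conflict >= 1.0:
                raise ValueError("totally conflicting evidence")
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Evidence from two hypothetical inducing factors (e.g. homophily, emotion).
        m_homophily = {frozenset({"trust"}): 0.6, THETA: 0.4}
        m_emotion   = {frozenset({"trust"}): 0.3, frozenset({"distrust"}): 0.5, THETA: 0.2}

        print(combine(m_homophily, m_emotion))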

  10. A Topological Perspective of Neural Network Structure

    NASA Astrophysics Data System (ADS)

    Sizemore, Ann; Giusti, Chad; Cieslak, Matthew; Grafton, Scott; Bassett, Danielle

    The wiring patterns of white matter tracts between brain regions inform the functional capabilities of the neural network. Indeed, densely connected and cyclically arranged cognitive systems may communicate, and thus perform, distinctly. However, previously employed graph theoretical statistics are local in nature and thus insensitive to such global structure. Here we present an investigation of the structural neural network in eight healthy individuals using persistent homology. An extension of homology to weighted networks, persistent homology records both circuits and cliques (all-to-all connected subgraphs) through a repetitive thresholding process, thus capturing structural motifs. We report structural features found across subjects and discuss the brain regions responsible for these patterns, finally considering the implications of such motifs in relation to cognitive function.

  11. Fuzzy logic and neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  12. Neural networks: Application to medical imaging

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  13. Controlling neural network responsiveness: tradeoffs and constraints

    PubMed Central

    Keren, Hanna; Marom, Shimon

    2014-01-01

    In recent years much effort has been invested in means to control neural population responses at the whole-brain level, within the context of developing advanced medical applications. The tradeoffs and constraints involved, however, remain elusive due to the obvious complications entailed by studying whole-brain dynamics. Here, we present effective control of response features (probability and latency) of cortical networks in vitro over many hours, and offer this approach as an experimental toy for studying the controllability of neural networks in a wider context. Exercising this approach, we show that enforcement of stable high activity rates by means of closed-loop control may enhance alteration of the underlying global input–output relations and the activity-dependent dispersion of neuronal pair-wise correlations across the network. PMID:24808860

  14. Extraction of Multilayered Social Networks from Activity Data

    PubMed Central

    Bródka, Piotr; Kazienko, Przemysław; Gaworecki, Jarosław

    2014-01-01

    The data gathered in all kinds of web-based systems, which enable users to interact with each other, provide an opportunity to extract social networks that consist of people and the relationships between them. The emerging structures are very complex due to the number and type of discovered connections. In web-based systems, the characteristic element of each interaction between users is that there is always an object that serves as a communication medium. This can be, for example, an e-mail sent from one user to another or a post at a forum authored by one user and commented on by others. Based on these objects and the activities that users perform towards them, different kinds of relationships can be identified and extracted. An additional challenge arises from the fact that hierarchies can exist between objects; for example, a forum consists of one or more groups of topics, and each of them contains topics that finally include posts. In this paper, we propose a new method for the creation of a multilayered social network based on data about user activities towards different types of objects between which a hierarchy exists. Owing to the flattening preprocessing procedure, new layers and new relationships in the multilayered social network can be identified and analysed. PMID:25105159

  15. Computationally Efficient Neural Network Intrusion Security Awareness

    SciTech Connect

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  16. Multiscale Modeling of Cortical Neural Networks

    NASA Astrophysics Data System (ADS)

    Torben-Nielsen, Benjamin; Stiefel, Klaus M.

    2009-09-01

    In this study, we describe efforts at modeling the electrophysiological dynamics of cortical networks in a multi-scale manner. Specifically, we describe the implementation of a network model composed of simple single-compartmental neuron models, in which a single complex multi-compartmental model of a pyramidal neuron is embedded. The network is capable of generating Δ (2 Hz, observed during deep sleep states) and γ (40 Hz, observed during wakefulness) oscillations, which are then imposed onto the multi-compartmental model, thus providing realistic, dynamic boundary conditions. We furthermore discuss the challenges and chances involved in multi-scale modeling of neural function.

  17. Tumor Diagnosis Using Backpropagation Neural Network Method

    NASA Astrophysics Data System (ADS)

    Ma, Lixing; Looney, Carl; Sukuta, Sydney; Bruch, Reinhard; Afanasyeva, Natalia

    1998-05-01

    For the characterization of skin cancer, an artificial neural network (ANN) method has been developed to diagnose normal tissue, benign tumor, and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input neuron data set is the Fourier transform infrared (FT-IR) spectrum obtained by a new Fiberoptic Evanescent Wave Fourier Transform Infrared (FEW-FTIR) spectroscopy method in the range of 1480 to 1850 cm-1. Ten input features are extracted from the absorbance values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses, and the output nodes separate different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified into three classes: normal tissue, benign tumor, and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbance spectra using chemical factor analysis. These abstract features, or factors, are also used in the classification.
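
    The three-layer classifier described above (ten spectral input features, one hidden layer of sigmoid nodes, three output classes) can be sketched roughly as follows; the synthetic features stand in for the FEW-FTIR absorbance data, and the hidden-layer size is an assumption:

        # Sketch of a three-layer classifier: 10 spectral features in, one hidden
        # layer of sigmoid (logistic) units, 3 output classes (normal / benign /
        # melanoma). Synthetic data stands in for the FEW-FTIR absorbance features.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(2)
        classes = ["normal", "benign", "melanoma"]

        # Invented class-conditional means over 10 absorbance-derived features.
        means = rng.normal(0.0, 1.0, size=(3, 10))
        X = np.vstack([rng.normal(means[c], 0.5, size=(60, 10)) for c in range(3)])
        y = np.repeat(np.arange(3), 60)

        net = MLPClassifier(hidden_layer_sizes=(12,), activation="logistic",
                            max_iter=3000, random_state=0)
        net.fit(X, y)

        sample = rng.normal(means[2], 0.5, size=(1, 10))   # a melanoma-like spectrum
        print(classes[int(net.predict(sample)[0])])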

  18. Neural networks in the process industries

    SciTech Connect

    Ben, L.R.; Heavner, L.

    1996-12-01

    Neural networks, or more precisely, artificial neural networks (ANNs), are rapidly gaining in popularity. They first began to appear on the process-control scene in the early 1990s, but have been a research focus for more than 30 years. Neural networks are really empirical models that approximate the way neurons in the human brain are thought to work. Neural-net technology is not trying to produce computerized clones, but to model nature in an effort to mimic some of the brain's capabilities. Modeling, for the purposes of this article, means developing a mathematical description of physical phenomena. The physics and chemistry of industrial processes are usually quite complex and sometimes poorly understood. Our process understanding, and our imperfect ability to describe complexity in mathematical terms, limit the fidelity of first-principle models. The computational requirements for executing these complex models are a further limitation. It is often not possible to execute first-principle model algorithms at the high rate required for online control. Nevertheless, rigorous first-principle models are commonplace design tools. Process control is another matter. Important model inputs are often not available as process measurements, making real-time application difficult. In fact, engineers often use models to infer unavailable measurements. 5 figs.

  19. Adaptive Neural Networks for Automatic Negotiation

    SciTech Connect

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    2007-12-26

    The use of fuzzy logic and fuzzy neural networks has been found effective for modelling the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static; that is, any new knowledge from theory or experiment leads to the construction of entirely new models. To overcome this difficulty, we apply in this work an adaptive neural topology to model the negotiation process. Finally, a simple simulation is carried out in order to test the new method.

  20. Pruning Neural Networks with Distribution Estimation Algorithms

    SciTech Connect

    Cantu-Paz, E

    2003-01-15

    This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feedforward neural network trained with standard backpropagation on public-domain and artificial data sets. The pruned networks had accuracy better than or equal to that of the original fully connected networks; only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but found important differences in the execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.

  1. Generalization of features in the assembly neural networks.

    PubMed

    Goltsev, Alexander; Wunsch, Donald C

    2004-02-01

    The purpose of the paper is an experimental study of the formation of class descriptions, taking place during learning, in assembly neural networks. The assembly neural network is artificially partitioned into several sub-networks according to the number of classes that the network has to recognize. The features extracted from input data are represented in neural column structures of the sub-networks. Hebbian neural assemblies are formed in the column structure of the sub-networks by weight adaptation. A specific class description is formed in each sub-network of the assembly neural network due to intersections between the neural assemblies. The process of formation of class descriptions in the sub-networks is interpreted as feature generalization. A set of special experiments is performed to study this process, on a task of character recognition using the MNIST database. PMID:15034946

  2. Neural networks type MLP in the process of identification chosen varieties of maize

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Nowakowski, K.; Tomczak, R.

    2011-06-01

    During the adaptation of the weight vector that occurs in the iterative presentation of the teaching vectors, an MLP (MultiLayer Perceptron) artificial neural network attempts to learn the structure of the data. Such a network can learn to recognise aggregates occurring in the input data set regardless of the assumed criteria of similarity and the quantity of the data explored. The MLP neural network can also be used to detect regularities occurring in the obtained graphic empirical data. Neuronal image analysis is thus a new field of digital signal processing. It can be used to identify chosen objects given in the form of a bitmap. If a new, unknown case appears at the network input which the network is unable to recognise, it means that it is different from all the classes known previously. An MLP artificial neural network taught in this way can therefore serve as a detector signalling the appearance of a widely understood novelty. Such a network can also look for similarities between the known data and noisy data. In this way, it is able to identify fragments of images presented in photographs of, e.g., maize grain. The purpose of the research was to use MLP neural networks in the process of identification of chosen varieties of maize with the use of an image analysis method. The neural classification of grain shapes was performed with the use of the Johan Gielis superformula.

  3. VLSI implementable neural networks for target tracking

    NASA Astrophysics Data System (ADS)

    Himes, Glenn S.; Inigo, Rafael M.; Narathong, Chiewcharn

    1991-08-01

    This paper describes part of an integrated system for target tracking. The image is acquired, edge detected, and segmented by a subsystem not discussed in this paper. Algorithms to determine the centroid of a windowed target using neural networks are developed. Further, once the target centroid is determined, it is continuously updated in order to track the trajectory, since the centroid location is not dependent on scaling or rotation about the optical axis. The image is then mapped to a log-spiral grid. A conformal transformation is used to map the log-spiral grid to a computation plane in which rotations and scalings are transformed to displacements along the vertical and horizontal axes, respectively. The images in this plane are used for recognition. The recognition algorithms are the subject of another paper. A second neural network, also described in this paper, is then used to determine object rotation and scaling. The algorithm used by this network is an original line correlator tracker which, as the name indicates, uses linear instead of 2D correlations. Simulation results using ICBM images are presented for both the centroid neural net and the rotation-scaling detection network.

  4. Functional expansion representations of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight into architecture selection, pruning strategies, and learning algorithms. A long-term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., those realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  5. Convolutional Neural Network Based dem Super Resolution

    NASA Astrophysics Data System (ADS)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples. There, a nonlocal algorithm was introduced to deal with it, and many experiments showed that the strategy is feasible. In that publication, the learning examples are defined as parts of the original DEM together with their related high-resolution measurements, because this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain; yet this may cause problems of incompatibility and a lack of robustness. To overcome this, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. According to this designed structure, some learning DEMs are taken to train it. Specifically, the designed network is optimized by minimizing the error between its output and the expected high resolution DEM. In practical applications, a testing DEM is input to the convolutional neural network and a super resolution DEM is obtained. Many experiments show that the CNN based method obtains better reconstructions than many classic interpolation methods.
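
    The three-layer design described above (feature detection, feature integration/compression, reconstruction) can be sketched roughly as below. Kernel sizes and channel widths are assumptions, and the coarse DEM is treated here as a single-channel grid upsampled to the target resolution before refinement, which may differ from the authors' exact pipeline:

        # Three-layer CNN for DEM super resolution, in the spirit of the description:
        # layer 1 detects features, layer 2 compresses/integrates them, layer 3
        # reconstructs the DEM. Channel widths and kernel sizes are assumptions.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DemSRNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.detect      = nn.Conv2d(1, 64, kernel_size=9, padding=4)
                self.compress    = nn.Conv2d(64, 32, kernel_size=1)
                self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)

            def forward(self, low_res, scale=2):
                # Upsample the coarse DEM first, then let the network refine it.
                x = F.interpolate(low_res, scale_factor=scale, mode="bilinear",
                                  align_corners=False)
                x = F.relu(self.detect(x))
                x = F.relu(self.compress(x))
                return self.reconstruct(x)

        net = DemSRNet()
        low_res_dem = torch.randn(1, 1, 64, 64)          # one coarse elevation grid
        pred = net(low_res_dem)                          # -> (1, 1, 128, 128)
        loss = F.mse_loss(pred, torch.randn_like(pred))  # error vs. the reference DEM
        loss.backward()                                  # training would minimize this
        print(pred.shape)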

  6. Character Recognition Using Genetically Trained Neural Networks

    SciTech Connect

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data make them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the amount of
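
    The setup described in this record (an 8 x 8 bitmap mapped one-to-one onto 64 input nodes, a hidden layer, five output nodes, and weights optimized by a genetic algorithm rather than backpropagation) is sketched below on synthetic bitmaps; the population size, mutation rate, hidden-layer width, and fitness definition are assumptions:

        # Sketch: a 64-16-5 feed-forward network whose weights are evolved by a simple
        # genetic algorithm (selection + mutation) to classify synthetic 8x8 bitmaps
        # into 5 character classes. All GA settings are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(3)
        N_IN, N_HID, N_OUT = 64, 16, 5
        N_WEIGHTS = N_IN * N_HID + N_HID * N_OUT

        # Synthetic "characters": five noisy prototype bitmaps, 40 samples each.
        prototypes = rng.integers(0, 2, size=(N_OUT, N_IN)).astype(float)
        X = np.vstack([p + rng.normal(0, 0.2, size=(40, N_IN)) for p in prototypes])
        y = np.repeat(np.arange(N_OUT), 40)

        def forward(genome, x):
            w1 = genome[: N_IN * N_HID].reshape(N_IN, N_HID)
            w2 = genome[N_IN * N_HID :].reshape(N_HID, N_OUT)
            h = np.tanh(x @ w1)
            return h @ w2

        def fitness(genome):
            return float(np.mean(forward(genome, X).argmax(axis=1) == y))

        pop = rng.normal(0, 0.5, size=(60, N_WEIGHTS))         # initial population
        for gen in range(50):
            scores = np.array([fitness(g) for g in pop])
            elite = pop[np.argsort(scores)[-15:]]              # keep the best quarter
            children = elite[rng.integers(0, 15, size=45)] + rng.normal(0, 0.1, (45, N_WEIGHTS))
            pop = np.vstack([elite, children])                 # next generation

        best = max(pop, key=fitness)
        print(f"training accuracy of best genome: {fitness(best):.2f}")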

  7. Neural networks as a control methodology

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1990-01-01

    While conventional computers must be programmed in a logical fashion by a person who thoroughly understands the task to be performed, the motivation behind neural networks is to develop machines which can train themselves to perform tasks, using available information about desired system behavior and learning from experience. There are three goals of this fellowship program: (1) to evaluate various neural net methods and generate computer software to implement those deemed most promising on a personal computer equipped with Matlab; (2) to evaluate methods currently in the professional literature for system control using neural nets to choose those most applicable to control of flexible structures; and (3) to apply the control strategies chosen in (2) to a computer simulation of a test article, the Control Structures Interaction Suitcase Demonstrator, which is a portable system consisting of a small flexible beam driven by a torque motor and mounted on springs tuned to the first flexible mode of the beam. Results of each are discussed.

  8. On lateral competition in dynamic neural networks

    SciTech Connect

    Bellyustin, N.S.

    1995-02-01

    Artificial neural networks connected homogeneously, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by the neighboring ones: one due to the negative connection coefficients between elements and one due to the decreasing response of a neuron to an excessively high input signal. The first case is characterized by stable dynamics, given by a Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimation in both cases. The result is a partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.

  9. Neural network for tsunami and runup forecast

    NASA Astrophysics Data System (ADS)

    Namekar, Shailesh; Yamazaki, Yoshiki; Cheung, Kwok Fai

    2009-04-01

    This paper examines the use of neural networks to model nonlinear tsunami processes for the forecasting of coastal waveforms and runup. The three-layer network utilizes a radial basis function in the hidden (middle) layer for nonlinear transformation of input waveforms near the tsunami source. Events based on the 2006 Kuril Islands tsunami demonstrate the implementation and capability of the network. Division of the Kamchatka-Kuril subduction zone into a number of subfaults facilitates development of a representative tsunami dataset using a nonlinear long-wave model. The computed waveforms near the tsunami source serve as the input, and the far-field waveforms and runup provide the target output for training of the network through a back-propagation algorithm. The trained network reproduces the resonance of tsunami waves and the topography-dominated runup patterns at Hawaii's coastlines from input water-level data off the Aleutian Islands.
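
    A three-layer network with a radial basis function hidden layer, as described above, can be sketched in a few lines: Gaussian RBF activations over the near-source input waveform followed by a linear output layer. The output weights here are fitted by least squares as a simple stand-in for the paper's back-propagation training, and the waveform data, centers, and widths are invented:

        # Sketch of a three-layer RBF network: input waveform samples -> Gaussian RBF
        # hidden layer -> linear output layer (fitted by least squares). Synthetic
        # signals stand in for the near-source and far-field tsunami waveforms.
        import numpy as np

        rng = np.random.default_rng(4)

        def rbf_features(X, centers, width):
            """Gaussian radial basis activations of each input vector w.r.t. the centers."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * width ** 2))

        # Synthetic data: 100 "events", each a 50-sample near-source waveform mapped
        # nonlinearly to a 20-sample far-field waveform.
        X = rng.normal(size=(100, 50))
        Y = np.tanh(X @ rng.normal(size=(50, 20))) + 0.05 * rng.normal(size=(100, 20))

        centers = X[rng.choice(100, size=15, replace=False)]   # 15 hidden RBF units
        H = rbf_features(X, centers, width=3.0)

        # Linear output weights by least squares (a simple alternative to the paper's
        # back-propagation training).
        W, *_ = np.linalg.lstsq(H, Y, rcond=None)

        pred = rbf_features(X[:1], centers, width=3.0) @ W      # forecast for one event
        print(pred.shape)                                        # -> (1, 20)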

  10. Interdependent Multi-Layer Networks: Modeling and Survivability Analysis with Applications to Space-Based Networks

    PubMed Central

    Castet, Jean-Francois; Saleh, Joseph H.

    2013-01-01

    This article develops a novel approach and algorithmic tools for the modeling and survivability analysis of networks with heterogeneous nodes, and examines their application to space-based networks. Space-based networks (SBNs) allow the sharing of spacecraft on-orbit resources, such as data storage, processing, and downlink. Each spacecraft in the network can have different subsystem composition and functionality, thus resulting in node heterogeneity. Most traditional survivability analyses of networks assume node homogeneity and, as a result, are not suited for the analysis of SBNs. This work proposes that heterogeneous networks can be modeled as interdependent multi-layer networks, which enables their survivability analysis. The multi-layer aspect captures the breakdown of the network according to common functionalities across the different nodes, and it allows the emergence of homogeneous sub-networks, while the interdependency aspect constrains the network to capture the physical characteristics of each node. Definitions of primitives of failure propagation are devised. Formal characterization of interdependent multi-layer networks, as well as algorithmic tools for the analysis of failure propagation across the network, are developed and illustrated with space applications. The SBN applications considered consist of several networked spacecraft that can tap into each other's Command and Data Handling subsystem, in case of failure of its own, including the Telemetry, Tracking and Command, the Control Processor, and the Data Handling sub-subsystems. Various design insights are derived and discussed, and the capability to perform trade-space analysis with the proposed approach for various network characteristics is indicated. The select results shown here quantify the incremental survivability gains (with respect to a particular class of threats) of the SBN over the traditional monolith spacecraft. Failure of the connectivity between nodes is also examined, and the

  11. A classifier neural network for rotordynamic systems

    NASA Astrophysics Data System (ADS)

    Ganesan, R.; Jionghua, Jin; Sankar, T. S.

    1995-07-01

    A feedforward backpropagation neural network is formed to identify the stability characteristics of a high speed rotordynamic system. The principal focus resides in accounting for instability due to bearing clearance effects. The abnormal operating condition of 'normal-loose' Coulomb rub, which arises in units supported by hydrodynamic bearings or rolling element bearings, is analysed in detail. The multiple-parameter stability problem is formulated and converted to a set of three-parameter algebraic inequality equations. These three parameters map the wider range of physical parameters of commonly used rotordynamic systems into a narrow closed region, which is used in the supervised learning of the neural network. A binary-type state of the system is expressed through these inequalities, which are deduced from the analytical simulation of the rotor system. Both hidden-layer and functional-link networks are formed, and the superiority of the functional-link network is established. Considering the real-time interpretation and control of the rotordynamic system, network reliability and learning time are used as the evaluation criteria to assess the superiority of the functional-link network. This functional-link network is further trained using the parameter values of selected rotor systems, and the classifier network is formed. The success rate of stability status identification is obtained to assess the potential of this classifier network. It is shown that the classifier network can also be used, for control purposes, as an 'advisory' system that suggests the optimum way of adjusting parameters.

  12. Analysis of Stochastic Response of Neural Networks with Stochastic Input

    Energy Science and Technology Software Center (ESTSC)

    1996-10-10

    Software permits the user to extend the capability of his/her neural network to include probabilistic characteristics of the input parameters. The user inputs the topology and weights associated with the neural network along with the distributional characteristics of the input parameters. The network response is provided via a cumulative density function of the network response variable.
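
    The capability summarized in this record (propagating a distribution over input parameters through a fixed, already-trained network and reporting a cumulative density function of the response) can be approximated with plain Monte Carlo sampling, as in the hedged sketch below; the network topology, weights, and input distribution are invented:

        # Monte Carlo sketch: push a distribution of input parameters through a fixed
        # (already trained) feed-forward network and build the empirical CDF of the
        # response. Weights and input statistics here are invented for illustration.
        import numpy as np

        rng = np.random.default_rng(5)

        # A fixed 3-4-1 network with given topology and weights (stand-ins).
        W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
        W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)

        def network(x):
            return np.tanh(x @ W1 + b1) @ W2 + b2

        # Distributional characteristics of the three input parameters (assumed normal).
        mean = np.array([1.0, 0.5, -0.2])
        std  = np.array([0.1, 0.3, 0.05])

        samples = rng.normal(mean, std, size=(10_000, 3))
        response = network(samples).ravel()

        # Empirical cumulative density function of the network response.
        sorted_r = np.sort(response)
        cdf = np.arange(1, sorted_r.size + 1) / sorted_r.size   # ECDF values
        print(f"median response: {sorted_r[sorted_r.size // 2]:.3f}")
        print(f"P(response <= 0) ~= {np.mean(response <= 0.0):.3f}")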

  13. Neural dynamics in superconducting networks

    NASA Astrophysics Data System (ADS)

    Segall, Kenneth; Schult, Dan; Crotty, Patrick; Miller, Max

    2012-02-01

    We discuss the use of Josephson junction networks as analog models for simulating neuron behaviors. A single unit called a ``Josephson Junction neuron,'' composed of two Josephson junctions [1], displays behavior that shows characteristics of single neurons such as action potentials, thresholds, and refractory periods. Synapses can be modeled as passive filters and can be used to connect neurons together. The sign of the bias current to the Josephson neuron can be used to determine whether the neuron is excitatory or inhibitory. Due to the intrinsic speed of Josephson junctions and their scaling properties as analog models, a large network of Josephson neurons measured over typical lab times contains dynamics which would essentially be impossible to calculate on a computer. We discuss the operating principle of the Josephson neuron, coupling Josephson neurons together to make large networks, and the Kuramoto-like synchronization of a system of disordered junctions. [1] ``Josephson junction simulation of neurons,'' P. Crotty, D. Schult and K. Segall, Physical Review E 82, 011914 (2010).

  14. Image texture segmentation using a neural network

    NASA Astrophysics Data System (ADS)

    Sayeh, Mohammed R.; Athinarayanan, Ragu; Dhali, Pushpuak

    1992-09-01

    In this paper we use a neural network called the Lyapunov associative memory (LYAM) system to segment image texture into different categories or clusters. The LYAM system is constructed by a set of ordinary differential equations which are simulated on a digital computer. The clustering can be achieved by using a single tuning parameter in the simplest model. Pattern classes are represented by the stable equilibrium states of the system. Design of the system is based on synthesizing two local energy functions, namely, the learning and recall energy functions. Before the implementation of the segmentation process, a Gauss-Markov random field (GMRF) model is applied to the raw image. This application suitably reduces the image data and prepares the texture information for the neural network process. We give a simple image example illustrating the capability of the technique. The GMRF-generated features are also used for a clustering, based on the Euclidean distance.

  15. Training neural networks with heterogeneous data.

    PubMed

    Drakopoulos, John A; Abdulkader, Ahmad

    2005-01-01

    Data pruning and ordered training are two methods and the results of a small theory that attempts to formalize neural network training with heterogeneous data. Data pruning is a simple process that attempts to remove noisy data. Ordered training is a more complex method that partitions the data into a number of categories and assigns training times to those assuming that data size and training time have a polynomial relation. Both methods derive from a set of premises that form the 'axiomatic' basis of our theory. Both methods have been applied to a time-delay neural network-which is one of the main learners in Microsoft's Tablet PC handwriting recognition system. Their effect is presented in this paper along with a rough estimate of their effect on the overall multi-learner system. The handwriting data and the chosen language are Italian. PMID:16095874

  16. A Novel Higher Order Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuxiang

    2010-05-01

    In this paper a new Higher Order Neural Network (HONN) model is introduced and applied to several data mining tasks. Data mining extracts hidden patterns and valuable information from large databases. A hyperbolic tangent function is used as the neuron activation function for the new HONN model. Experiments are conducted to demonstrate the advantages and disadvantages of the new HONN model when compared with several conventional Artificial Neural Network (ANN) models: a feedforward ANN with the sigmoid activation function; a feedforward ANN with the hyperbolic tangent activation function; and a Radial Basis Function (RBF) ANN with the Gaussian activation function. The experimental results suggest that the new HONN offers higher generalization capability as well as better handling of missing data.

  17. Application of neural networks in space construction

    NASA Technical Reports Server (NTRS)

    Thilenius, Stephen C.; Barnes, Frank

    1990-01-01

    When trying to decide which tasks should be done by robots and which by humans with respect to space construction, there has been one decisive barrier which ultimately divides the tasks: can a computer do the job? Von Neumann type computers have great difficulty with problems that the human brain seems to handle instantaneously and with little effort. Some of these problems are pattern recognition, speech recognition, content addressable memories, and command interpretation. In an attempt to simulate these talents of the human brain, much research is currently being done into the operation and construction of artificial neural networks. The efficiency of the interface between man and machine, robots in particular, can therefore be greatly improved with the use of neural networks. For example, it would be easier to command a robot to 'fetch an object' rather than having to control the entire operation by remote control.

  18. Automatic breast density classification using neural network

    NASA Astrophysics Data System (ADS)

    Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.

    2015-12-01

    According to studies, the risk of breast cancer is directly associated with breast density. Much research has been done on the automatic diagnosis of breast density using mammography. In the current study, artifacts in mammograms are removed using image processing techniques, and with the method presented in this study, which includes detecting points on the pectoral muscle edges and estimating them using regression techniques, the pectoral muscle is detected with high accuracy in the mammogram and the breast tissue is extracted fully automatically. In order to classify mammography images into three categories (Fatty, Glandular, Dense), a feature based on the difference of gray levels between hard tissue and soft tissue in mammograms has been used in addition to statistical features, with a neural network classifier with a hidden layer. The image database used in this research is the mini-MIAS database, and the maximum accuracy of the system in classifying images has been reported as 97.66% with 8 hidden layers in the neural network.

  19. Toward modeling a dynamic biological neural network.

    PubMed

    Ross, M D; Dayhoff, J E; Mugler, D H

    1990-01-01

    Mammalian macular endorgans are linear bioaccelerometers located in the vestibular membranous labyrinth of the inner ear. In this paper, the organization of the endorgan is interpreted on physical and engineering principles. This is a necessary prerequisite to mathematical and symbolic modeling of information processing by the macular neural network. Mathematical notations that describe the functioning system were used to produce a novel, symbolic model. The model is six-tiered and is constructed to mimic the neural system. Initial simulations show that the network functions best when some of the detecting elements (type I hair cells) are excitatory and others (type II hair cells) are weakly inhibitory. The simulations also illustrate the importance of disinhibition of receptors located in the third tier in shaping nerve discharge patterns at the sixth tier in the model system. PMID:11538873

  20. Neural Flows in Hopfield Network Approach

    NASA Astrophysics Data System (ADS)

    Ionescu, Carmen; Panaitescu, Emilian; Stoicescu, Mihai

    2013-12-01

    In most of the applications involving neural networks, the main problem consists in finding an optimal procedure to reduce the real neuron to simpler models which still express the biological complexity but allow highlighting the main characteristics of the system. We effectively investigate a simple reduction procedure which leads from complex models of Hodgkin-Huxley type to very convenient binary models of Hopfield type. The reduction allows us to describe the neuron interconnections in a quite large network and to obtain information concerning its symmetry and stability. Both cases, homogeneous voltage across the membrane and inhomogeneous voltage along the axon, are addressed. A few numerical simulations of the neural flow based on the cable equation are also presented.

  1. On analog implementations of discrete neural networks

    SciTech Connect

    Beiu, V.; Moore, K.R.

    1998-12-01

    The paper will show that in order to obtain minimum size neural networks (i.e., size-optimal) for implementing any Boolean function, the nonlinear activation function of the neurons has to be the identity function. The authors briefly present many results dealing with the approximation capabilities of neural networks and detail several bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions, they show that implementing Boolean functions can be done using neurons having an identity nonlinear function. It follows that size-optimal solutions can be obtained only using analog circuitry. Conclusions and several comments on the required precision end the paper.

  2. A nonlinear image reconstruction technique for ECT using a combined neural network approach

    NASA Astrophysics Data System (ADS)

    Marashdeh, Q.; Warsito, W.; Fan, L.-S.; Teixeira, F. L.

    2006-08-01

    A combined multilayer feed-forward neural network (MLFF-NN) and analogue Hopfield network is developed for nonlinear image reconstruction of electrical capacitance tomography (ECT). The (nonlinear) forward problem in ECT is solved using the MLFF-NN trained with a set of capacitance data from measurements based on a back-propagation training algorithm with regularization. The inverse problem is solved using an analogue Hopfield network based on a neural-network multi-criteria optimization image reconstruction technique (HN-MOIRT). The nonlinear image reconstruction based on this combined MLFF-NN + HN-MOIRT approach is tested on measured capacitance data not used in training to reconstruct the permittivity distribution. The performance of the technique is compared against commonly used linear Landweber and semi-linear image reconstruction techniques, showing superiority in terms of both stability and quality of reconstructed images.

  3. Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks.

    PubMed

    Patra, J C; Kot, A C

    2002-01-01

    A computationally efficient artificial neural network (ANN) for the purpose of dynamic nonlinear system identification is proposed. The major drawback of feedforward neural networks, such as multilayer perceptrons (MLPs) trained with the backpropagation (BP) algorithm, is that they require a large amount of computation for learning. We propose a single-layer functional-link ANN (FLANN) in which the need for a hidden layer is eliminated by expanding the input pattern by Chebyshev polynomials. The novelty of this network is that it requires much less computation than an MLP. We have shown its effectiveness in the problem of nonlinear dynamic system identification. In the presence of additive Gaussian noise, the performance of the proposed network is found to be similar or superior to that of an MLP. A performance comparison in terms of computational complexity has also been carried out. PMID:18238146
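
    As a rough illustration of the functional-link idea, the sketch below expands a scalar input with Chebyshev polynomials and trains only a single linear layer with a delta-rule (LMS-style) update, so no hidden layer is needed. The polynomial order, learning rate, and toy plant are assumptions rather than the configuration used in the paper.

      import numpy as np

      def chebyshev_expand(x, order=5):
          """Expand each scalar in x (assumed scaled to [-1, 1]) with Chebyshev
          polynomials T_0..T_order via the recurrence T_{n+1} = 2 x T_n - T_{n-1}."""
          T = [np.ones_like(x), x]
          for _ in range(2, order + 1):
              T.append(2 * x * T[-1] - T[-2])
          return np.stack(T, axis=-1)

      rng = np.random.default_rng(0)
      x = rng.uniform(-1, 1, 500)
      d = np.tanh(2 * x) + 0.3 * x ** 3 + rng.normal(0, 0.05, x.shape)  # noisy plant output

      Phi = chebyshev_expand(x, order=5)      # functional-link expansion of the input
      w = np.zeros(Phi.shape[1])              # single linear layer; no hidden layer

      mu = 0.05                               # LMS learning rate
      for epoch in range(50):
          for phi, target in zip(Phi, d):
              w += mu * (target - phi @ w) * phi   # delta-rule update

      print("training MSE:", float(np.mean((Phi @ w - d) ** 2)))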

  4. Neural network error correction for solving coupled ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.

  5. Neural network with dynamically adaptable neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse IO elements. In this manner, training time is decreased by as much as three orders of magnitude.

  6. Reconstructing irregularly sampled images by neural networks

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Yellott, John I., Jr.

    1989-01-01

    Neural-network-like models of receptor position learning and interpolation function learning are being developed as models of how the human nervous system might handle the problems of keeping track of the receptor positions and interpolating the image between receptors. These models may also be of interest to designers of image processing systems desiring the advantages of a retina-like image sampling array.

  7. Artificial neural network cardiopulmonary modeling and diagnosis

    DOEpatents

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  8. Artificial neural network cardiopulmonary modeling and diagnosis

    DOEpatents

    Kangas, L.J.; Keller, P.E.

    1997-10-28

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis. 12 figs.

  9. Analog hardware for learning neural networks

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P. (Inventor)

    1991-01-01

    This is a recurrent or feedforward analog neural network processor having a multi-level neuron array and a synaptic matrix for storing weighted analog values of synaptic connection strengths which is characterized by temporarily changing one connection strength at a time to determine its effect on system output relative to the desired target. That connection strength is then adjusted based on the effect, whereby the processor is taught the correct response to training examples connection by connection.
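
    The change-one-connection-at-a-time procedure described above is essentially finite-difference weight perturbation; the sketch below shows the idea in software on a tiny network. The network size, perturbation step, and learning rate are illustrative assumptions, not parameters of the patented hardware.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, (64, 2))
      t = (X[:, 0] * X[:, 1] > 0).astype(float)       # toy XOR-like target

      W1 = rng.normal(0, 0.5, (2, 4))
      W2 = rng.normal(0, 0.5, (4, 1))

      def loss(W1, W2):
          h = np.tanh(X @ W1)
          y = 1 / (1 + np.exp(-(h @ W2)))
          return float(np.mean((y[:, 0] - t) ** 2))

      eps, lr = 1e-3, 0.5
      for it in range(300):
          for W in (W1, W2):
              # temporarily change one connection strength at a time, measure the
              # effect on the output error, then adjust that strength accordingly
              for idx in np.ndindex(W.shape):
                  base = loss(W1, W2)
                  W[idx] += eps
                  grad = (loss(W1, W2) - base) / eps
                  W[idx] -= eps                       # restore the perturbed weight
                  W[idx] -= lr * grad                 # adjust based on the measured effect

      print("final loss:", loss(W1, W2))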

  10. Hybrid pyramid/neural network object recognition

    NASA Astrophysics Data System (ADS)

    Anandan, P.; Burt, Peter J.; Pearson, John C.; Spence, Clay D.

    1994-02-01

    This work concerns computationally efficient computer vision methods for the search for and identification of small objects in large images. The approach combines neural network pattern recognition with pyramid-based coarse-to-fine search, in a way that eliminates the drawbacks of each method when used by itself and, in addition, improves object identification through learning and exploiting the low-resolution image context associated with the objects. The presentation will describe the system architecture and the performance on illustrative problems.

  11. Nonvolatile Array Of Synapses For Neural Network

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Elements of array programmed with help of ultraviolet light. A 32 x 32 very-large-scale integrated-circuit array of electronic synapses serves as building-block chip for analog neural-network computer. Synaptic weights stored in nonvolatile manner. Makes information content of array invulnerable to loss of power, and, by eliminating need for circuitry to refresh volatile synaptic memory, makes architecture simpler and more compact.

  12. Diagnosing process faults using neural network models

    SciTech Connect

    Buescher, K.L.; Jones, R.D.; Messina, M.J.

    1993-11-01

    In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.

  13. Learning in Neural Networks: VLSI Implementation Strategies

    NASA Technical Reports Server (NTRS)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  14. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  15. Applying neural networks to optimize instrumentation performance

    SciTech Connect

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions, and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  16. Neural network architectures to analyze OPAD data

    NASA Technical Reports Server (NTRS)

    Whitaker, Kevin W.

    1992-01-01

    A prototype Optical Plume Anomaly Detection (OPAD) system is now installed on the space shuttle main engine (SSME) Technology Test Bed (TTB) at MSFC. The OPAD system requirements dictate the need for fast, efficient data processing techniques. To address this need of the OPAD system, a study was conducted into how artificial neural networks could be used to assist in the analysis of plume spectral data.

  17. Neural Network Solves "Traveling-Salesman" Problem

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.
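
    As a rough software illustration of how a Hopfield-style network encodes the traveling-salesman problem, the sketch below integrates the classic Hopfield-Tank dynamics for a handful of random cities. The penalty weights, gain, and step size are ad hoc assumptions, and this formulation is known to be sensitive to tuning, so it should be read as a sketch of the encoding rather than a dependable solver, and not as the electronic circuit described in the brief.

      import numpy as np

      rng = np.random.default_rng(3)
      N = 6
      cities = rng.uniform(0, 1, (N, 2))
      d = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)  # distances

      A, B, C, D = 500.0, 500.0, 200.0, 500.0      # ad hoc constraint/penalty weights
      u0, tau, dt = 0.02, 1.0, 1e-6                # neuron gain, time constant, step size

      u = rng.normal(0.0, 0.01, (N, N))            # u[x, i]: city x at tour position i
      for step in range(100000):
          V = 0.5 * (1 + np.tanh(u / u0))          # neuron outputs in [0, 1]
          row = V.sum(axis=1, keepdims=True) - V   # other positions of the same city
          col = V.sum(axis=0, keepdims=True) - V   # other cities at the same position
          glob = V.sum() - N                       # global constraint: N active neurons
          neigh = np.roll(V, 1, axis=1) + np.roll(V, -1, axis=1)
          du = -u / tau - A * row - B * col - C * glob - D * (d @ neigh)
          u += dt * du

      tour = np.argmax(V, axis=0)                  # most active city at each position
      print("tour:", tour, "valid permutation:", len(set(tour)) == N)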

  18. Analysis of IMS spectra using neural networks

    SciTech Connect

    Bell, S.E.

    1992-09-01

    Ion mobility spectrometry (IMS) has been used for over 20 years, and IMS coupled to gas chromatography (GC/IMS) has been used for over 10 years. There still is no systematic approach to IMS spectral interpretation such as exists for mass spectrometry and infrared spectrometry. Neural networks, a form of adaptive pattern recognition, were examined as a method of data reduction for IMS and GC/IMS. A wide variety of volatile organics were analyzed using IMS and GC/IMS and submitted to different networks for identification. Several different networks and data preprocessing algorithms were studied. A network was linked to a simple rule-based expert system and analyzed. The expert system was used to filter out false positive identifications made by the network using retention indices. The various network configurations were compared to other pattern recognition techniques, including human experts. The network performance was comparable to human experts, but responded much faster. Preliminary comparison of the network to other pattern recognition showed comparable performance. Linkage of the network output to the rule-based retention index system yielded the best performance.

  19. Analysis of IMS spectra using neural networks

    SciTech Connect

    Bell, S.E.

    1992-01-01

    Ion mobility spectrometry (IMS) has been used for over 20 years, and IMS coupled to gas chromatography (GC/IMS) has been used for over 10 years. There still is no systematic approach to IMS spectral interpretation such as exists for mass spectrometry and infrared spectrometry. Neural networks, a form of adaptive pattern recognition, were examined as a method of data reduction for IMS and GC/IMS. A wide variety of volatile organics were analyzed using IMS and GC/IMS and submitted to different networks for identification. Several different networks and data preprocessing algorithms were studied. A network was linked to a simple rule-based expert system and analyzed. The expert system was used to filter out false positive identifications made by the network using retention indices. The various network configurations were compared to other pattern recognition techniques, including human experts. The network performance was comparable to human experts, but responded much faster. Preliminary comparison of the network to other pattern recognition showed comparable performance. Linkage of the network output to the rule-based retention index system yielded the best performance.

  20. The next generation of neural network chips

    SciTech Connect

    Beiu, V.

    1997-08-01

    There have been many national and international neural networks research initiatives: USA (DARPA, NIBS), Canada (IRIS), Japan (HFSP) and Europe (BRAIN, GALATEA, NERVES, ELENA, NERVES 2) -- just to mention a few. Recent developments in the field of neural networks, cognitive science, bioengineering and electrical engineering have made it possible to understand more about the functioning of large ensembles of identical processing elements. There are more research papers than ever proposing solutions, and hardware implementations are by no means an exception. Two fields (computing and neuroscience) are interacting in ways nobody could imagine just several years ago, and -- with the advent of new technologies -- researchers are focusing on trying to copy the Brain. Such an exciting confluence may quite shortly lead to revolutionary new computers and it is the aim of this invited session to bring to light some of the challenging research aspects dealing with the hardware realizability of future intelligent chips. Present-day (conventional) technology is (still) mostly digital and, thus, occupies wider areas and consumes much more power than the solutions envisaged. The innovative algorithmic and architectural ideas should represent important breakthroughs, paving the way towards making neural network chips available to the industry at competitive prices, in relatively small packages and consuming a fraction of the power required by equivalent digital solutions.

  1. CALIBRATION OF ONLINE ANALYZERS USING NEURAL NETWORKS

    SciTech Connect

    Rajive Ganguli; Daniel E. Walsh; Shaohai Yu

    2003-12-05

    Neural networks were used to calibrate an online ash analyzer at the Usibelli Coal Mine, Healy, Alaska, by relating the Americium and Cesium counts to the ash content. A total of 104 samples were collected from the mine, with 47 being from screened coal and the rest from unscreened coal. Each sample corresponded to 20 seconds of coal on the running conveyor belt. Neural network modeling used the quick-stop training procedure. Therefore, the samples were split into training, calibration and prediction subsets. Special techniques, using genetic algorithms, were developed to representatively split the samples into the three subsets. Two separate approaches were tried. In one approach, the screened and unscreened coal was modeled separately. In the other, a single model was developed for the entire dataset. No advantage was seen from modeling the two subsets separately. The neural network method performed very well on average but not individually, i.e., though each individual prediction was unreliable, the average of a few predictions was close to the true average. Thus, the method demonstrated that the analyzers were accurate at 2-3 minute intervals (averages of 6-9 samples), but not at 20 seconds (each prediction).
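
    The 'quick stop' procedure corresponds to what is now usually called early stopping: training continues only while the error on the held-out calibration subset keeps improving. The sketch below shows that loop on a toy linear calibration model; the random subset split, the model itself, and the patience rule are illustrative assumptions, not the genetic-algorithm split or the network used in the report.

      import numpy as np

      rng = np.random.default_rng(0)

      # toy calibration problem: ash content as a noisy linear function of two counts
      X = rng.uniform(0.0, 1.0, (104, 2))          # stand-ins for the two detector counts
      y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0.0, 0.1, 104)

      # split into training, calibration and prediction subsets (here simply at random)
      idx = rng.permutation(104)
      tr, cal, pred = idx[:60], idx[60:82], idx[82:]

      w, b = np.zeros(2), 0.0
      lr, patience = 0.1, 20
      best_err, best_wb, stale = np.inf, (w.copy(), b), 0

      for epoch in range(5000):
          e = X[tr] @ w + b - y[tr]                # gradient-descent step on the training set
          w -= lr * (X[tr].T @ e) / len(tr)
          b -= lr * e.mean()
          cal_err = float(np.mean((X[cal] @ w + b - y[cal]) ** 2))
          if cal_err < best_err:                   # quick stop: watch the calibration error
              best_err, best_wb, stale = cal_err, (w.copy(), b), 0
          else:
              stale += 1
              if stale >= patience:                # stop once it no longer improves
                  break

      w, b = best_wb
      print("prediction-subset MSE:", float(np.mean((X[pred] @ w + b - y[pred]) ** 2)))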

  2. Efficient implementation of neural network deinterlacing

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee

    2009-02-01

    Interlaced scanning has been widely used in most broadcasting systems. However, there are some undesirable artifacts such as jagged patterns, flickering, and line twitters. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP monitors, and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high resolution video contents such as HDTV, the amount of video data to be processed is very large. As a result, the processing time and hardware complexity become an important issue. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated in hardware implementations.
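
    The core trick, replacing the sigmoid with a polynomial so that only multiply-add operations are needed, can be illustrated as below. The polynomial degree, the fitting interval, and the least-squares fit are assumptions, not the particular approximation derived in the paper.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      # fit a low-degree polynomial to the sigmoid over the range the network
      # activations are expected to occupy (assumed here to be [-4, 4])
      xs = np.linspace(-4, 4, 2001)
      poly = np.poly1d(np.polyfit(xs, sigmoid(xs), deg=5))

      def sigmoid_approx(x):
          # clip so the polynomial is never evaluated outside its fitted range
          return poly(np.clip(x, -4, 4))

      err = np.max(np.abs(sigmoid(xs) - sigmoid_approx(xs)))
      print("max absolute error on [-4, 4]:", float(err))

      # the approximation can then replace the sigmoid in the deinterlacing network,
      # e.g. hidden = sigmoid_approx(W @ pixels + b), trading a small accuracy loss
      # for multiply-add-only hardware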

  3. Analysis of complex systems using neural networks

    SciTech Connect

    Uhrig, R.E. (Dept. of Nuclear Engineering, Oak Ridge National Lab., TN)

    1992-01-01

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant, (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems.

  4. Analysis of complex systems using neural networks

    SciTech Connect

    Uhrig, R.E.

    1992-12-31

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant, (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems.

  5. Multiresolution training of Kohonen neural networks

    NASA Astrophysics Data System (ADS)

    Tamir, Dan E.

    2007-09-01

    This paper analyses a trade-off between convergence rate and distortion obtained through multi-resolution training of a Kohonen Competitive Neural Network. Empirical results show that a multi-resolution approach can improve the training stage of several unsupervised pattern classification algorithms, including K-means clustering, LBG vector quantization, and competitive neural networks. While previous research concentrated on the convergence rate of on-line unsupervised training, new results reported in this paper show that the multi-resolution approach can be used to improve training quality (measured as a derivative of the rate distortion function) at the expense of convergence speed. The probability of achieving a desired point in the quality/convergence-rate space of Kohonen Competitive Neural Networks (KCNN) is evaluated using a detailed Monte Carlo set of experiments. It is shown that multi-resolution can reduce the distortion by a factor of 1.5 to 6 while maintaining the convergence rate of traditional KCNN. Alternatively, the convergence rate can be improved without loss of quality. The experiments include a controlled set of synthetic data as well as image data. Experimental results are reported and evaluated.

  6. Deep learning in neural networks: an overview.

    PubMed

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. PMID:25462637

  7. A space-time neural network

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1991-01-01

    Introduced here is a novel technique which adds the dimension of time to the well known back propagation neural network algorithm. Cited here are several reasons why the inclusion of automated spatial and temporal associations are crucial to effective systems modeling. An overview of other works which also model spatiotemporal dynamics is furnished. A detailed description is given of the processes necessary to implement the space-time network algorithm. Several demonstrations that illustrate the capabilities and performance of this new architecture are given.

  8. Evaluation of pan evaporation modeling with two different neural networks and weather station data

    NASA Astrophysics Data System (ADS)

    Kim, Sungwon; Singh, Vijay P.; Seo, Youngmin

    2014-07-01

    This study evaluates neural network models for estimating daily pan evaporation for inland and coastal stations in the Republic of Korea. A multilayer perceptron neural network model (MLP-NNM) and a cascade correlation neural network model (CCNNM) are developed for local implementation. Five-input models (MLP 5 and CCNNM 5) are generally found to be the best for local implementation. The optimal neural network models, including MLP 4, MLP 5, CCNNM 4, and CCNNM 5, perform well for homogeneous (cross-stations 1 and 2) and nonhomogeneous (cross-stations 3 and 4) weather stations. Statistical results of CCNNM are better than those of MLP-NNM during the test period for homogeneous and nonhomogeneous weather stations, except that MLP 4 is better in BUS-DAE and POH-DAE, and MLP 5 is better in POH-DAE. Applying the conventional models for the test period, it is found that neural network models perform better than the conventional models for local, homogeneous, and nonhomogeneous weather stations.

  9. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-02-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. PMID:26230967

  10. Desynchronization in diluted neural networks

    SciTech Connect

    Zillmer, Ruediger; Livi, Roberto; Politi, Antonio; Torcini, Alessandro

    2006-09-15

    The dynamical behavior of a weakly diluted fully inhibitory network of pulse-coupled spiking neurons is investigated. Upon increasing the coupling strength, a transition from a regular to a stochasticlike regime is observed. In the weak-coupling phase, a periodic dynamics is rapidly approached, with all neurons firing with the same rate and mutually phase locked. The strong-coupling phase is characterized by an irregular pattern, even though the maximum Lyapunov exponent is negative. The paradox is solved by drawing an analogy with the phenomenon of 'stable chaos', i.e., by observing that the stochasticlike behavior is 'limited' to an exponentially long (with the system size) transient. Remarkably, the transient dynamics turns out to be stationary.

  11. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near-optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer to determine the number of nodes on the hidden layer of the smaller neural networks, to choose the initial training weights, and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  12. Classification of multisensor remote-sensing images by structured neural networks

    SciTech Connect

    Serpico, S.B.; Roli, F.

    1995-05-01

    This paper proposes the application of structured neural networks to classification of multisensor remote-sensing images. The purpose of the authors' approach is to allow the interpretation of the "network behavior," as it can be utilized by photointerpreters for the validation of the neural classifier. In addition, their approach gives a criterion for defining the network architecture, so avoiding the classical trial-and-error process. First of all, the architecture of structured multilayer feedforward networks is tailored to a multisensor classification problem. Then, such networks are trained to solve the problem by the error backpropagation algorithm. Finally, they are transformed into equivalent networks to obtain a simplified representation. The resulting equivalent networks may be interpreted as a hierarchical arrangement of "committees" that accomplish the classification task by checking on a set of explicit constraints on input data. Experimental results on a multisensor (optical and SAR) data set are described in terms of both classification accuracy and network interpretation. Comparisons with fully connected neural networks and with the k-nearest neighbor classifier are also made.

  13. A neural network short-term forecast of significant thunderstorms

    SciTech Connect

    Mccann, D.W.

    1992-09-01

    Neural networks, artificial-intelligence tools that excel in pattern recognition, are reviewed, and a 3-7-h significant thunderstorm forecast developed with this technique is discussed. Two neural networks learned to forecast significant thunderstorms from fields of surface-based lifted index and surface moisture convergence. These networks are sensitive to the patterns that skilled forecasters recognize as occurring prior to strong thunderstorms. The two neural networks are combined operationally at the National Severe Storm Forecast Center into a single hourly product that enhances pattern-recognition skills. Examples of neural network products are shown, and their potential impact on significant thunderstorm forecasting is demonstrated. 22 refs.

  14. Seismic active control by neural networks.

    SciTech Connect

    Tang, Y.

    1998-01-01

    A study on the application of artificial neural networks (ANNs) to active structural control under seismic loads is carried out. The structure considered is a single-degree-of-freedom (SDF) system with an active bracing device. The control force is computed by a trained neural network. A feed-forward neural network architecture and an adaptive back-propagation training algorithm are used in the study. The neural net is trained to reproduce the function that represents the response-excitation relationship of the SDF system under seismic loads. The input-output training patterns are generated randomly. In the back-propagation training algorithm, the learning rate is determined by ensuring the decrease of the error function at each epoch. The computer program implemented is validated by solving the XOR classification problem. Then, the trained ANN is used to compute the control force according to the control strategy. If the control force exceeds the actuator's capacity limit, it is set equal to that limit. The concept of the control strategy employed herein is to apply the control force at every time step to cancel the system velocity induced at the preceding time step so that the gradual rhythmic buildup of the response is destroyed. The ground motions considered in the numerical example are the 1940 El Centro earthquake and the 1979 Imperial Valley earthquake in California. The system responses with and without the control are calculated and compared. The feasibility and potential of applying ANNs to seismic active control are supported by the promising results obtained from the numerical examples studied.
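
    The control strategy described, applying at each time step a force that cancels the velocity induced in the preceding step and clipping it at the actuator limit, can be sketched directly on a linear single-degree-of-freedom oscillator. The structural parameters, toy ground motion, force limit, and the proportional velocity feedback standing in for the trained network are all assumptions made for illustration.

      import numpy as np

      # single-degree-of-freedom structure: m x'' + c x' + k x = -m a_g(t) + u(t)
      m, c, k = 1.0e3, 0.4e3, 4.0e4             # mass, damping, stiffness (arbitrary)
      dt, n = 0.01, 2000
      t = np.arange(n) * dt
      a_g = 0.5 * 9.81 * np.sin(2 * np.pi * t) * np.exp(-0.1 * t)   # toy ground motion
      u_max = 2.0e3                              # actuator capacity limit (N)
      c_gain = 5.0e3                             # velocity-feedback gain (stand-in for the ANN)

      def simulate(controlled):
          x, v, peak = 0.0, 0.0, 0.0
          for i in range(n):
              if controlled:
                  # cancel the velocity induced at the preceding time step,
                  # clipped to the actuator's capacity limit
                  u = float(np.clip(-c_gain * v, -u_max, u_max))
              else:
                  u = 0.0
              a = (-m * a_g[i] - c * v - k * x + u) / m
              v += a * dt                        # semi-implicit Euler integration
              x += v * dt
              peak = max(peak, abs(x))
          return peak

      print("peak displacement, uncontrolled:", simulate(False))
      print("peak displacement, controlled:  ", simulate(True))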

  15. Automated brain segmentation using neural networks

    NASA Astrophysics Data System (ADS)

    Powell, Stephanie; Magnotta, Vincent; Johnson, Hans; Andreasen, Nancy

    2006-03-01

    Automated methods to delineate brain structures of interest are required to analyze large amounts of imaging data like that being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures such as the thalamus (0.825), caudate (0.745), and putamen (0.755). One of the inputs into the ANN is the a priori probability of a structure existing at a given location. In this previous work, the a priori probability information was generated in Talairach space using a piecewise linear registration. In this work we have increased the dimensionality of this registration using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. The output of the neural network determined if the voxel was defined as one of the N regions used for training. Training was performed using a standard back propagation algorithm. The ANN was trained on a set of 15 images for 750,000,000 iterations. The resulting ANN weights were then applied to 6 test images not part of the training set. Relative overlap calculated for each structure was 0.875 for the thalamus, 0.845 for the caudate, and 0.814 for the putamen. With the modifications of the neural net algorithm and the use of multi-dimensional registration, we found substantial improvement in the automated segmentation method. The resulting segmented structures are as reliable as manual raters and the output of the neural network can be used without additional rater intervention.
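
    A minimal sketch of assembling the per-voxel input vector described above (a priori probability, spherical coordinates, and an 'iris' of surrounding signal intensities); the neighborhood radius, the coordinate convention, and the array layout are assumptions rather than the exact inputs used by the authors.

      import numpy as np

      def voxel_feature_vector(intensity, prior, ijk, center, radius=2):
          """Per-voxel ANN input: a priori probability, spherical coordinates of
          the voxel relative to a reference center, and a cube ('iris') of
          surrounding signal intensities."""
          i, j, k = ijk
          dx, dy, dz = np.asarray(ijk, float) - np.asarray(center, float)
          r = np.sqrt(dx ** 2 + dy ** 2 + dz ** 2)
          theta = np.arctan2(np.hypot(dx, dy), dz)       # polar angle
          phi = np.arctan2(dy, dx)                       # azimuth
          iris = intensity[i - radius:i + radius + 1,
                           j - radius:j + radius + 1,
                           k - radius:k + radius + 1].ravel()
          return np.concatenate([[prior[i, j, k], r, theta, phi], iris])

      # toy usage on random volumes
      rng = np.random.default_rng(0)
      vol = rng.normal(size=(32, 32, 32))
      prob = rng.uniform(size=(32, 32, 32))
      vec = voxel_feature_vector(vol, prob, (16, 16, 16), center=(15, 15, 15))
      print(vec.shape)                                   # 4 scalars + 5*5*5 intensities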

  16. Novel maximum-margin training algorithms for supervised neural networks.

    PubMed

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, however overcoming the complexity involved in solving a constrained optimization problem, as usually occurs in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. Such an algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  17. Detection of Wildfires with Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Umphlett, B.; Leeman, J.; Morrissey, M. L.

    2011-12-01

    Currently, fire detection for the National Oceanic and Atmospheric Administration (NOAA) using satellite data is accomplished with algorithms and error checking by human analysts. Artificial neural networks (ANNs) have been shown to be more accurate than algorithms or statistical methods for applications dealing with multiple datasets of complex observed data in the natural sciences. ANNs also deal well with multiple data sources that are not all equally reliable or equally informative to the problem. An ANN was tested to evaluate its accuracy in detecting wildfires utilizing polar orbiter numerical data from the Advanced Very High Resolution Radiometer (AVHRR). Datasets containing locations of known fires were gathered from NOAA's polar orbiting satellites via the Comprehensive Large Array-data Stewardship System (CLASS). The data were then calibrated and navigation corrected using the Environment for Visualizing Images (ENVI). Fires were located with the aid of shapefiles generated via ArcGIS. Afterwards, several smaller ten pixel by ten pixel datasets were created for each fire (using the ENVI corrected data). Several datasets were created for each fire in order to vary fire position and avoid training the ANN to look only at fires in the center of an image. Datasets containing no fires were also created. A basic pattern recognition neural network was established with the MATLAB neural network toolbox. The datasets were then randomly separated into categories used to train, validate, and test the ANN. To prevent overfitting of the data, the mean squared error (MSE) of the network was monitored and training was stopped when the MSE began to rise. Networks were tested using each channel of the AVHRR data independently, channels 3a and 3b combined, and all six channels. The number of hidden neurons for each input set was also varied from 5 to 350 in steps of 5 neurons. Each configuration was run 10 times, totaling about 4,200 individual network evaluations. Thirty

  18. Extreme events in multilayer, interdependent complex networks and control

    NASA Astrophysics Data System (ADS)

    Chen, Yu-Zhong; Huang, Zi-Gang; Zhang, Hai-Feng; Eisenberg, Daniel; Seager, Thomas P.; Lai, Ying-Cheng

    2015-11-01

    We investigate the emergence of extreme events in interdependent networks. We introduce an inter-layer traffic resource competing mechanism to account for the limited capacity associated with distinct network layers. A striking finding is that, when the number of network layers and/or the overlap among the layers are increased, extreme events can emerge in a cascading manner on a global scale. Asymptotically, there are two stable absorption states: a state free of extreme events and a state full of extreme events, and the transition between them is abrupt. Our results indicate that internal interactions in the multiplex system can yield qualitatively distinct phenomena associated with extreme events that do not occur for independent network layers. An implication is that, e.g., public resource competitions among different service providers can lead to a higher resource requirement than naively expected. We derive an analytical theory to understand the emergence of global-scale extreme events based on the concept of effective betweenness. We also articulate a cost-effective control scheme through increasing the capacity of very few hubs to suppress the cascading process of extreme events so as to protect the entire multi-layer infrastructure against global-scale breakdown.

  19. Neural network computer simulation of medical aerosols.

    PubMed

    Richardson, C J; Barlow, D J

    1996-06-01

    Preliminary investigations have been conducted to assess the potential for using artificial neural networks to simulate aerosol behaviour, with a view to employing this type of methodology in the evaluation and design of pulmonary drug-delivery systems. Details are presented of the general purpose software developed for these tasks; it implements a feed-forward back-propagation algorithm with weight decay and connection pruning, the user having complete run-time control of the network architecture and mode of training. A series of exploratory investigations is then reported in which different network structures and training strategies are assessed in terms of their ability to simulate known patterns of fluid flow in simple model systems. The first of these involves simulations of cellular automata-generated data for fluid flow through a partially obstructed two-dimensional pipe. The artificial neural networks are shown to be highly successful in simulating the behaviour of this simple linear system, but with important provisos relating to the information content of the training data and the criteria used to judge when the network is properly trained. A second set of investigations is then reported in which similar networks are used to simulate patterns of fluid flow through aerosol generation devices, using training data furnished through rigorous computational fluid dynamics modelling. These more complex three-dimensional systems are modelled with equal success. It is concluded that carefully tailored, well trained networks could provide valuable tools not just for predicting but also for analysing the spatial dynamics of pharmaceutical aerosols. PMID:8832491

  20. Marginalization in Random Nonlinear Neural Networks

    NASA Astrophysics Data System (ADS)

    Vasudeva Raju, Rajkumar; Pitkow, Xaq

    2015-03-01

    Computations involved in tasks like causal reasoning in the brain require a type of probabilistic inference known as marginalization. Marginalization corresponds to averaging over irrelevant variables to obtain the probability of the variables of interest. This is a fundamental operation that arises whenever input stimuli depend on several variables, but only some are task-relevant. Animals often exhibit behavior consistent with marginalizing over some variables, but the neural substrate of this computation is unknown. It has been previously shown (Beck et al. 2011) that marginalization can be performed optimally by a deterministic nonlinear network that implements a quadratic interaction of neural activity with divisive normalization. We show that a simpler network can perform essentially the same computation. These Random Nonlinear Networks (RNN) are feedforward networks with one hidden layer, sigmoidal activation functions, and normally-distributed weights connecting the input and hidden layers. We train the output weights connecting the hidden units to an output population, such that the output model accurately represents a desired marginal probability distribution without significant information loss compared to optimal marginalization. Simulations for the case of linear coordinate transformations show that the RNN model has good marginalization performance, except for highly uncertain inputs that have low amplitude population responses. Behavioral experiments, based on these results, could then be used to identify if this model does indeed explain how the brain performs marginalization.

  1. Neural Network Model of Memory Retrieval

    PubMed Central

    Recanatesi, Stefano; Katkov, Mikhail; Romani, Sandro; Tsodyks, Misha

    2015-01-01

    Human memory can store large amounts of information. Nevertheless, recalling it is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding (1) to single memory representations and (2) to intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items having a larger number of neurons in their representation are statistically easier to recall and reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval that is broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013). PMID:26732491

  2. A review and analysis of neural networks for classification of remotely sensed multispectral imagery

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1993-01-01

    A literature survey and analysis of the use of neural networks for the classification of remotely sensed multispectral imagery is presented. As part of a brief mathematical review, the backpropagation algorithm, which is the most common method of training multi-layer networks, is discussed with an emphasis on its application to pattern recognition. The analysis is divided into five aspects of neural network classification: (1) input data preprocessing, structure, and encoding; (2) output encoding and extraction of classes; (3) network architecture, (4) training algorithms; and (5) comparisons to conventional classifiers. The advantages of the neural network method over traditional classifiers are its non-parametric nature, arbitrary decision boundary capabilities, easy adaptation to different types of data and input structures, fuzzy output values that can enhance classification, and good generalization for use with multiple images. The disadvantages of the method are slow training time, inconsistent results due to random initial weights, and the requirement of obscure initialization values (e.g., learning rate and hidden layer size). Possible techniques for ameliorating these problems are discussed. It is concluded that, although the neural network method has several unique capabilities, it will become a useful tool in remote sensing only if it is made faster, more predictable, and easier to use.

  3. Sparse coding for layered neural networks

    NASA Astrophysics Data System (ADS)

    Katayama, Katsuki; Sakata, Yasuo; Horiguchi, Tsuyoshi

    2002-07-01

    We investigate storage capacity of two types of fully connected layered neural networks with sparse coding when binary patterns are embedded into the networks by a Hebbian learning rule. One of them is a layered network, in which a transfer function of even layers is different from that of odd layers. The other is a layered network with intra-layer connections, in which the transfer function of inter-layer is different from that of intra-layer, and inter-layered neurons and intra-layered neurons are updated alternately. We derive recursion relations for order parameters by means of the signal-to-noise ratio method, and then apply the self-control threshold method proposed by Dominguez and Bollé to both layered networks with monotonic transfer functions. We find that the critical value α_C of the storage capacity is about 0.11 |a ln a|^(-1) (a ≪ 1) for both layered networks, where a is the neuronal activity. It turns out that the basin of attraction is larger for both layered networks when the self-control threshold method is applied.

  4. Multistage neural network model for dynamic scene analysis

    SciTech Connect

    Ajjimarangsee, P.

    1989-01-01

    This research is concerned with dynamic scene analysis. The goal of scene analysis is to recognize objects and have a meaningful interpretation of the scene from which images are obtained. The task of the dynamic scene analysis process generally consists of region identification, motion analysis and object recognition. The objective of this research is to develop clustering algorithms using a neural network approach and to investigate a multi-stage neural network model for region identification and motion analysis. The research is separated into three parts. First, a clustering algorithm using Kohonen's self-organizing feature map network is developed to be capable of generating continuous membership-valued outputs. A newly developed version of the updating algorithm of the network is introduced to achieve a high degree of parallelism. A neural network model for the fuzzy c-means algorithm is proposed. In the second part, the parallel algorithms of a neural network model for clustering using the self-organizing feature maps approach and a neural network that models the fuzzy c-means algorithm are modified for implementation on a distributed memory parallel architecture. In the third part, supervised and unsupervised neural network models for motion analysis are investigated. For the supervised neural network, a three-layer perceptron network is trained by a series of images to recognize the movement of the objects. For the unsupervised neural network, a self-organizing feature mapping network learns to recognize the movement of the objects without an explicit training phase.

  5. The strategic organizational use of neural networks: An exploratory study

    SciTech Connect

    Wilson, R.L.

    1990-01-01

    Management of emerging technologies in organizations may be handled by neural networks, a 'brain metaphor' of information processing. In this study, technical and managerial issues surrounding the implementation of a neural network in an organizational decision setting are investigated. The study has three main emphases. (1) An exploratory experimental effort studied the effects of a number of technical implementation factors on the accuracy of a trained neural network. Results indicated that the composition of the training and evaluation sets can significantly affect the actual and perceived decision-making accuracy. (2) A decision-support framework illustrated further important issues that must be considered in appropriately using a neural network. The importance of using a multiplicity of trained networks to assist the decision-making process was shown. (3) It was shown how a neural-network approach provides improved managerial decision support for product screening. The study illustrated that proper use of neural information processing can provide significant organizational benefits.

  6. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    PubMed Central

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and by comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices. PMID:27293423

  7. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    PubMed

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and by comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices. PMID:27293423

  8. Facial expression recognition using constructive neural networks

    NASA Astrophysics Data System (ADS)

    Ma, Liying; Khorasani, Khashayar

    2001-08-01

    The computer-based recognition of facial expressions has been an active area of research for quite a long time. The ultimate goal is to realize intelligent and transparent communications between human beings and machines. Neural network (NN) based recognition methods have been found to be particularly promising, since an NN is capable of implementing the mapping from the feature space of face images to the facial expression space. However, finding a proper network size has always been a frustrating and time-consuming experience for NN developers. In this paper, we propose to use constructive one-hidden-layer feedforward neural networks (OHL-FNNs) to overcome this problem. The constructive OHL-FNN obtains, in a systematic way, a proper network size as required by the complexity of the problem being considered. Furthermore, the computational cost involved in network training can be considerably reduced when compared to standard back-propagation (BP) based FNNs. In our proposed technique, the 2-dimensional discrete cosine transform (2-D DCT) is applied over the entire difference face image to extract relevant features for recognition. The lower-frequency 2-D DCT coefficients obtained are then used to train a constructive OHL-FNN. An input-side pruning technique previously proposed by the authors is also incorporated into the constructive learning process to reduce the network size without sacrificing the performance of the resulting network. The proposed technique is applied to a database consisting of images of 60 men, each having 5 facial expression images (neutral, smile, anger, sadness, and surprise). Images of 40 men are used for network training, and the remaining images are used for generalization and
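
    To make the feature-extraction step concrete, here is a small sketch (using SciPy) of taking the 2-D DCT of a difference face image and keeping only the low-frequency coefficients as network inputs; the block size k and the image data are assumptions, not values from the paper.

        import numpy as np
        from scipy.fftpack import dct

        def dct2(image):
            """Orthonormal 2-D DCT (type II), applied along rows and then columns."""
            return dct(dct(image, axis=0, norm='ortho'), axis=1, norm='ortho')

        def low_freq_features(difference_image, k=8):
            """Keep the top-left k-by-k block of DCT coefficients (the lowest frequencies)."""
            coeffs = dct2(difference_image.astype(float))
            return coeffs[:k, :k].ravel()

        # Hypothetical difference image: an expression image minus the neutral face.
        diff = np.random.rand(64, 64)
        features = low_freq_features(diff, k=8)   # 64 inputs for a constructive OHL-FNN
        print(features.shape)                     # (64,)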

  9. Applying neural networks to ultrasonographic texture recognition

    NASA Astrophysics Data System (ADS)

    Gallant, Jean-Francois; Meunier, Jean; Stampfler, Robert; Cloutier, Jocelyn

    1993-09-01

    A neural network was trained to classify ultrasound image samples of normal, adenomatous (benign tumor) and carcinomatous (malignant tumor) thyroid gland tissue. The samples themselves, as well as their Fourier spectrum, miscellaneous cooccurrence matrices and 'generalized' cooccurrence matrices, were successively submitted to the network, to determine if it could be trained to identify discriminating features of the texture of the image, and if not, which feature extractor would give the best results. Results indicate that the network could indeed extract some distinctive features from the textures, since it could accomplish a partial classification when trained with the samples themselves. But a significant improvement both in learning speed and performance was observed when it was trained with the generalized cooccurrence matrices of the samples.
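
    For reference, a standard gray-level cooccurrence matrix (one of the feature extractors mentioned) can be computed as in the sketch below; the quantization level and displacement are illustrative choices, and the 'generalized' cooccurrence matrices of the paper are not reproduced.

        import numpy as np

        def cooccurrence_matrix(image, levels=16, dx=1, dy=0):
            """Count how often gray level i is followed by gray level j at offset (dy, dx)."""
            q = np.floor(image.astype(float) / image.max() * (levels - 1)).astype(int)
            glcm = np.zeros((levels, levels), dtype=int)
            rows, cols = q.shape
            for r in range(rows - dy):
                for c in range(cols - dx):
                    glcm[q[r, c], q[r + dy, c + dx]] += 1
            return glcm / glcm.sum()              # normalize to joint frequencies

        patch = np.random.randint(0, 256, size=(32, 32))   # stand-in for an ultrasound sample
        print(cooccurrence_matrix(patch).shape)            # (16, 16), flattened as network input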

  10. DC motor speed control using neural networks

    NASA Astrophysics Data System (ADS)

    Tai, Heng-Ming; Wang, Junli; Kaveh, Ashenayi

    1990-08-01

    This paper presents a scheme that uses a feedforward neural network for the learning and generalization of the dynamic characteristics of the starting of a dc motor. The goal is to build an intelligent motor starter with a versatility equivalent to that possessed by a human operator. To attain a fast and safe start from stall for a dc motor, a maximum armature current should be maintained during the starting period. This can be achieved by properly adjusting the armature voltage. The network is trained to learn the inverse dynamics of the motor starting characteristics and outputs a proper armature voltage. Simulation was performed to demonstrate the feasibility and effectiveness of the model. This study also addresses the network performance as a function of the number of hidden units and the number of training samples.
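
    The sketch below shows, under stated assumptions, how such a trained network might be queried inside the starting loop: the input is the current speed and the target (maximum) armature current, and the output is an armature voltage. The network parameters and the one-line motor response are placeholders, not the paper's model.

        import numpy as np

        def mlp(x, W1, b1, W2, b2):
            """One-hidden-layer feedforward network: motor state -> armature voltage."""
            return W2 @ np.tanh(W1 @ x + b1) + b2

        # Hypothetical parameters; in practice they would be learned from the motor's inverse dynamics.
        rng = np.random.default_rng(1)
        W1, b1 = rng.normal(size=(6, 2)), np.zeros(6)
        W2, b2 = rng.normal(size=(1, 6)), np.zeros(1)

        speed, i_max = 0.0, 10.0                  # start from stall, hold maximum armature current
        for step in range(5):
            x = np.array([speed, i_max])
            v_armature = mlp(x, W1, b1, W2, b2)[0]
            speed += 0.1 * v_armature             # placeholder motor response, not a real dc-motor model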

  11. Dynamic Artificial Neural Networks with Affective Systems

    PubMed Central

    Schuman, Catherine D.; Birdwell, J. Douglas

    2013-01-01

    Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance. PMID:24303015
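
    One simple way to realize the threshold control described above is sketched here: the affective system raises all neuron thresholds when the ensemble firing rate is above a target and lowers them when it is below. The gain, target rate, and random potentials are assumptions for illustration only.

        import numpy as np

        def affective_update(thresholds, firing_rate, target_rate, gain=0.05):
            """Nudge thresholds up when the network fires too much, down when it fires too little."""
            return thresholds + gain * (firing_rate - target_rate)

        rng = np.random.default_rng(2)
        thresholds = np.full(50, 0.5)             # one threshold per neuron in the ANN
        for step in range(100):
            potentials = rng.random(50)           # stand-in for accumulated charge in each neuron
            rate = (potentials > thresholds).mean()
            thresholds = affective_update(thresholds, rate, target_rate=0.2)
        print(round(rate, 2))                     # the firing rate is pulled toward the target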

  12. The multilayer temporal network of public transport in Great Britain

    PubMed Central

    Gallotti, Riccardo; Barthelemy, Marc

    2015-01-01

    Despite the widespread availability of information concerning public transport coming from different sources, it is extremely hard to have a complete picture, in particular at a national scale. Here, we integrate timetable data obtained from the United Kingdom open-data program together with timetables of domestic flights, and obtain a comprehensive snapshot of the temporal characteristics of the whole UK public transport system for a week in October 2010. In order to focus on multi-modal aspects of the system, we use a coarse graining procedure and define explicitly the coupling between different transport modes such as connections at airports, ferry docks, rail, metro, coach and bus stations. The resulting weighted, directed, temporal and multilayer network is provided in simple, commonly used formats, ensuring easy access and the possibility of a straightforward use of old or specifically developed methods on this new and extensive dataset. PMID:25977806
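
    A plausible minimal representation of such a dataset, with one record per timetabled connection, is sketched below; the field names and example values are illustrative and not the published file format.

        from dataclasses import dataclass

        @dataclass
        class TemporalEdge:
            origin: str        # coarse-grained node, e.g. a rail station or airport
            destination: str
            layer: str         # transport mode: 'rail', 'coach', 'bus', 'air', 'ferry', 'metro'
            departure: int     # departure time, minutes from the start of the week
            duration: int      # travel time in minutes (the edge weight)

        edges = [
            TemporalEdge('London', 'Manchester', 'rail', departure=8 * 60, duration=127),
            TemporalEdge('Manchester', 'Edinburgh', 'air', departure=11 * 60, duration=55),
        ]
        # Directed temporal adjacency: connections leaving each node, to be sorted by departure time.
        out_edges = {}
        for e in edges:
            out_edges.setdefault(e.origin, []).append(e)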

  13. Prediction of Force Measurements of a Microbend Sensor Based on an Artificial Neural Network

    PubMed Central

    Efendioglu, Hasan S.; Yildirim, Tulay; Fidanboylu, Kemal

    2009-01-01

    Artificial neural network (ANN) based prediction of the response of a microbend fiber optic sensor is presented. To the best of our knowledge no similar work has been previously reported in the literature. Parallel corrugated plates with three deformation cycles, 6 mm thickness of the spacer material and 16 mm mechanical periodicity between deformations were used in the microbend sensor. A Multilayer Perceptron (MLP) with different training algorithms, a Radial Basis Function (RBF) network and a General Regression Neural Network (GRNN) are used as ANN models in this work. All of these models can predict the sensor responses with considerable errors. RBF has the best performance, with the smallest mean square error (MSE) values for the training and test results. Among the MLP training algorithms and GRNN, the Levenberg-Marquardt algorithm gives good results. These models successfully predict the sensor responses, hence ANNs can be used as a useful tool in the design of more robust fiber optic sensors. PMID:22399991
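
    Of the three model families, the GRNN is the simplest to write down: its prediction is a Gaussian-kernel weighted average of the training targets. The sketch below is a generic GRNN with a made-up smoothing parameter and made-up calibration data, not the sensor measurements from the paper.

        import numpy as np

        def grnn_predict(x, X_train, y_train, sigma=0.1):
            """General Regression Neural Network: kernel-weighted average of training targets."""
            d2 = np.sum((X_train - x) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return np.sum(w * y_train) / np.sum(w)

        # Hypothetical calibration data: applied force -> sensor output (arbitrary units).
        X_train = np.linspace(0, 1, 20).reshape(-1, 1)
        y_train = np.sin(3 * X_train[:, 0])       # stand-in for measured microbend responses
        print(grnn_predict(np.array([0.35]), X_train, y_train))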

  14. Artificial neural networks in laboratory medicine and medical outcome prediction.

    PubMed

    Tafeit, E; Reibnegger, G

    1999-09-01

    Since the early nineties the number of scientific papers reporting on artificial neural network (ANN) applications in medicine has been increasing quickly. In the present paper, we describe in some detail the architecture of the network types used most frequently in ANN applications in the broad field of laboratory medicine and clinical chemistry, present a technique-structured review of recent ANN applications in the field, and give information about improvements of available ANN software packages. ANN applications are divided into two main classes: supervised and unsupervised methods. Most of the described supervised applications belong to the fields of medical diagnosis (n = 7) and outcome prediction (n = 9). Laboratory and clinical data are presented to multilayer feed-forward ANNs which are trained by the back-propagation algorithm. Results are often better than those of traditional techniques such as linear discriminant analysis, classification and regression trees (CART), Cox regression analysis, logistic regression, clinical judgement or expert systems. Unsupervised ANN applications provide the ability to reduce the dimensionality of a dataset. Low-dimensional plots can be generated, visually understood and compared. Results are very similar to those of cluster analysis and factor analysis. The ability of Kohonen's self-organizing maps to generate 2D maps of molecule surface properties was successfully applied in drug design. PMID:10596951

  15. Review of feed forward neural network classification preprocessing techniques

    NASA Astrophysics Data System (ADS)

    Asadi, Roya; Kareem, Sameem Abdul

    2014-06-01

    A key feature of Feed Forward Neural Network (FFNN) classification models is that they learn from the input data through their weights. Data preprocessing and pre-training are contributing factors in developing efficient techniques for low training time and high classification accuracy. In this study, we investigate and review preprocessing techniques for FFNN models. Currently, the weights are initialized at random, which is a main source of problems; multilayer auto-encoder networks, as the latest technique, are, like other related techniques, unable to solve these problems. Weight Linear Analysis (WLA) is a combination of data preprocessing and pre-training that generates real weights through the use of normalized input values. By using WLA, the FFNN model increases classification accuracy and improves training time, learning in a single epoch without any further training cycles, gradient computations of the mean square error function, or weight updates. The results of comparison and evaluation show that WLA is a powerful technique in the FFNN classification area.

  16. Target discrimination in synthetic aperture radar using artificial neural networks.

    PubMed

    Principe, J C; Kim, M; Fisher, M

    1998-01-01

    This paper addresses target discrimination in synthetic aperture radar (SAR) imagery using linear and nonlinear adaptive networks. Neural networks are extensively used for pattern classification but here the goal is discrimination. We show that the two applications require different cost functions. We start by analyzing, from a pattern recognition perspective, the two-parameter constant false alarm rate (CFAR) detector which is widely utilized as a target detector in SAR. Then we generalize its principle to construct the quadratic gamma discriminator (QGD), a nonparametrically trained classifier based on local image intensity. The linear processing element of the QGD is further extended with nonlinearities, yielding a multilayer perceptron (MLP) which we call the NL-QGD (nonlinear QGD). MLPs are normally trained based on the L(2) norm. We experimentally show that the L(2) norm is not recommended to train MLPs for discriminating targets in SAR. Inspired by the Neyman-Pearson criterion, we create a cost function based on a mixed norm to weight the false alarms and the missed detections differently. Mixed norms can easily be incorporated into the backpropagation algorithm, and lead to better performance. Several other norms (L(8), cross-entropy) are applied to train the NL-QGD and all outperformed the L(2) norm when validated by receiver operating characteristics (ROC) curves. The data sets are constructed from TABILS 24 ISAR targets embedded in 7 km(2) of SAR imagery (MIT/LL mission 90). PMID:18276330
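
    The key idea of the mixed-norm cost can be illustrated as below: errors on true targets (missed detections) and errors on clutter (false alarms) are raised to different powers and weighted separately, instead of being pooled into a single L(2) error. The particular exponents and weights here are assumptions, not the values validated in the paper.

        import numpy as np

        def mixed_norm_cost(outputs, labels, p_miss=8, p_fa=2, w_miss=1.0, w_fa=0.5):
            """Penalize missed detections and false alarms with different norms."""
            targets = labels == 1
            miss_err = np.abs(1.0 - outputs[targets])     # low scores on true targets
            fa_err = np.abs(outputs[~targets])            # high scores on clutter/confusers
            return w_miss * np.sum(miss_err ** p_miss) + w_fa * np.sum(fa_err ** p_fa)

        outputs = np.array([0.9, 0.4, 0.2, 0.7])          # discriminator scores
        labels = np.array([1, 1, 0, 0])
        print(mixed_norm_cost(outputs, labels))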

  17. Classification of rapeseed and soybean oils by use of unsupervised pattern-recognition methods and neural networks.

    PubMed

    Wesołowski, M; Suchacz, B

    2001-10-01

    Unsupervised pattern-recognition methods and Kohonen neural networks have been applied to the classification of rapeseed and soybean oil samples according to their type and quality by use of chemical and physical properties (density, refractive index, saponification value, and iodine and acid numbers) and thermal properties (thermal decomposition temperatures) as variables. A multilayer feed-forward (MLF) neural network (NN) has been used to select the most important variables for accurate classification of edible oils. To accomplish this task, different neural network architectures trained by the back-propagation-of-error method, using chemical, physical, and thermal properties as inputs, were employed. The network with the best performance and the smallest root mean squared (RMS) error was chosen. The results of MLF network sensitivity analysis enabled the identification of key properties, which were again used as variables in principal components analysis (PCA), cluster analysis (CA), and in Kohonen self-organizing feature maps (SOFM) to prove their reliability. PMID:11688644
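
    Sensitivity analysis of a trained network can be done in several ways; one common scheme, sketched here with a stand-in linear model instead of the trained MLF network, perturbs each input property in turn and ranks the properties by the resulting change in the output. This is an illustration under assumptions, not the authors' exact procedure.

        import numpy as np

        def sensitivity(model, X, delta=0.05):
            """Rank input variables by how much a small perturbation changes the model output."""
            base = model(X)
            scores = []
            for j in range(X.shape[1]):
                Xp = X.copy()
                Xp[:, j] += delta * X[:, j].std()          # perturb one property at a time
                scores.append(np.mean(np.abs(model(Xp) - base)))
            return np.array(scores)

        rng = np.random.default_rng(3)
        W = rng.normal(size=6)                             # six oil properties (density, iodine number, ...)
        model = lambda X: X @ W                            # stand-in for the trained MLF network
        X = rng.normal(size=(40, 6))                       # 40 hypothetical oil samples
        print(np.argsort(sensitivity(model, X))[::-1])     # most influential property first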

  18. One pass learning for generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    The generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient-descent-optimized smoothing parameter value to provide efficient classification. However, this optimization consumes quite a long time, which can be a drawback. In this work, one-pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the proposed method tries to handle these differences by defining two functions for smoothing parameter calculation; thresholding is applied to determine which function will be used. One of these functions is defined for datasets having a wide range of values: it provides balanced smoothing parameters for these datasets through a logarithmic function and by shifting the operation range to the lower boundary. The other function calculates the smoothing parameter value for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets, and the performance of the one-pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and standard and logarithmic learning generalized classifier neural networks in the MATLAB environment. One-pass learning provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural networks. Due to its classification accuracy and speed, the one-pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase the classification performance. PMID
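
    The abstract does not give the two smoothing-parameter functions, so the sketch below only illustrates the general idea under assumptions: each class's smoothing parameter is derived from its standard deviation in a single pass, with a logarithmic rule used for classes whose spread exceeds a threshold.

        import numpy as np

        def smoothing_parameters(X, y, threshold=1.0):
            """Illustrative one-pass rule: per-class smoothing parameter from the class's std."""
            sigmas = {}
            for c in np.unique(y):
                s = X[y == c].std()
                # Threshold switch; the exact functions from the paper are not reproduced here.
                sigmas[c] = np.log1p(s) if s >= threshold else s
            return sigmas

        rng = np.random.default_rng(4)
        X = np.vstack([rng.normal(0, 0.3, (30, 4)), rng.normal(5, 3.0, (30, 4))])
        y = np.array([0] * 30 + [1] * 30)
        print(smoothing_parameters(X, y))          # compact class: direct std; spread class: log rule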

  19. Training product unit neural networks with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.

    1991-01-01

    The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined: product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.
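
    A toy sketch of the two ingredients follows: a product unit multiplies its (positive) inputs raised to learned exponents, and a simple mutation-only genetic algorithm searches those exponents instead of backpropagation. The GA here is deliberately minimal and is not the authors' algorithm; the data and fitness function are invented for illustration.

        import numpy as np

        def product_unit(x, w):
            """Product unit: multiply inputs raised to learned exponents (inputs assumed positive)."""
            return np.prod(x ** w)

        def fitness(w, X, y):
            preds = np.array([product_unit(x, w) for x in X])
            return -np.mean((preds - y) ** 2)      # higher fitness = lower squared error

        rng = np.random.default_rng(5)
        X = rng.uniform(0.5, 2.0, size=(50, 3))
        y = X[:, 0] ** 2 / X[:, 1]                 # target is itself a product-unit mapping
        pop = [rng.normal(size=3) for _ in range(20)]
        for generation in range(200):
            pop.sort(key=lambda w: fitness(w, X, y), reverse=True)
            parents = pop[:5]                      # keep the best, mutate them to refill the population
            pop = parents + [p + rng.normal(scale=0.1, size=3) for p in parents for _ in range(3)]
        print(np.round(pop[0], 2))                 # exponents should approach [2, -1, 0]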

  20. Classification of behavior using unsupervised temporal neural networks

    SciTech Connect

    Adair, K.L.; Argo, P.

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory, Fuzzy ART. The combination yields a neural network capable of quickly clustering pattern sequences as they are generated. The applicability of the architecture is illustrated through a facility monitoring problem.