Sample records for feature extraction artificial

  1. FEX: A Knowledge-Based System For Planimetric Feature Extraction

    NASA Astrophysics Data System (ADS)

    Zelek, John S.

    1988-10-01

    Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.

  2. A Hybrid Neural Network and Feature Extraction Technique for Target Recognition.

    DTIC Science & Technology

    Target features are extracted, and the extracted data are evaluated in an artificial neural network to identify a target at a location within the image scene from which the different viewing angles extend.

  3. Wire bonding quality monitoring via refining process of electrical signal from ultrasonic generator

    NASA Astrophysics Data System (ADS)

    Feng, Wuwei; Meng, Qingfeng; Xie, Youbo; Fan, Hong

    2011-04-01

    In this paper, a technique for on-line quality detection of ultrasonic wire bonding is developed. The electrical signals from the ultrasonic generator supply, namely, voltage and current, are picked up by a measuring circuit and transformed into digital signals by a data acquisition system. A new feature extraction method is presented to characterize the transient property of the electrical signals and further evaluate the bond quality. The method includes three steps. First, the captured voltage and current are filtered by digital bandpass filter banks to obtain the corresponding subband signals such as the fundamental signal, second harmonic, and third harmonic. Second, each subband envelope is obtained using the Hilbert transform for further feature extraction. Third, the subband envelopes are, respectively, separated into three phases, namely, envelope rising, stable, and damping phases, to extract the tiny waveform changes. The different waveform features are extracted from each phase of these subband envelopes. The principal component analysis (PCA) method is used for feature selection in order to remove redundant information and reduce the dimension of the original feature variables. Using the selected features as inputs, an artificial neural network (ANN) is constructed to identify the complex bond fault pattern. By analyzing experimental data with the proposed feature extraction method and neural network, the results demonstrate the advantages of the proposed feature extraction method and the constructed artificial neural network in detecting and identifying bond quality.
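
    The subband-envelope idea above can be sketched in a few lines. The following is a hedged illustration only, not the authors' code: the drive frequency, sampling rate, filter bandwidth and phase split are assumed values, and a synthetic signal stands in for the measured generator voltage (SciPy and scikit-learn).

```python
# Sketch of the subband-envelope feature idea described above (not the authors' code).
# Drive frequency, sampling rate, bandwidth and phase split are illustrative placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.decomposition import PCA

fs = 1_000_000          # sampling rate [Hz], assumed
f0 = 60_000             # ultrasonic drive frequency [Hz], assumed
t = np.arange(0, 0.01, 1 / fs)
voltage = (np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 2 * f0 * t)
           + 0.05 * np.random.randn(t.size))   # toy signal standing in for the measured voltage

def subband_envelope(x, center, fs, half_bw=5_000):
    """Band-pass around one harmonic, then take the Hilbert envelope."""
    b, a = butter(4, [(center - half_bw) / (fs / 2), (center + half_bw) / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def phase_features(env, fs):
    """Split an envelope into rising / stable / damping phases and summarize each."""
    n = env.size
    rising, stable, damping = env[: n // 5], env[n // 5 : 4 * n // 5], env[4 * n // 5 :]
    feats = []
    for seg in (rising, stable, damping):
        feats += [seg.mean(), seg.std(), seg.max(), np.trapz(seg) / fs]
    return feats

# Fundamental, 2nd and 3rd harmonic envelopes -> one feature vector per bond
feature_vector = []
for harmonic in (f0, 2 * f0, 3 * f0):
    feature_vector += phase_features(subband_envelope(voltage, harmonic, fs), fs)

# With many bonds collected, PCA would reduce the redundant feature dimensions
X = np.array([feature_vector] * 20) + 0.01 * np.random.randn(20, len(feature_vector))
X_reduced = PCA(n_components=3).fit_transform(X)
print(X_reduced.shape)
```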

  4. Cognitive and artificial representations in handwriting recognition

    NASA Astrophysics Data System (ADS)

    Lenaghan, Andrew P.; Malyan, Ron

    1996-03-01

    Both cognitive processes and artificial recognition systems may be characterized by the forms of representation they build and manipulate. This paper looks at how handwriting is represented in current recognition systems and at the psychological evidence for its representation in the cognitive processes responsible for reading. Empirical psychological work on feature extraction in early visual processing is surveyed to show that a sound psychological basis for feature extraction exists and to describe the features this approach leads to. The first stage of the development of an architecture for a handwriting recognition system, strongly influenced by the psychological evidence for the cognitive processes and representations used in early visual processing, is reported. This architecture builds a number of parallel low level feature maps from raw data. These feature maps are thresholded and a region labeling algorithm is used to generate sets of features. Fuzzy logic is used to quantify the uncertainty in the presence of individual features.

  5. Artificial Neural Network Application in the Diagnosis of Disease Conditions with Liver Ultrasound Images

    PubMed Central

    Lele, Ramachandra Dattatraya; Joshi, Mukund; Chowdhary, Abhay

    2014-01-01

    The preliminary study presented in this paper compares various texture features extracted from liver ultrasonic images, employing a Multilayer Perceptron (MLP), a type of artificial neural network, to study the presence of disease conditions. An ultrasound (US) image shows echo-texture patterns, which define the organ characteristics. Ultrasound images of liver disease conditions such as “fatty liver,” “cirrhosis,” and “hepatomegaly” produce distinctive echo patterns. However, various ultrasound imaging artifacts and speckle noise make these echo-texture patterns difficult to identify and often hard to distinguish visually. Here, based on the extracted features from the ultrasonic images, we employed an artificial neural network for the diagnosis of disease conditions in the liver and for finding the best classifier that distinguishes between abnormal and normal conditions of the liver. Comparison of the overall performance of all the feature classifiers concluded that the “mixed feature set” is the best feature set. It showed an excellent rate of accuracy for the training data set. The gray level run length matrix (GLRLM) feature shows better results when the network was tested against unknown data. PMID:25332717

  6. Artificial bee colony algorithm for single-trial electroencephalogram analysis.

    PubMed

    Hsu, Wei-Yen; Hu, Ya-Ping

    2015-04-01

    In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background noise removal, feature extraction, feature selection, and feature classification. First, the artifacts and background noise are removed automatically by means of independent component analysis and a surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model coefficients, and coherence and phase-locking value, are then extracted for subsequent classification. Next, the artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, the selected subfeatures are classified by a support vector machine. Comparisons with and without artifact removal and feature selection, and with feature selection by a genetic algorithm, on single-trial EEG data from six subjects indicate that the proposed system is promising and suitable for brain-computer interface applications. © EEG and Clinical Neuroscience Society (ECNS) 2014.

  7. Classification of Respiratory Sounds by Using An Artificial Neural Network

    DTIC Science & Technology

    2001-10-28

    CLASSIFICATION OF RESPIRATORY SOUNDS BY USING AN ARTIFICIAL NEURAL NETWORK M.C. Sezgin, Z. Dokur, T. Ölmez, M. Korürek, Department of Electronics and ... successfully classified by the GAL network. Keywords: Respiratory Sounds, Classification of Biomedical Signals, Artificial Neural Network. I. INTRODUCTION ... process, feature extraction, and classification by the artificial neural network. At first, the RS signal obtained from a real-time measurement equipment is

  8. Application of complex discrete wavelet transform in classification of Doppler signals using complex-valued artificial neural network.

    PubMed

    Ceylan, Murat; Ceylan, Rahime; Ozbay, Yüksel; Kara, Sadik

    2008-09-01

    In biomedical signal classification, compressing the biomedical waveform data is vital because of the huge amount of data. This paper presents two different structures formed using feature extraction algorithms to decrease the size of the feature set in the training and test data. The proposed structures, named wavelet transform-complex-valued artificial neural network (WT-CVANN) and complex wavelet transform-complex-valued artificial neural network (CWT-CVANN), use the real and complex discrete wavelet transforms for feature extraction. The aim of using the wavelet transform is to compress data and to reduce the training time of the network without decreasing the accuracy rate. In this study, the presented structures were applied to the problem of classification of carotid arterial Doppler ultrasound signals. Carotid arterial Doppler ultrasound signals were acquired from the left carotid arteries of 38 patients and 40 healthy volunteers. The patient group included 22 males and 16 females with an established diagnosis of the early phase of atherosclerosis through coronary or aortofemoropopliteal (lower extremity) angiographies (mean age, 59 years; range, 48-72 years). The healthy volunteers were young non-smokers who appeared to bear no risk of atherosclerosis, including 28 males and 12 females (mean age, 23 years; range, 19-27 years). Sensitivity, specificity and average detection rate were calculated for comparison after the training and test phases of all structures were completed. These parameters demonstrated that the training times of the CVANN and the real-valued artificial neural network (RVANN) were reduced using the feature extraction algorithms without decreasing the accuracy rate, in accordance with our aim.
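
    As a rough illustration of compressing a waveform with a discrete wavelet transform before classification, the sketch below keeps only the coarse approximation coefficients and feeds them to a real-valued MLP; the paper's complex-valued network is not reproduced here, and the Doppler signals are replaced by synthetic placeholders (PyWavelets, scikit-learn).

```python
# Minimal sketch of DWT-based feature compression before classification.
# Synthetic signals stand in for the Doppler data; the complex-valued network is omitted.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_signal(healthy, n=1024):
    f = 5 if healthy else 9          # toy spectral difference, not real Doppler data
    t = np.linspace(0, 1, n)
    return np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(n)

def dwt_features(x, wavelet="db4", level=5):
    # Keep only the coarse approximation coefficients: far fewer values than the raw signal
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return coeffs[0]

X = np.array([dwt_features(make_signal(h)) for h in ([True] * 40 + [False] * 38)])
y = np.array([0] * 40 + [1] * 38)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y), "feature length:", X.shape[1])
```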

  9. YADCLAN: yet another digitally-controlled linear artificial neuron.

    PubMed

    Frenger, Paul

    2003-01-01

    This paper updates the author's 1999 RMBS presentation on digitally controlled linear artificial neuron design. Each neuron is based on a standard operational amplifier having excitatory and inhibitory inputs, variable gain, an amplified linear analog output and an adjustable threshold comparator for digital output. This design employs a 1-wire serial network of digitally controlled potentiometers and resistors whose resistance values are set and read back under microprocessor supervision. This system embodies several unique and useful features, including enhanced neuronal stability, dynamic reconfigurability and network extensibility. This artificial neuron is being employed for feature extraction and pattern recognition in an advanced robotic application.

  10. Smell identification of spices using nanomechanical membrane-type surface stress sensors

    NASA Astrophysics Data System (ADS)

    Imamura, Gaku; Shiba, Kota; Yoshikawa, Genki

    2016-11-01

    Artificial olfaction, that is, a chemical sensor system that identifies samples by smell, has not been fully achieved because of the complex perceptual mechanism of olfaction. To realize an artificial olfactory system, not only an array of chemical sensors but also a valid feature extraction method is required. In this study, we achieved the identification of spices by smell using nanomechanical membrane-type surface stress sensors (MSS). Features were extracted from the sensing signals obtained from four MSS coated with different types of polymers, focusing on the chemical interactions between polymers and odor molecules. The principal component analysis (PCA) of the dataset consisting of the extracted parameters demonstrated the separation of each spice on the scatter plot. We discuss the strategy for improving odor identification based on the relationship between the results of PCA and the chemical species in the odors.

  11. A proto-architecture for innate directionally selective visual maps.

    PubMed

    Adams, Samantha V; Harris, Chris M

    2014-01-01

    Self-organizing artificial neural networks are a popular tool for studying visual system development, in particular the cortical feature maps present in real systems that represent properties such as ocular dominance (OD), orientation-selectivity (OR) and direction selectivity (DS). They are also potentially useful in artificial systems, for example robotics, where the ability to extract and learn features from the environment in an unsupervised way is important. In this computational study we explore a DS map that is already latent in a simple artificial network. This latent selectivity arises purely from the cortical architecture without any explicit coding for DS and prior to any self-organising process facilitated by spontaneous activity or training. We find DS maps with local patchy regions that exhibit features similar to maps derived experimentally and from previous modeling studies. We explore the consequences of changes to the afferent and lateral connectivity to establish the key features of this proto-architecture that support DS.

  12. Artificially intelligent recognition of Arabic speaker using voice print-based local features

    NASA Astrophysics Data System (ADS)

    Mahmood, Awais; Alsulaiman, Mansour; Muhammad, Ghulam; Akram, Sheeraz

    2016-11-01

    Local features for any pattern recognition system are based on information extracted locally. In this paper, a local feature extraction technique was developed. The feature was extracted in the time-frequency plane by taking the moving average along the diagonal directions of the plane. This feature captured the time-frequency events, producing a unique pattern for each speaker that can be viewed as a voice print of the speaker. Hence, we referred to this technique as a voice print-based local feature. The proposed feature was compared to other features, including the mel-frequency cepstral coefficient (MFCC), for speaker recognition using two different databases. One of the databases used in the comparison is a subset of an LDC database that consisted of two short sentences uttered by 182 speakers. The proposed feature attained a 98.35% recognition rate compared to 96.7% for MFCC using the LDC subset.

  13. An expert system based on principal component analysis, artificial immune system and fuzzy k-NN for diagnosis of valvular heart diseases.

    PubMed

    Sengur, Abdulkadir

    2008-03-01

    In the last two decades, the use of artificial intelligence methods in medical analysis has been increasing. This is mainly because the effectiveness of classification and detection systems has improved a great deal, helping medical experts in diagnosis. In this work, we investigate the use of principal component analysis (PCA), an artificial immune system (AIS) and fuzzy k-NN to determine normal and abnormal heart valves from Doppler heart sounds. The proposed heart valve disorder detection system is composed of three stages. The first stage is the pre-processing stage; filtering, normalization and white de-noising are the processes used in this stage. The second stage is feature extraction, in which wavelet packet decomposition was used and wavelet entropy values were then taken as features. For reducing the complexity of the system, PCA was used for feature reduction. In the classification stage, AIS and fuzzy k-NN were used. To evaluate the performance of the proposed methodology, a comparative study is realized using a data set containing 215 samples. The validation of the proposed method is measured using the sensitivity and specificity parameters; a 95.9% sensitivity and a 96% specificity rate were obtained.
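
    A minimal sketch of the described pipeline, under stated assumptions: wavelet packet decomposition of a synthetic heart-sound signal, a Shannon-type entropy per terminal node as the feature, PCA reduction, and a plain k-NN standing in for the fuzzy k-NN and AIS classifiers.

```python
# Hedged sketch: wavelet packet entropy features -> PCA -> k-NN.
# Signals and labels are synthetic stand-ins for Doppler heart sounds.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def wavelet_packet_entropy(x, wavelet="db2", level=4):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        c = node.data
        p = c ** 2 / np.sum(c ** 2)                       # normalized subband energy distribution
        feats.append(-np.sum(p * np.log2(p + 1e-12)))     # Shannon-like wavelet entropy
    return np.array(feats)

def toy_heart_sound(abnormal, n=2048):
    t = np.linspace(0, 1, n)
    base = np.sin(2 * np.pi * 40 * t)
    murmur = 0.5 * np.sin(2 * np.pi * 180 * t) if abnormal else 0.0
    return base + murmur + 0.2 * rng.standard_normal(n)

X = np.array([wavelet_packet_entropy(toy_heart_sound(a)) for a in [0] * 60 + [1] * 60])
y = np.array([0] * 60 + [1] * 60)

X_pca = PCA(n_components=5).fit_transform(X)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_pca, y)
print("training accuracy:", knn.score(X_pca, y))
```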

  14. A Neural Relevance Model for Feature Extraction from Hyperspectral Images, and Its Application in the Wavelet Domain

    DTIC Science & Technology

    2006-08-01

    Nikolas Avouris. Evaluation of classifiers for an uneven class distribution problem. Applied Artificial Intelligence, pages 1-24, 2006. Draft manuscript ... data by a hybrid artificial neural network so we may evaluate the classification capabilities of the baseline GRLVQ and our improved GRLVQI. Chapter 4 ... performance of GRLVQ(I), we compare the results against a baseline classification of the 23-class problem with a hybrid artificial neural network (ANN

  15. A Spiking Neural Network in sEMG Feature Extraction.

    PubMed

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-11-03

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. The results showed roughly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control.

  16. Tool Wear Prediction in Ti-6Al-4V Machining through Multiple Sensor Monitoring and PCA Features Pattern Recognition.

    PubMed

    Caggiano, Alessandra

    2018-03-09

    Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interface, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim of monitoring the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA allowed the identification of a smaller number of features (k = 2), the principal component scores, obtained through linear projection of the original d features into a new space with reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VBmax) was achieved, with predicted values very close to the measured tool wear values.
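
    The PCA-plus-neural-network step can be illustrated as below. This is a sketch only: the feature matrix and wear values are random placeholders, the number of original features d = 30 is assumed, and scikit-learn's MLPRegressor stands in for the paper's network.

```python
# Rough illustration (not the authors' code) of reducing many sensorial features to two
# principal-component scores and regressing tool flank wear on them with a small network.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_cuts, d = 50, 30                      # 50 cutting passes, d original sensorial features (assumed)
X = rng.standard_normal((n_cuts, d))
vb_max = 0.05 + 0.002 * np.arange(n_cuts) + 0.005 * rng.standard_normal(n_cuts)  # toy wear curve [mm]

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=2),                # keep only the first two principal component scores
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, vb_max)
print("predicted wear for the last pass:", model.predict(X[-1:])[0])
```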

  18. Classification of Partial Discharge Measured under Different Levels of Noise Contamination.

    PubMed

    Jee Keen Raymond, Wong; Illias, Hazlee Azil; Abu Bakar, Ab Halim

    2017-01-01

    Cable joint insulation breakdown may cause huge losses to power companies. Therefore, it is vital to diagnose the insulation quality to detect early signs of insulation failure. It is well known that there is a correlation between partial discharge (PD) and insulation quality. Although much work has been done on PD pattern recognition, it is usually performed in a noise-free environment. Moreover, work on PD pattern recognition in actual cable joints is rarely found in the literature. Therefore, in this work, classifications of actual cable joint defect types from partial discharge data contaminated by noise were performed. Five cross-linked polyethylene (XLPE) cable joints with artificially created defects were prepared based on the defects commonly encountered on site. Three different types of input features were extracted from the PD patterns under an artificially created noisy environment: statistical features, fractal features and principal component analysis (PCA) features. These input features were used to train the classifiers to classify each PD defect type. Classifications were performed using three different artificial intelligence classifiers: Artificial Neural Networks (ANN), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Support Vector Machine (SVM). It was found that the classification accuracy decreases with higher noise levels, but PCA features used with SVM and ANN showed the strongest tolerance against noise contamination.

  19. Automotive System for Remote Surface Classification.

    PubMed

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-04-01

    In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in sonar and polarimetric radar data fusion, extraction of features for separate swathes of illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. The features are extracted from backscattered signals, and then the procedures of principal component analysis and supervised classification are applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested for recognition of a large number of real surfaces in different weather conditions, with an average correct-classification accuracy of 95%. The obtained results thereby demonstrate that the use of the proposed system architecture and statistical methods allows reliable discrimination of various road surfaces in real conditions.

  1. Prediction of pork loin quality using online computer vision system and artificial intelligence model.

    PubMed

    Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David

    2018-06-01

    The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirements. Color images of pork loin samples were acquired using a CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrument color measurement and crude fat percentage were used as control measurements. Image features (18 color features; 1 marbling feature; 88 texture features) were extracted from whole pork loin color images. An artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that the CVS with support vector machine modeling reached the highest prediction accuracy of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Comparison of ANN and SVM for classification of eye movements in EOG signals

    NASA Astrophysics Data System (ADS)

    Qi, Lim Jia; Alias, Norma

    2018-03-01

    Nowadays, the electrooculogram is regarded as one of the most important biomedical signals for measuring and analyzing eye movement patterns, which makes it helpful in designing an EOG-based Human Computer Interface (HCI). In this research, electrooculography (EOG) data were obtained from five volunteers. The EOG data were then preprocessed before feature extraction methods were employed to further reduce the dimensionality of the data. Three feature extraction approaches were put forward, namely statistical parameters, autoregressive (AR) coefficients using the Burg method, and power spectral density (PSD) using the Yule-Walker method. These features then became the input to both an artificial neural network (ANN) and a support vector machine (SVM). The performance of the combinations of different feature extraction methods and classifiers is presented and analyzed. It was found that statistical parameters + SVM achieved the highest classification accuracy of 69.75%.
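
    The statistical and autoregressive features described above can be sketched as follows. This is an assumption-laden illustration: the EOG traces are synthetic step-like saccades, the AR coefficients come from a small Yule-Walker estimate rather than the Burg method, and an RBF SVM is used as the classifier.

```python
# Sketch only: simple statistics + Yule-Walker AR coefficients as features, SVM as classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def statistical_features(x):
    return [x.mean(), x.std(), x.min(), x.max(), np.mean(np.abs(np.diff(x)))]

def yule_walker_ar(x, order=4):
    """AR coefficients from the autocorrelation sequence (Yule-Walker equations)."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1 : x.size + order] / x.size
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])

def make_eog(direction, n=256):
    # crude saccade-like step plus noise; the sign encodes left/right gaze
    sig = np.concatenate([np.zeros(n // 2), direction * np.ones(n // 2)])
    return sig + 0.3 * rng.standard_normal(n)

X, y = [], []
for label, direction in ((0, -1.0), (1, +1.0)):
    for _ in range(40):
        x = make_eog(direction)
        X.append(statistical_features(x) + list(yule_walker_ar(x)))
        y.append(label)

clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```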

  3. Concurrent evolution of feature extractors and modular artificial neural networks

    NASA Astrophysics Data System (ADS)

    Hannak, Victor; Savakis, Andreas; Yang, Shanchieh Jay; Anderson, Peter

    2009-05-01

    This paper presents a new approach for the design of feature-extracting recognition networks that do not require expert knowledge in the application domain. Feature-Extracting Recognition Networks (FERNs) are composed of interconnected functional nodes (feurons), which serve as feature extractors, and are followed by a subnetwork of traditional neural nodes (neurons) that act as classifiers. A concurrent evolutionary process (CEP) is used to search the space of feature extractors and neural networks in order to obtain an optimal recognition network that simultaneously performs feature extraction and recognition. By constraining the hill-climbing search functionality of the CEP on specific parts of the solution space, i.e., individually limiting the evolution of feature extractors and neural networks, it was demonstrated that concurrent evolution is a necessary component of the system. Application of this approach to a handwritten digit recognition task illustrates that the proposed methodology is capable of producing recognition networks that perform in line with other methods without the need for expert knowledge in image processing.

  4. Automatic emotional expression analysis from eye area

    NASA Astrophysics Data System (ADS)

    Akkoç, Betül; Arslan, Ahmet

    2015-02-01

    Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on features that were automatically extracted from the eye area. First, the face area and the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis were obtained from the eye area through discrete wavelet transformation. Using these parameters, emotional expression analysis was performed through artificial intelligence techniques. As a result of the experimental studies, 6 universal emotions consisting of expressions of happiness, sadness, surprise, disgust, anger and fear were classified at a success rate of 84% using artificial neural networks.

  6. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2012-01-01

    Automatic brain tissue segmentation in medical images is a crucial task for diagnosis and treatment. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses modified intraframe coding from H.264/AVC for feature extraction. Extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. Feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of feature space was reduced to 12. Classification of Chinese liquors was performed by using back propagation artificial neural network (BP-ANN), linear discrimination analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than LDA and BP-ANN. Finally the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier and classification rate was 98.75% and 100%, respectively.

  9. On-line Tool Wear Detection on DCMT070204 Carbide Tool Tip Based on Noise Cutting Audio Signal using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Prasetyo, T.; Amar, S.; Arendra, A.; Zam Zami, M. K.

    2018-01-01

    This study develops an on-line detection system to predict the wear of a DCMT070204 tool tip during cutting of the workpiece. The machine used in this research is a CNC ProTurn 9000, cutting an ST42 steel cylinder. The audio signal was captured using a microphone placed on the tool post and recorded in Matlab. The signal is recorded at a sampling rate of 44.1 kHz and a sample size of 1024. The recorded data consist of 110 records derived from the audio signal while cutting with a normal chisel and a worn chisel. Signal feature extraction is then performed in the frequency domain using the Fast Fourier Transform. Feature selection is done based on correlation analysis, and tool wear classification is performed using an artificial neural network with the 33 selected input features. This artificial neural network is trained with the back-propagation method. Classification performance testing yields an accuracy of 74%.
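
    A loose sketch of this audio pipeline is given below: 44.1 kHz frames, FFT band energies as features, correlation-based feature selection, and a back-propagation (MLP) classifier. The recordings, band count and number of selected features are assumptions, not the study's data or code.

```python
# Loose sketch of the described audio pipeline; "recordings" are synthetic noise bursts.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
fs, n = 44_100, 1024

def fft_band_energies(frame, n_bands=40):
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.sum() for b in bands])

def toy_recording(worn):
    # a worn tool is imitated by extra high-frequency content
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 800 * t) + 0.3 * rng.standard_normal(n)
    if worn:
        x += 0.6 * np.sin(2 * np.pi * 9_000 * t)
    return x

X = np.array([fft_band_energies(toy_recording(w)) for w in [0] * 55 + [1] * 55])
y = np.array([0] * 55 + [1] * 55)

# correlation-based feature selection: keep the bands most correlated with the wear label
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
selected = np.argsort(corr)[::-1][:10]

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
clf.fit(X[:, selected], y)
print("selected bands:", sorted(selected.tolist()),
      "training accuracy:", clf.score(X[:, selected], y))
```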

  10. Multiple degree of freedom optical pattern recognition

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1987-01-01

    Three general optical approaches to multiple degree of freedom object pattern recognition (where no stable object rest position exists) are advanced. These techniques include feature extraction, correlation, and artificial intelligence. The details of the various processors are presented together with initial results.

  11. LiDAR DTMs and anthropogenic feature extraction: testing the feasibility of geomorphometric parameters in floodplains

    NASA Astrophysics Data System (ADS)

    Sofia, G.; Tarolli, P.; Dalla Fontana, G.

    2012-04-01

    In floodplains, massive investments in land reclamation have always played an important role in the past for flood protection. In these contexts, human alteration is reflected by artificial features ('anthropogenic features'), such as banks, levees or road scarps, that constantly increase and change in response to the rapid growth of human populations. For these areas, various existing and emerging applications require up-to-date, accurate and sufficiently attributed digital data, but such information is usually lacking, especially when dealing with large-scale applications. More recently, national or local mapping agencies in Europe have been moving towards the generation of digital topographic information that conforms to reality and is highly reliable and up to date. LiDAR Digital Terrain Models (DTMs) covering large areas are readily available for public authorities, and there is a greater and more widespread interest in the application of such information by agencies responsible for land management for the development of automated methods aimed at solving geomorphological and hydrological problems. Automatic feature recognition based upon DTMs can offer, for large-scale applications, a quick and accurate method that can help in improving topographic databases, and that can overcome some of the problems associated with traditional, field-based geomorphological mapping, such as restrictions on access and constraints of time or costs. Although anthropogenic features such as levees and road scarps are artificial structures that do not belong to what is usually defined as the bare ground surface, they are implicitly embedded in digital terrain models (DTMs). Automatic feature recognition based upon DTMs, therefore, can offer a quick and accurate method that does not require additional data, and that can help in improving flood defense asset information, flood modeling or other applications. In natural contexts, morphological indicators derived from high resolution topography have been proven to be reliable for feasible applications. The use of statistical operators as thresholds for these geomorphic parameters, furthermore, has shown high reliability for feature extraction in mountainous environments. The goal of this research is to test whether these morphological indicators and objective thresholds are also feasible in floodplains, where features assume different characteristics and other artificial disturbances might be present. In this work, three different geomorphic parameters are tested and applied at different scales on a LiDAR DTM of a typical alluvial plain area in the North East of Italy. The box-plot is applied to identify the threshold for feature extraction, and a filtering procedure is proposed to improve the quality of the final results. The effectiveness of the different geomorphic parameters is analyzed by comparing automatically derived features with surveyed ones. The results highlight the capability of high resolution topography, geomorphic indicators and statistical thresholds for anthropogenic feature extraction and characterization in a floodplain context.

  12. Online signature recognition using principal component analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan

    2016-12-01

    In this paper, we propose an algorithm for on-line signature recognition using the fingertip point in the air from the depth image acquired by Kinect. We extract 10 statistical features from the X, Y, and Z axes, which are invariant to shifting and scaling of the signature trajectories in three-dimensional space. An artificial neural network is adopted to solve the complex signature classification problem. The 30-dimensional features are converted into 10 principal components using principal component analysis, which retain 99.02% of the total variance. We implement the proposed algorithm and test it on actual on-line signatures. In the experiment, we verify that the proposed method successfully classifies 15 different on-line signatures. Experimental results show a 98.47% recognition rate when using only 10 features.
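
    The feature construction and reduction can be sketched as below, purely as an illustration: ten simple statistics per axis of a 3-D trajectory (30 features), PCA down to 10 components, and an MLP classifier; the trajectories are random stand-ins for the Kinect signature data.

```python
# Illustrative sketch only: per-axis statistics -> PCA(10) -> neural network classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

def axis_stats(v):
    return [v.mean(), v.std(), v.min(), v.max(), np.median(v),
            np.ptp(v), np.percentile(v, 25), np.percentile(v, 75),
            np.mean(np.abs(np.diff(v))), np.sum(np.abs(np.diff(v)))]

def signature_features(traj):                 # traj: (n_points, 3) array of X, Y, Z
    return np.concatenate([axis_stats(traj[:, k]) for k in range(3)])

# 15 "signers", a few noisy repetitions each (placeholder data)
X, y = [], []
for signer in range(15):
    prototype = rng.standard_normal((120, 3)).cumsum(axis=0) + 10 * signer
    for _ in range(6):
        X.append(signature_features(prototype + 0.5 * rng.standard_normal((120, 3))))
        y.append(signer)

model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0))
model.fit(np.array(X), np.array(y))
print("training accuracy:", model.score(np.array(X), np.array(y)))
```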

  13. Recognition of Similar Shaped Handwritten Marathi Characters Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Jane, Archana P.; Pund, Mukesh A.

    2012-03-01

    The growing need for handwritten Marathi character recognition in Indian offices such as passport and railway offices has made it a vital area of research. Similarly shaped characters are more prone to misclassification. In this paper a novel method is provided to recognize handwritten Marathi characters based on feature extraction and an adaptive smoothing technique. Feature selection methods avoid unnecessary patterns in an image, whereas the adaptive smoothing technique forms smooth character shapes; combining both approaches leads to better results. Previous studies show that no single technique achieves 100% accuracy in the handwritten character recognition area. This approach of combining adaptive smoothing and feature extraction gives better results (approximately 75-100%) and the expected outcomes.

  14. Invariant-feature-based adaptive automatic target recognition in obscured 3D point clouds

    NASA Astrophysics Data System (ADS)

    Khuon, Timothy; Kershner, Charles; Mattei, Enrico; Alverio, Arnel; Rand, Robert

    2014-06-01

    Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, electronic system, A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm architecture described below is particularly devised for solving single-sensor classification non-parametrically. A feature set is extracted from an input point cloud, normalized, and classified by a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area. The figure above (see manuscript) illustrates the architecture of the feature-based adaptive signature extraction of 3D point clouds including LIDAR, RADAR, and electro-optical data. This network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. In the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network as shown above for the decision class output. The network consists of three sequential functional modules. The first module performs feature extraction, converting the input cluster into a set of singular-value features, i.e., a feature vector. Then the feature vector is input into the feature normalization module to normalize and balance it before being fed to the neural net classifier for classification. The neural net can be trained with actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. If new novel data are added after the neural net has been trained, training is resumed until the neural net has incrementally learned the new novel data. The associative memory capability of the neural net enables this incremental learning. The back propagation algorithm or a support vector machine can be utilized for the classification and recognition.

  15. Target detection method by airborne and spaceborne images fusion based on past images

    NASA Astrophysics Data System (ADS)

    Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng

    2017-11-01

    To address the problems that remote sensing target detection methods make little use of past remote sensing data of the target area and cannot recognize camouflaged targets accurately, a target detection method based on the fusion of airborne and spaceborne images with past imagery is proposed in this paper. A past spaceborne remote sensing image of the target area is taken as the background. The airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne and spaceborne image registration, target change feature extraction, background noise suppression and artificial target feature extraction based on the real-time aerial optical remote sensing image. Finally, a support vector machine is used to detect and recognize the target on the fused feature data. The experimental results establish that the proposed method combines the target-area change features of airborne and spaceborne remote sensing images with a target detection algorithm, and obtains good detection and recognition performance on camouflaged and non-camouflaged targets.

  16. A Target-Less Vision-Based Displacement Sensor Based on Image Convex Hull Optimization for Measuring the Dynamic Response of Building Structures.

    PubMed

    Choi, Insub; Kim, JunHee; Kim, Donghyun

    2016-12-08

    Existing vision-based displacement sensors (VDSs) extract displacement data through changes in the movement of a target that is identified within the image using natural or artificial structure markers. A target-less vision-based displacement sensor (hereafter called "TVDS") is proposed; it can extract displacement data without targets by using feature points in the image of the structure. The TVDS can extract and track the feature points without a target in the image through image convex hull optimization, which adjusts and optimizes the threshold values so that every image frame has the same convex hull and the center of the convex hull is the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated based on the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing them with the displacement data of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted using the obtained displacement data and compared with numerical analysis results. The TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. As the proposed TVDS can easily extract displacement data even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image.

  17. 3D fluid-structure modelling and vibration analysis for fault diagnosis of Francis turbine using multiple ANN and multiple ANFIS

    NASA Astrophysics Data System (ADS)

    Saeed, R. A.; Galybin, A. N.; Popov, V.

    2013-01-01

    This paper discusses condition monitoring and fault diagnosis in a Francis turbine based on the integration of numerical modelling with several different artificial intelligence (AI) techniques. In this study, a numerical approach for fluid-structure (turbine runner) analysis is presented. The results of the numerical analysis provide frequency response function (FRF) data sets along the x-, y- and z-directions under different operating loads and different positions and sizes of faults in the structure. To extract features and reduce the dimensionality of the obtained FRF data, principal component analysis (PCA) has been applied. Subsequently, the extracted features are formulated and fed into multiple artificial neural networks (ANN) and multiple adaptive neuro-fuzzy inference systems (ANFIS) in order to identify the size and position of the damage in the runner and estimate the turbine operating conditions. The results demonstrate the effectiveness of this approach and provide satisfactory accuracy even when the input data are corrupted with a certain level of noise.

  18. Fall Detection Using Smartphone Audio Features.

    PubMed

    Cheffena, Michael

    2016-07-01

    An automated fall detection system based on smartphone audio features is developed. The spectrogram, mel frequency cepstral coefficient (MFCC), linear predictive coding (LPC), and matching pursuit (MP) features of different fall and no-fall sound events are extracted from experimental data. Based on the extracted audio features, four different machine learning classifiers: k-nearest neighbor classifier (k-NN), support vector machine (SVM), least squares method (LSM), and artificial neural network (ANN) are investigated for distinguishing between fall and no-fall events. For each audio feature, the performance of each classifier in terms of sensitivity, specificity, accuracy, and computational complexity is evaluated. The best performance is achieved using spectrogram features with the ANN classifier, with sensitivity, specificity, and accuracy all above 98%. The classifier also has acceptable computational requirements for training and testing. The system is applicable in home environments where the phone is placed in the vicinity of the user.
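
    One branch of such a system, spectrogram features plus a conventional classifier, might look like the sketch below. The audio clips are synthetic, the sampling rate and frame length are assumptions, and the MFCC/LPC/MP features are omitted to keep the example dependency-light.

```python
# Hedged sketch: log-spectrogram summary features from short clips, k-NN and SVM classifiers.
import numpy as np
from scipy.signal import spectrogram
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(11)
fs = 16_000

def clip(fall):
    t = np.arange(0, 0.5, 1 / fs)
    x = 0.1 * rng.standard_normal(t.size)
    if fall:                                   # a fall is imitated by a low-frequency thump
        x += np.exp(-20 * t) * np.sin(2 * np.pi * 120 * t)
    return x

def spectrogram_features(x):
    f, t, S = spectrogram(x, fs=fs, nperseg=256)
    S = np.log(S + 1e-10)
    return np.concatenate([S.mean(axis=1), S.std(axis=1)])   # per-frequency mean and spread

X = np.array([spectrogram_features(clip(lbl)) for lbl in [0] * 30 + [1] * 30])
y = np.array([0] * 30 + [1] * 30)

for name, clf in [("k-NN", KNeighborsClassifier(3)), ("SVM", SVC())]:
    print(name, "training accuracy:", clf.fit(X, y).score(X, y))
```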

  19. Offshore platform sourced pollution monitoring using space-borne fully polarimetric C and X band synthetic aperture radar.

    PubMed

    Singha, Suman; Ressel, Rudolf

    2016-11-15

    Use of polarimetric SAR data for offshore pollution monitoring is relatively new and shows great potential for operational offshore platform monitoring. This paper describes the development of an automated oil spill detection chain for operational purposes based on C-band (RADARSAT-2) and X-band (TerraSAR-X) fully polarimetric images, wherein we use polarimetric features to characterize oil spills and look-alikes. A number of near-coincident TerraSAR-X and RADARSAT-2 images have been acquired over offshore platforms. Ten polarimetric feature parameters were extracted from different types of oil and 'look-alike' spots and divided into training and validation datasets. The extracted features were then used to develop a pixel-based Artificial Neural Network classifier. The mutual information content among extracted features was assessed and the feature parameters were ranked according to their ability to discriminate between oil spill and look-alike spots. Polarimetric features such as Scattering Diversity, Surface Scattering Fraction and Span proved to be most suitable for operational services. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-12-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, feature extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.

  1. Feature generation using genetic programming with application to fault classification.

    PubMed

    Guo, Hong; Jack, Lindsay B; Nandi, Asoke K

    2005-02-01

    One of the major challenges in pattern recognition problems is the feature extraction process, which derives new features from existing features, or directly from raw data, in order to reduce the cost of computation during the classification process while improving classifier efficiency. Most current feature extraction techniques transform the original pattern vector into a new vector with increased discrimination capability but lower dimensionality. This is conducted within a predefined feature space and thus has limited searching power. Genetic programming (GP) can generate new features from the original dataset without prior knowledge of the probabilistic distribution. In this paper, a GP-based approach is developed for feature extraction from raw vibration data recorded from a rotating machine with six different conditions. The created features are then used as the inputs to a neural classifier for the identification of six bearing conditions. Experimental results demonstrate the ability of GP to automatically discover the different bearing conditions using features expressed in the form of nonlinear functions. Furthermore, four sets of results--using GP extracted features with artificial neural networks (ANN) and support vector machines (SVM), as well as traditional features with ANN and SVM--have been obtained. This GP-based approach is used for bearing fault classification for the first time and exhibits superior searching power over other techniques. Additionally, it significantly reduces the computation time compared with a genetic algorithm (GA), and therefore makes the solution more practical to realize.

  2. Built-up Areas Extraction in High Resolution SAR Imagery based on the method of Multiple Feature Weighted Fusion

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.

    2015-06-01

    Synthetic aperture radar is being applied more and more widely in remote sensing because of its all-time and all-weather operation, and feature extraction research in high resolution SAR images has become a hot topic of concern. In particular, with the continuous improvement of airborne SAR image resolution, image texture information becomes more abundant, which is of great significance to classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed according to the built-up texture characteristics. First of all, statistical texture features and structural features are extracted, respectively, by the classical gray level co-occurrence matrix method and the variogram function method, with direction information considered in this process. Next, feature weights are calculated according to the Bhattacharyya distance. Then, all features are fused with these weights. At last, the fused image is classified with the K-means classification method and the built-up areas are extracted after a post-classification process. The proposed method has been tested on domestic airborne P-band polarimetric SAR images, and two groups of experiments based on the statistical texture method and the structural texture method were carried out for comparison. In addition to qualitative analysis, quantitative analysis based on manually selected built-up areas was performed: in the relatively simple experimental area, the detection rate is more than 90%, and in the relatively complex experimental area, the detection rate is also higher than that of the other two methods. In the study area, the results show that this method can effectively and accurately extract built-up areas in high resolution airborne SAR imagery.
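
    The weighted-fusion idea can be sketched as follows, with several simplifications: GLCM texture features per patch (scikit-image), a univariate Gaussian Bhattacharyya distance between two reference classes as the per-feature weight, and K-means on the weighted features; the variogram-based structural features and the SAR data themselves are not reproduced.

```python
# Hedged sketch of Bhattacharyya-weighted GLCM features followed by K-means clustering.
# scikit-image >= 0.19 spells these graycomatrix/graycoprops (older: greycomatrix/greycoprops).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

def glcm_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2], levels=64,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

def toy_patch(built_up):
    # built-up areas imitated by higher-contrast texture; real SAR patches are not used here
    base = rng.integers(0, 32, size=(32, 32)) if built_up else rng.integers(20, 28, size=(32, 32))
    return base.astype(np.uint8)

built = np.array([glcm_features(toy_patch(True)) for _ in range(30)])
other = np.array([glcm_features(toy_patch(False)) for _ in range(30)])

def bhattacharyya(a, b):
    m1, m2, v1, v2 = a.mean(), b.mean(), a.var() + 1e-12, b.var() + 1e-12
    return 0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2)) + 0.25 * (m1 - m2) ** 2 / (v1 + v2)

weights = np.array([bhattacharyya(built[:, j], other[:, j]) for j in range(built.shape[1])])
X = np.vstack([built, other]) * weights          # weighted feature fusion
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("feature weights:", np.round(weights, 3), "cluster sizes:", np.bincount(labels))
```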

  3. A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks

    PubMed Central

    Wang, Changjian; Liu, Xiaohui; Jin, Shiyao

    2018-01-01

    Wound segmentation plays an important supporting role in wound observation and wound healing. Current methods of image segmentation include those based on traditional image processing and those based on deep neural networks. The traditional methods use artificially designed image features to complete the task without large amounts of labeled data, while methods based on deep neural networks can extract image features effectively without artificial design but require a lot of training data. Combining their advantages, this paper presents a composite model of wound segmentation. The model uses the skin-with-wound detection algorithm designed in this paper to highlight image features; the preprocessed images are then segmented by deep neural networks, and semantic corrections are applied to the segmentation results at the end. The model shows good performance in our experiment. PMID:29955227

  4. Texture Feature Extraction and Classification for Iris Diagnosis

    NASA Astrophysics Data System (ADS)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques in iris image processing and combining occidental iridology with traditional Chinese medicine is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis and disease classification. For the pre-processing, a two-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally, support vector machines are constructed to recognize two typical diseases, namely alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is quite effective and promising for medical diagnosis and health surveillance for both hospital and public use.

  5. A Genetic-Based Feature Selection Approach in the Identification of Left/Right Hand Motor Imagery for a Brain-Computer Interface

    PubMed Central

    Yaacoub, Charles; Mhanna, Georges; Rihana, Sandy

    2017-01-01

    Electroencephalography is a non-invasive measure of the brain electrical activity generated by millions of neurons. Feature extraction in electroencephalography analysis is a core issue that may lead to accurate brain mental state classification. This paper presents a new feature selection method that improves left/right hand movement identification of a motor imagery brain-computer interface, based on genetic algorithms and artificial neural networks used as classifiers. Raw electroencephalography signals are first preprocessed using appropriate filtering. Feature extraction is carried out afterwards, based on spectral and temporal signal components, and thus a feature vector is constructed. As various features might be inaccurate and mislead the classifier, thus degrading the overall system performance, the proposed approach identifies a subset of features from a large feature space, such that the classifier error rate is reduced. Experimental results show that the proposed method is able to reduce the number of features to as low as 0.5% (i.e., the number of ignored features can reach 99.5%) while improving the accuracy, sensitivity, specificity, and precision of the classifier. PMID:28124985

  7. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    NASA Astrophysics Data System (ADS)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High-density silicon carbide materials are commonly used as the ceramic element of hard armour inserts used in traditional body armour systems to reduce their weight, while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time consuming and very expensive; in addition, multiple defects are often misinterpreted as single defects in X-ray images. To address these problems, an ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost effective and reliable, as the methodology is applicable for on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (unsupervised), are supplied with features selected using Principal Component Analysis (PCA), and their classification performance is compared. This investigation establishes experimentally that PCA can be effectively used as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection in comparison with the X-ray technique.
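
    The sub-band coding idea can be sketched as follows: compute wavelet sub-band energies of each A-scan, reduce them with PCA and classify with an ANN. The sketch assumes PyWavelets and scikit-learn are available and uses synthetic echoes in place of the real ultrasonic test signals.

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)

    # Synthetic stand-ins for ultrasonic A-scans: "defect" signals carry an extra echo.
    def a_scan(defect):
        t = np.arange(512)
        s = np.sin(2 * np.pi * 0.05 * t) * np.exp(-t / 200) + 0.1 * rng.normal(size=512)
        if defect:
            s += 0.5 * np.exp(-((t - 300) ** 2) / 50)  # hypothetical flaw echo
        return s

    X = np.array([a_scan(d) for d in [0] * 100 + [1] * 100])
    y = np.array([0] * 100 + [1] * 100)

    def subband_energies(sig, wavelet="db4", level=4):
        """Energy of the wavelet coefficients in each frequency band (sub-band coding)."""
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        return np.array([np.sum(c ** 2) for c in coeffs])

    feats = np.array([subband_energies(s) for s in X])

    # PCA for feature selection/reduction, then an ANN classifier, mirroring the pipeline above.
    clf = make_pipeline(StandardScaler(), PCA(n_components=3),
                        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
    print("cv accuracy:", cross_val_score(clf, feats, y, cv=5).mean())
    ```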

  8. An alternative respiratory sounds classification system utilizing artificial neural networks.

    PubMed

    Oweis, Rami J; Abdulhay, Enas W; Khayal, Amer; Awad, Areen

    2015-01-01

    Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. The ANN was superior to the ANFIS system and returned better performance parameters: its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. The proposed method is an efficient, fast tool for the intended purpose, as manifested in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function in feature extraction for such applications results in enhanced performance and avoids undesired computational complexity compared to other techniques.
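
    The original work was implemented in MATLAB; the short Python sketch below only illustrates the idea of using the autocorrelation function at a few lags as the feature vector of a sound frame (the lag choices and the test frame are arbitrary placeholders).

    ```python
    import numpy as np

    def autocorr_features(x, lags=(1, 2, 5, 10, 20)):
        """Normalised autocorrelation of a (lung-sound) frame at a few fixed lags,
        a minimal sketch of using autocorrelation in the feature-extraction stage."""
        x = x - x.mean()
        denom = np.dot(x, x) + 1e-12
        return np.array([np.dot(x[:-k], x[k:]) / denom for k in lags])

    # Example on a synthetic frame (white noise is nearly uncorrelated across lags).
    frame = np.random.default_rng(3).normal(size=4000)
    print(autocorr_features(frame))
    ```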

  9. Skipping the real world: Classification of PolSAR images without explicit feature extraction

    NASA Astrophysics Data System (ADS)

    Hänsch, Ronny; Hellwich, Olaf

    2018-06-01

    The typical processing chain for pixel-wise classification from PolSAR images starts with an optional preprocessing step (e.g. speckle reduction), continues with extracting features that project the complex-valued data into the real domain (e.g. by polarimetric decompositions), which are then used as input for a machine-learning based classifier, and ends in an optional postprocessing (e.g. label smoothing). The extracted features are usually hand-crafted as well as preselected and represent a (somewhat arbitrary) projection from the complex to the real domain in order to fit the requirements of standard machine-learning approaches such as Support Vector Machines or Artificial Neural Networks. This paper proposes to adapt the internal node tests of Random Forests to work directly on the complex-valued PolSAR data, which makes any explicit feature extraction obsolete. This approach leads to a classification framework with a significantly decreased computation time and memory footprint, since no image features have to be computed and stored beforehand. The experimental results on one fully-polarimetric and one dual-polarimetric dataset show that, despite the simpler approach, accuracy can be maintained (decreased by less than 2% for the fully-polarimetric dataset) or even improved (increased by roughly 9% for the dual-polarimetric dataset).

  10. Application of texture analysis method for mammogram density classification

    NASA Astrophysics Data System (ADS)

    Nithya, R.; Santhi, B.

    2017-07-01

    Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammogram. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work has been carried out by using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed namely, Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than LDA, NB, KNN and SVM classifiers. The proposed methodology has achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
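
    A condensed sketch of one branch of this pipeline (GLCM texture features, ANOVA-based selection, ANN classifier) is given below; it assumes scikit-image >= 0.19 (graycomatrix/graycoprops naming) and uses synthetic patches rather than the MIAS mammograms.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19 naming
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)

    def glcm_features(img):
        """A few GLCM texture descriptors of an 8-bit grey-level patch."""
        glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # Synthetic "fatty" vs "dense" patches; dense tissue is modelled as brighter (hypothetical data).
    patches = [rng.integers(0, 128, (64, 64), dtype=np.uint8) for _ in range(40)] + \
              [rng.integers(96, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
    y = np.array([0] * 40 + [1] * 40)
    X = np.array([glcm_features(p) for p in patches])

    # ANOVA (f_classif) feature selection followed by an ANN, mirroring the pipeline above.
    model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=5),
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))
    ```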

  11. Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data

    NASA Astrophysics Data System (ADS)

    Jia, Feng; Lei, Yaguo; Lin, Jing; Zhou, Xin; Lu, Na

    2016-05-01

    Aiming to promptly process the massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rotating machinery. Among these studies, the methods based on artificial neural networks (ANNs) are commonly used, which employ signal processing techniques for extracting features and further input the features to ANNs for classifying faults. Though these methods did work in intelligent fault diagnosis of rotating machinery, they still have two deficiencies. (1) The features are manually extracted depending on much prior knowledge about signal processing techniques and diagnostic expertise. In addition, these manual features are extracted according to a specific diagnosis issue and probably unsuitable for other issues. (2) The ANNs adopted in these methods have shallow architectures, which limits the capacity of ANNs to learn the complex non-linear relationships in fault diagnosis issues. As a breakthrough in artificial intelligence, deep learning holds the potential to overcome the aforementioned deficiencies. Through deep learning, deep neural networks (DNNs) with deep architectures, instead of shallow ones, could be established to mine the useful information from raw data and approximate complex non-linear functions. Based on DNNs, a novel intelligent method is proposed in this paper to overcome the deficiencies of the aforementioned intelligent diagnosis methods. The effectiveness of the proposed method is validated using datasets from rolling element bearings and planetary gearboxes. These datasets contain massive measured signals involving different health conditions under various operating conditions. The diagnosis results show that the proposed method is able to not only adaptively mine available fault characteristics from the measured signals, but also obtain superior diagnosis accuracy compared with the existing methods.

  12. Feature extraction and identification in distributed optical-fiber vibration sensing system for oil pipeline safety monitoring

    NASA Astrophysics Data System (ADS)

    Wu, Huijuan; Qian, Ya; Zhang, Wei; Tang, Chenghao

    2017-12-01

    High sensitivity of a distributed optical-fiber vibration sensing (DOVS) system based on the phase-sensitivity optical time domain reflectometry (Φ-OTDR) technology also brings in high nuisance alarm rates (NARs) in real applications. In this paper, feature extraction methods of wavelet decomposition (WD) and wavelet packet decomposition (WPD) are comparatively studied for three typical field testing signals, and an artificial neural network (ANN) is built for the event identification. The comparison results prove that the WPD performs a little better than the WD for the DOVS signal analysis and identification in oil pipeline safety monitoring. The identification rate can be improved up to 94.4%, and the nuisance alarm rate can be effectively controlled as low as 5.6% for the identification network with the wavelet packet energy distribution features.
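
    The wavelet packet energy distribution feature can be sketched as below, assuming PyWavelets is available; the three synthetic "event" signals merely stand in for the real Φ-OTDR traces.

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(13)

    def wpd_energy_features(sig, wavelet="db4", level=3):
        """Wavelet packet decomposition: relative energy of each terminal node at the given
        level, used here as the 'energy distribution' feature vector."""
        wp = pywt.WaveletPacket(data=sig, wavelet=wavelet, maxlevel=level)
        energies = np.array([np.sum(node.data ** 2)
                             for node in wp.get_level(level, order="freq")])
        return energies / (energies.sum() + 1e-12)

    # Synthetic stand-ins for three event types sensed by the fibre (different dominant bands).
    def event(freq):
        t = np.arange(4096)
        return np.sin(2 * np.pi * freq * t / 4096) + 0.3 * rng.normal(size=t.size)

    X = np.array([wpd_energy_features(event(f)) for f in [50] * 30 + [400] * 30 + [1500] * 30])
    y = np.repeat([0, 1, 2], 30)

    ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, y)
    print("training accuracy:", ann.score(X, y))
    ```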

  13. Artificial Neural Network Based Fault Diagnostics of Rolling Element Bearings Using Time-Domain Features

    NASA Astrophysics Data System (ADS)

    Samanta, B.; Al-Balushi, K. R.

    2003-03-01

    A procedure is presented for fault diagnosis of rolling element bearings through artificial neural network (ANN). The characteristic features of time-domain vibration signals of the rotating machinery with normal and defective bearings have been used as inputs to the ANN consisting of input, hidden and output layers. The features are obtained from direct processing of the signal segments using very simple preprocessing. The input layer consists of five nodes, one each for root mean square, variance, skewness, kurtosis and normalised sixth central moment of the time-domain vibration signals. The inputs are normalised in the range of 0.0 and 1.0 except for the skewness which is normalised between -1.0 and 1.0. The output layer consists of two binary nodes indicating the status of the machine—normal or defective bearings. Two hidden layers with different number of neurons have been used. The ANN is trained using backpropagation algorithm with a subset of the experimental data for known machine conditions. The ANN is tested using the remaining set of data. The effects of some preprocessing techniques like high-pass, band-pass filtration, envelope detection (demodulation) and wavelet transform of the vibration signals, prior to feature extraction, are also studied. The results show the effectiveness of the ANN in diagnosis of the machine condition. The proposed procedure requires only a few features extracted from the measured vibration data either directly or with simple preprocessing. The reduced number of inputs leads to faster training requiring far less iterations making the procedure suitable for on-line condition monitoring and diagnostics of machines.
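
    The five time-domain inputs described above are straightforward to compute; the sketch below extracts them from synthetic normal/defective segments, applies a simple min-max normalisation and trains a small ANN. It is an illustration with made-up data, not the authors' experimental setup.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(5)

    def time_domain_features(x):
        """The five inputs used above: RMS, variance, skewness, kurtosis and the
        normalised sixth central moment of a vibration segment."""
        rms = np.sqrt(np.mean(x ** 2))
        var = np.var(x)
        m6 = np.mean((x - x.mean()) ** 6) / (np.std(x) ** 6 + 1e-12)
        return np.array([rms, var, skew(x), kurtosis(x, fisher=False), m6])

    # Synthetic normal vs defective bearing segments (defect modelled as impulsive bursts).
    def segment(defective):
        s = rng.normal(0, 1.0, 2048)
        if defective:
            s[::128] += rng.normal(0, 6.0, len(s[::128]))  # hypothetical impacts
        return s

    X = np.array([time_domain_features(segment(d)) for d in [0] * 60 + [1] * 60])
    y = np.array([0] * 60 + [1] * 60)

    # Min-max normalisation to [0, 1], as in the paper (skewness would be scaled to [-1, 1]).
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)

    ann = MLPClassifier(hidden_layer_sizes=(10, 5), max_iter=3000, random_state=0).fit(Xn, y)
    print("training accuracy:", ann.score(Xn, y))
    ```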

  14. Classification of breast abnormalities using artificial neural network

    NASA Astrophysics Data System (ADS)

    Zaman, Nur Atiqah Kamarul; Rahman, Wan Eny Zarina Wan Abdul; Jumaat, Abdul Kadir; Yasiran, Siti Salmah

    2015-05-01

    Classification is the process of recognizing, differentiating and categorizing objects into groups. Breast abnormalities such as calcifications are tumor markers that indicate the presence of cancer in the breast. The aims of this research are to classify the types of breast abnormalities using an artificial neural network (ANN) classifier and to evaluate the accuracy performance using the receiver operating characteristic (ROC) curve. The methods used in this research are an ANN for breast abnormality classification and the Canny edge detector as a feature extraction method. Previously, the ANN classifier provided only the number of benign and malignant cases without providing information for specific cases; in this research, however, the type of abnormality for each image can be obtained. The existing MIAS MiniMammographic database describes the mammogram images with only three features, namely the character of the background tissue, the class of abnormality and the radius of the abnormality. In this research, three other features are added in: the number of spots, and the area and shape of the abnormalities. Lastly, the performance of the ANN classifier is evaluated using the ROC curve. It is found that the ANN has an accuracy of 97.9%, which is considered acceptable.

  15. Spectral feature extraction of EEG signals and pattern recognition during mental tasks of 2-D cursor movements for BCI using SVM and ANN.

    PubMed

    Bascil, M Serdar; Tesneli, Ahmet Y; Temurtas, Feyzullah

    2016-09-01

    A brain-computer interface (BCI) is a new communication channel between man and machine. It identifies mental task patterns stored in the electroencephalogram (EEG); that is, it extracts brain electrical activity recorded by EEG and transforms it into machine control commands. The main goal of BCI is to make assistive environmental devices, such as computers, available to paralyzed people and to make their lives easier. This study deals with feature extraction and mental task pattern recognition for 2-D cursor control from EEG as an offline analysis approach. The hemispherical power density changes are computed and compared on the alpha-beta frequency bands using only mental imagination of cursor movements. First, power spectral density (PSD) features of the EEG signals are extracted, and the high-dimensional data are reduced by principal component analysis (PCA) and independent component analysis (ICA), which are statistical algorithms. In the last stage, all features are classified with two types of support vector machine (SVM), linear and least squares (LS-SVM), and three different artificial neural network (ANN) structures, learning vector quantization (LVQ), multilayer neural network (MLNN) and probabilistic neural network (PNN), and the mental task patterns are successfully identified via a k-fold cross-validation technique.
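
    A compressed sketch of the PSD -> PCA -> SVM portion of this chain is shown below, with synthetic two-class EEG trials and an assumed 250 Hz sampling rate; ICA and the three ANN structures are omitted for brevity.

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    fs = 250  # assumed EEG sampling rate (Hz)

    def trial(alpha_power):
        """Synthetic one-channel EEG trial with a given amount of 10 Hz (alpha) activity."""
        t = np.arange(0, 4, 1 / fs)
        return alpha_power * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

    X_sig = np.array([trial(a) for a in [0.5] * 80 + [1.5] * 80])  # two imagined-movement classes
    y = np.array([0] * 80 + [1] * 80)

    # Power spectral density features (Welch), restricted to the alpha-beta range (8-30 Hz).
    f, Pxx = welch(X_sig, fs=fs, nperseg=fs)
    band = (f >= 8) & (f <= 30)
    X = Pxx[:, band]

    # PCA for dimensionality reduction, then a linear SVM, echoing the PSD->PCA->SVM chain above.
    clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="linear"))
    print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```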

  16. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration, not only for visual inspection but also for computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: (1) basic image statistics, (2) gray-level co-occurrence matrix (GLCM), (3) gray-level run-length matrix (GLRLM) and (4) Tamura texture features. To obtain the ranking of discrimination of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
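
    The ranking and selection steps might look roughly like the following sketch, which assumes scikit-learn >= 0.24 for SequentialFeatureSelector; the texture matrix, the noise model and the filter-parameter target are synthetic placeholders rather than the study's MR data.

    ```python
    import numpy as np
    from scipy.stats import ttest_rel
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(7)

    # Hypothetical texture-feature matrix: 60 MR images x 12 texture attributes, plus the
    # same attributes recomputed on noise-perturbed copies (for the paired comparison).
    X_clean = rng.normal(size=(60, 12))
    X_noisy = X_clean + rng.normal(0, [0.1] * 6 + [1.0] * 6, size=(60, 12))

    # Paired-samples t-test per feature: larger |t| means more sensitive to the noise level,
    # a rough stand-in for the discrimination ranking described above.
    t_stats, _ = ttest_rel(X_clean, X_noisy, axis=0)
    ranking = np.argsort(-np.abs(t_stats))
    print("feature ranking by |t|:", ranking)

    # Sequential forward selection of features for predicting a (synthetic) filter parameter.
    y_param = X_noisy[:, ranking[0]] * 0.8 + rng.normal(0, 0.1, 60)  # hypothetical target
    sfs = SequentialFeatureSelector(
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=1000, random_state=0),
        n_features_to_select=3, direction="forward", cv=3)
    sfs.fit(X_noisy, y_param)
    print("selected feature indices:", np.flatnonzero(sfs.get_support()))
    ```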

  17. Clinical Assistant Diagnosis for Electronic Medical Record Based on Convolutional Neural Network.

    PubMed

    Yang, Zhongliang; Huang, Yongfeng; Jiang, Yiran; Sun, Yuxi; Zhang, Yu-Jin; Luo, Pengcheng

    2018-04-20

    Automatically extracting useful information from electronic medical records and conducting disease diagnosis is a promising task for both clinical decision support (CDS) and natural language processing (NLP). Most existing systems are based on artificially constructed knowledge bases, with auxiliary diagnosis performed by rule matching. In this study, we present a clinical intelligent decision approach based on Convolutional Neural Networks (CNN), which can automatically extract high-level semantic information from electronic medical records and then perform automatic diagnosis without artificial construction of rules or knowledge bases. We use 18,590 collected real-world clinical electronic medical records to train and test the proposed model. Experimental results show that the proposed model can achieve 98.67% accuracy and 96.02% recall, which strongly supports that using a convolutional neural network to automatically learn high-level semantic features of electronic medical records and then conduct assisted diagnosis is feasible and effective.

  18. THE CHOICE OF OPTIMAL STRUCTURE OF ARTIFICIAL NEURAL NETWORK CLASSIFIER INTENDED FOR CLASSIFICATION OF WELDING FLAWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sikora, R.; Chady, T.; Baniukiewicz, P.

    2010-02-22

    Nondestructive testing and evaluation are under continuous development. Current research is concentrated on three main topics: advancement of existing methods, introduction of novel methods, and development of artificial intelligence systems for automatic defect recognition (ADR). An automatic defect classification algorithm comprises two main tasks: creating a defect database and preparing a defect classifier. Here, the database was built using defect features that describe all geometrical and texture properties of the defect. Almost twenty carefully selected features, calculated for flaws extracted from real radiograms, were used. The radiograms were obtained from the shipbuilding industry and verified by a qualified operator. Two weld-defect classifiers based on artificial neural networks were proposed and compared. The first model consisted of a single neural network in which each output neuron corresponded to a different defect group. The second model contained five neural networks; each network had one output neuron and was responsible for detecting defects from one group. In order to evaluate the effectiveness of the neural network classifiers, the mean square errors were calculated for test radiograms and compared.

  19. The Choice of Optimal Structure of Artificial Neural Network Classifier Intended for Classification of Welding Flaws

    NASA Astrophysics Data System (ADS)

    Sikora, R.; Chady, T.; Baniukiewicz, P.; Caryk, M.; Piekarczyk, B.

    2010-02-01

    Nondestructive testing and evaluation are under continuous development. Current research is concentrated on three main topics: advancement of existing methods, introduction of novel methods, and development of artificial intelligence systems for automatic defect recognition (ADR). An automatic defect classification algorithm comprises two main tasks: creating a defect database and preparing a defect classifier. Here, the database was built using defect features that describe all geometrical and texture properties of the defect. Almost twenty carefully selected features, calculated for flaws extracted from real radiograms, were used. The radiograms were obtained from the shipbuilding industry and verified by a qualified operator. Two weld-defect classifiers based on artificial neural networks were proposed and compared. The first model consisted of a single neural network in which each output neuron corresponded to a different defect group. The second model contained five neural networks; each network had one output neuron and was responsible for detecting defects from one group. In order to evaluate the effectiveness of the neural network classifiers, the mean square errors were calculated for test radiograms and compared.

  20. Medical diagnosis of atherosclerosis from Carotid Artery Doppler Signals using principal component analysis (PCA), k-NN based weighting pre-processing and Artificial Immune Recognition System (AIRS).

    PubMed

    Latifoğlu, Fatma; Polat, Kemal; Kara, Sadik; Güneş, Salih

    2008-02-01

    In this study, we propose a new medical diagnosis system based on principal component analysis (PCA), k-NN based weighting pre-processing, and an Artificial Immune Recognition System (AIRS) for the diagnosis of atherosclerosis from Carotid Artery Doppler Signals. The suggested system consists of four stages. First, in the feature extraction stage, the features related to atherosclerosis are obtained using Fast Fourier Transform (FFT) modelling and by calculating the maximum frequency envelope of the sonograms. Second, in the dimensionality reduction stage, the 61 features of atherosclerosis disease are reduced to 4 features using PCA. Third, in the pre-processing stage, these 4 features are weighted using different values of k in a new weighting scheme based on k-NN based weighting pre-processing. Finally, in the classification stage, the AIRS classifier is used to classify subjects as healthy or as having atherosclerosis. A classification accuracy of 100% was obtained by the proposed system using 10-fold cross validation. This success shows that the proposed system is a robust and effective system for the diagnosis of atherosclerosis.
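
    A loose sketch of stages two to four is given below; since AIRS has no standard Python implementation, a plain k-NN classifier stands in for both the k-NN based weighting and the AIRS stage, and the 61 input features are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(8)

    # Placeholder for the 61 FFT/envelope features per Doppler sonogram (synthetic data).
    X = np.vstack([rng.normal(0, 1, (50, 61)), rng.normal(0.8, 1, (50, 61))])
    y = np.array([0] * 50 + [1] * 50)   # healthy vs atherosclerosis (hypothetical labels)

    # Stage 2 of the pipeline above: PCA reduces the 61 features to 4.
    # Stages 3-4 (k-NN based weighting and the AIRS classifier) are approximated here by a
    # plain k-NN classifier, since AIRS is not part of scikit-learn.
    model = make_pipeline(StandardScaler(), PCA(n_components=4),
                          KNeighborsClassifier(n_neighbors=5))
    print("10-fold cv accuracy:", cross_val_score(model, X, y, cv=10).mean())
    ```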

  1. Emotion Recognition from Chinese Speech for Smart Affective Services Using a Combination of SVM and DBN

    PubMed Central

    Zhu, Lianzhang; Chen, Leiming; Zhao, Dehai

    2017-01-01

    Accurate emotion recognition from speech is important for applications like smart health care, smart entertainment, and other smart services. High accuracy emotion recognition from Chinese speech is challenging due to the complexities of the Chinese language. In this paper, we explore how to improve the accuracy of speech emotion recognition, including speech signal feature extraction and emotion classification methods. Five types of features are extracted from a speech sample: mel frequency cepstrum coefficient (MFCC), pitch, formant, short-term zero-crossing rate and short-term energy. By comparing statistical features with deep features extracted by a Deep Belief Network (DBN), we attempt to find the best features to identify the emotion status for speech. We propose a novel classification method that combines DBN and SVM (support vector machine) instead of using only one of them. In addition, a conjugate gradient method is applied to train DBN in order to speed up the training process. Gender-dependent experiments are conducted using an emotional speech database created by the Chinese Academy of Sciences. The results show that DBN features can reflect emotion status better than artificial features, and our new classification approach achieves an accuracy of 95.8%, which is higher than using either DBN or SVM separately. Results also show that DBN can work very well for small training databases if it is properly designed. PMID:28737705
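
    Extracting the statistical (non-DBN) features and classifying them with an SVM might look like the sketch below, which assumes librosa is installed and uses two synthetic "utterance" classes; the DBN branch and the DBN+SVM combination are not reproduced here.

    ```python
    import numpy as np
    import librosa                     # assumed available for MFCC extraction
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    sr = 16000
    rng = np.random.default_rng(9)

    def handcrafted_features(sig):
        """Statistical features of one utterance: MFCC means plus mean short-term
        zero-crossing rate and energy, a subset of the five feature types above."""
        mfcc = librosa.feature.mfcc(y=sig, sr=sr, n_mfcc=13).mean(axis=1)
        zcr = librosa.feature.zero_crossing_rate(sig).mean()
        energy = librosa.feature.rms(y=sig).mean()
        return np.hstack([mfcc, zcr, energy])

    # Two synthetic "emotional" utterance classes differing in pitch and loudness (placeholders).
    def utterance(f0, gain):
        t = np.arange(0, 1.0, 1 / sr)
        return gain * np.sin(2 * np.pi * f0 * t) + 0.05 * rng.normal(size=t.size)

    X = np.array([handcrafted_features(utterance(f0, g))
                  for f0, g in [(150, 0.3)] * 20 + [(300, 0.9)] * 20])
    y = np.array([0] * 20 + [1] * 20)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```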

  2. Mexican sign language recognition using normalized moments and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita

    2014-09-01

    This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set was recorded with 24 static signs from the MSL, using 5 different versions of each; this MSL dataset was captured with a digital camera under incoherent lighting conditions. Digital image processing was used to segment the hand gestures, and a uniform background was selected to avoid the use of gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of the gray-scaled signs, and an artificial neural network then performs the recognition, evaluated via 10-fold cross validation in Weka; the best result achieved a recognition rate of 95.83%.

  3. Gender classification from face images by using local binary pattern and gray-level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Uzbaş, Betül; Arslan, Ahmet

    2018-04-01

    Gender classification is an important step in human-computer interaction and identification processes, and the human face image is one of the most important sources for determining gender. In the present study, gender classification is performed automatically from facial images. In order to classify gender, we propose a combination of features extracted from face, eye and lip regions by a hybrid method of Local Binary Pattern and Gray-Level Co-Occurrence Matrix. The features are extracted from automatically obtained face, eye and lip regions. All of the extracted features are combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor) for gender classification. The Nottingham Scan face database, which consists of frontal face images of 100 people (50 male and 50 female), is used for this purpose. As a result of the experimental studies, the highest success rate, 98%, was achieved using the Support Vector Machine. The experimental results illustrate the efficacy of the proposed method.
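
    The hybrid LBP + GLCM descriptor can be sketched as follows, assuming scikit-image is available; the "face regions" and gender labels are random placeholders, so the reported training score is meaningless and only the feature construction is of interest.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(10)

    def lbp_glcm_features(region):
        """Hybrid descriptor for one face/eye/lip region: an LBP histogram concatenated
        with a few GLCM properties, roughly in the spirit of the combination above."""
        lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        glcm = graycomatrix(region, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        props = np.hstack([graycoprops(glcm, p).ravel()
                           for p in ("contrast", "homogeneity", "energy")])
        return np.hstack([hist, props])

    # Synthetic 8-bit "face regions" standing in for the real cropped face/eye/lip patches.
    regions = [rng.integers(0, 256, (48, 48), dtype=np.uint8) for _ in range(60)]
    labels = rng.integers(0, 2, 60)   # hypothetical male/female labels

    X = np.array([lbp_glcm_features(r) for r in regions])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```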

  4. Classification of cardiac patient states using artificial neural networks

    PubMed Central

    Kannathal, N; Acharya, U Rajendra; Lim, Choo Min; Sadasivan, PK; Krishnan, SM

    2003-01-01

    Electrocardiogram (ECG) is a nonstationary signal; therefore, the disease indicators may occur at random on the time scale. This may require the patient to be kept under observation for long intervals in the intensive care unit of a hospital for accurate diagnosis. The present study examined the classification of the states of patients with certain diseases in the intensive care unit using their ECG and an artificial neural network (ANN) classification system. The states were classified into normal, abnormal and life threatening. Seven significant features extracted from the ECG were fed as input parameters to the ANN for classification. Three neural network techniques, namely back propagation, self-organizing maps and radial basis functions, were used for classification of the patient states. The ANN classifier in this case was observed to be correct in approximately 99% of the test cases. This result was further improved by taking 13 features of the ECG as input for the ANN classifier. PMID:19649222

  5. Medical image diagnoses by artificial neural networks with image correlation, wavelet transform, simulated annealing

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    1993-09-01

    Classical artificial neural networks (ANN) and neurocomputing are reviewed for implementing a real time medical image diagnosis. An algorithm known as the self-reference matched filter that emulates the spatio-temporal integration ability of the human visual system might be utilized for multi-frame processing of medical imaging data. A Cauchy machine, implementing a fast simulated annealing schedule, can determine the degree of abnormality by the degree of orthogonality between the patient imagery and the class of features of healthy persons. An automatic inspection process based on multiple modality image sequences is simulated by incorporating the following new developments: (1) 1-D space-filling Peano curves to preserve the 2-D neighborhood pixels' relationship; (2) fast simulated Cauchy annealing for the global optimization of self-feature extraction; and (3) a mini-max energy function for the intra-inter cluster-segregation respectively useful for top-down ANN designs.

  6. Maximum entropy methods for extracting the learned features of deep neural networks.

    PubMed

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.

  7. Automatic recognition of emotions from facial expressions

    NASA Astrophysics Data System (ADS)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificial intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing the accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining the high accuracy rate of SVM. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).

  8. Assessing the performance of multiple spectral-spatial features of a hyperspectral image for classification of urban land cover classes using support vector machines and artificial neural network

    NASA Astrophysics Data System (ADS)

    Pullanagari, Reddy; Kereszturi, Gábor; Yule, Ian J.; Ghamisi, Pedram

    2017-04-01

    Accurate and spatially detailed mapping of complex urban environments is essential for land managers. Classifying high spectral and spatial resolution hyperspectral images is a challenging task because of its data abundance and computational complexity. Approaches with a combination of spectral and spatial information in a single classification framework have attracted special attention because of their potential to improve the classification accuracy. We extracted multiple features from spectral and spatial domains of hyperspectral images and evaluated them with two supervised classification algorithms; support vector machines (SVM) and an artificial neural network. The spatial features considered are produced by a gray level co-occurrence matrix and extended multiattribute profiles. All of these features were stacked, and the most informative features were selected using a genetic algorithm-based SVM. After selecting the most informative features, the classification model was integrated with a segmentation map derived using a hidden Markov random field. We tested the proposed method on a real application of a hyperspectral image acquired from AisaFENIX and on widely used hyperspectral images. From the results, it can be concluded that the proposed framework significantly improves the results with different spectral and spatial resolutions over different instrumentation.

  9. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    NASA Astrophysics Data System (ADS)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As is well known, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an n-by-1 or 1-by-n array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure, and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
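
    Since the abstract does not specify the exact matrix layout, the sketch below makes an assumption: a 2-by-4 matrix of time-domain and frequency-domain moments whose singular values serve as the TFM-SVD feature vector, demonstrated on a synthetic BCG-like signal.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis

    def tfm_svd_features(x):
        """A loose sketch of TFM-SVD: build a small fixed-size matrix from time-domain and
        frequency-domain statistical moments of the signal, then return its singular values.
        The exact matrix layout is an assumption; the abstract does not specify it."""
        spec = np.abs(np.fft.rfft(x))
        rows = [[s.mean(), s.std(), skew(s), kurtosis(s)] for s in (x, spec)]
        M = np.array(rows)                          # 2 x 4 time/frequency moment matrix
        return np.linalg.svd(M, compute_uv=False)   # several SVs, unlike SVD of the raw 1-D array

    # Example on a synthetic ballistocardiogram-like signal (damped oscillation plus noise).
    t = np.linspace(0, 2, 1000)
    bcg = (np.exp(-2 * t) * np.sin(2 * np.pi * 5 * t)
           + 0.05 * np.random.default_rng(11).normal(size=t.size))
    print(tfm_svd_features(bcg))
    ```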

  10. Diagnosis of Diabetes Mellitus by Extraction of Morphological Features of Red Blood Cells Using an Artificial Neural Network.

    PubMed

    Palanisamy, Vinupritha; Mariamichael, Anburajan

    2016-10-01

    Background and Aim: Diabetes mellitus is a metabolic disorder characterized by varying hyperglycemia, due either to insufficient secretion of insulin by the pancreas or to improper utilization of glucose. The study aimed to investigate the association of morphological features of erythrocytes among normal and diabetic subjects and its gender-based changes, and thereby to develop a computer-aided tool to diagnose diabetes using features extracted from RBCs. Materials and Methods: The study involved 138 normal and 144 diabetic subjects. Blood was drawn from the subjects and the prepared blood smear was digitized using a Zeiss fluorescent microscope. The digitized images were pre-processed and texture segmentation was performed to extract the various morphological features. The Pearson correlation test was performed and, subsequently, classification of subjects as normal or diabetic was carried out by a neural network classifier based on the features that demonstrated significance at the level of P < 0.05. Results: The proposed system demonstrated an overall accuracy, sensitivity, specificity, positive predictive value and negative predictive value of 93.3, 93.71, 92.8, 93.1 and 93.5%, respectively. Conclusion: The morphological features exhibited a statistically significant difference (P < 0.01) between the normal and diabetic cells, suggesting that they could be helpful in the diagnosis of diabetes mellitus using a computer-aided system. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from the need to provide a non-destructive testing method capable of detecting and locating any defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic inspection based classification system would make possible the checking of each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet based feature extraction is implemented from the region of interest. An artificial neural network classifier is employed to evaluate the performance of these features. Genetic algorithm based feature selection is then performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic algorithm based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to better classification performance, with 96% accuracy compared with 94% for the genetic algorithm. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Using different classification models in wheat grading utilizing visual features

    NASA Astrophysics Data System (ADS)

    Basati, Zahra; Rasekh, Mansour; Abbaspour-Gilandeh, Yousef

    2018-04-01

    Wheat is one of the most important strategic crops in Iran and in the world. The major component that distinguishes wheat from other grains is the gluten section. In Iran, the sunn pest is one of the most important factors influencing the characteristics of wheat gluten and driving it out of its balanced state. The presence of bug-damaged grains in wheat reduces the quality and price of the product. In addition, damaged grains reduce the enrichment of wheat and the quality of bread products. In this study, after preprocessing and segmentation of images, 25 features, including 9 colour features, 10 morphological features, and 6 textural statistical features, were extracted so as to classify healthy and bug-damaged wheat grains of the Azar cultivar at four levels of moisture content (9, 11.5, 14 and 16.5% w.b.) and two lighting colours (yellow light, and a combination of yellow and white light). Using feature selection methods in the WEKA software with the CfsSubsetEval evaluator, 11 features were chosen as inputs of artificial neural network, decision tree and discriminant analysis classifiers. The results showed that the decision tree with the J.48 algorithm had the highest classification accuracy, at 90.20%. This was followed by the artificial neural network classifier with the topology 11-19-2 and the discriminant analysis classifier, at 87.46 and 81.81%, respectively.

  13. Generating description with multi-feature fusion and saliency maps of image

    NASA Astrophysics Data System (ADS)

    Liu, Lisha; Ding, Yuxuan; Tian, Chunna; Yuan, Bo

    2018-04-01

    Generating a description for an image can be regarded as visual understanding; it spans artificial intelligence, machine learning, natural language processing and many other areas. In this paper, we present a model that generates descriptions for images based on an RNN (recurrent neural network) with object attention and multiple image features. Deep recurrent neural networks have excellent performance in machine translation, so we use them to generate natural sentence descriptions for images. The proposed method uses a single CNN (convolutional neural network) trained on ImageNet to extract image features. However, we believe this cannot adequately capture the content of an image, as it may focus only on the object areas; therefore, we add scene information to the image feature using a CNN trained on Places205. Experiments show that the model with multiple features extracted by two CNNs performs better than the one with a single feature. In addition, we apply saliency weights to the images to emphasize the salient objects. We evaluate our model on MSCOCO using public metrics, and the results show that our model performs better than several state-of-the-art methods.

  14. Developing a multimodal biometric authentication system using soft computing methods.

    PubMed

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometric offers several advantages, mainly in embedded system applications. Hard and soft multi-biometric, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize the applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.

  15. Smoke detection using GLCM, wavelet, and motion

    NASA Astrophysics Data System (ADS)

    Srisuwan, Teerasak; Ruchanurucks, Miti

    2014-01-01

    This paper presents a supervised smoke detection method that uses local and global features. The framework integrates and extends notions from many previous works to generate a new comprehensive method. First, chrominance detection is used to screen areas suspected to be smoke. For these areas, local features are then extracted, among them the homogeneity of the GLCM and the energy of the wavelet transform. Next, the global feature of motion of the smoke-coloured areas is extracted using a space-time analysis scheme. Finally, these features are used to train an artificial-intelligence classifier; here we use a neural network, and the experiments compare the importance of each feature, so we can determine which of the features used in previous works are really useful. The proposed method outperforms many current methods in terms of correctness, and it does so in a reasonable computation time; it even has fewer limitations than conventional smoke sensors when used in open space. As expected, the best results are obtained when all of the mentioned features are used together, yielding a high true-positive rate and a low false-positive rate and showing that the algorithm is robust for smoke detection.

  16. Tropical Timber Identification using Backpropagation Neural Network

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Andayani, U.; Fatihah, N.; Hakim, L.; Fahmi, F.

    2017-01-01

    Each type of wood has different characteristics, and identifying the type of wood properly is important, especially for industries that need to know the type of timber specifically. However, this requires expertise in identifying the type of wood, and only a limited number of experts are available. In addition, manual identification, even by experts, is rather inefficient because it requires a lot of time and is prone to human error. To overcome these problems, a digital-image-based method to identify the type of timber automatically is needed. In this study, a backpropagation neural network is used as the artificial intelligence component. Several stages were developed: microscope image acquisition, pre-processing, feature extraction using the gray-level co-occurrence matrix, and normalization of the extracted features using decimal scaling. The results showed that the proposed method was able to identify the timber with an accuracy of 94%.
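
    Decimal-scaling normalisation of the GLCM features followed by a backpropagation-trained network can be sketched as below; the feature matrix is synthetic and stands in for the real wood-microscopy features.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(12)

    def decimal_scaling(X):
        """Decimal-scaling normalisation: divide each feature by 10^j, with j the smallest
        non-negative integer such that the scaled absolute values fall below 1."""
        j = np.ceil(np.log10(np.abs(X).max(axis=0) + 1e-12)).clip(min=0)
        return X / 10.0 ** j

    # Placeholder for GLCM features of wood-microscopy images (synthetic, two timber species).
    X = np.vstack([rng.normal(50, 10, (30, 6)), rng.normal(120, 10, (30, 6))])
    y = np.array([0] * 30 + [1] * 30)

    Xn = decimal_scaling(X)
    ann = MLPClassifier(hidden_layer_sizes=(12,), max_iter=3000, random_state=0).fit(Xn, y)
    print("training accuracy:", ann.score(Xn, y))
    ```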

  17. Classification of vocal aging using parameters extracted from the glottal signal.

    PubMed

    Forero Mendoza, Leonardo A; Cataldo, Edson; Vellasco, Marley M B R; Silva, Marco A; Apolinário, José A

    2014-09-01

    This article proposes and evaluates a method to classify vocal aging using artificial neural network (ANN) and support vector machine (SVM), using the parameters extracted from the speech signal as inputs. For each recorded speech, from a corpus of male and female speakers of different ages, the corresponding glottal signal is obtained using an inverse filtering algorithm. The Mel Frequency Cepstrum Coefficients (MFCC) also extracted from the voice signal and the features extracted from the glottal signal are supplied to an ANN and an SVM with a previous selection. The selection is performed by a wrapper approach of the most relevant parameters. Three groups are considered for the aging-voice classification: young (aged 15-30 years), adult (aged 31-60 years), and senior (aged 61-90 years). The results are compared using different possibilities: with only the parameters extracted from the glottal signal, with only the MFCC, and with a combination of both. The results demonstrate that the best classification rate is obtained using the glottal signal features, which is a novel result and the main contribution of this article. Copyright © 2014 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  18. Clinical state assessment in bipolar patients by means of HRV features obtained with a sensorized T-shirt.

    PubMed

    Mariani, Sara; Migliorini, Matteo; Tacchino, Giulia; Gentili, Claudio; Bertschy, Gilles; Werner, Sandra; Bianchi, Anna M

    2012-01-01

    The aim of this study is to identify parameters extracted from the Heart Rate Variability (HRV) signal that correlate to the clinical state in patients affected by bipolar disorder. 25 ECG and activity recordings from 12 patients were obtained by means of a sensorized T-shirt and the clinical state of the subjects was assessed by a psychiatrist. Features in the time and frequency domain were extracted from each signal. HRV features were also used to automatically compute the sleep profile of each subject by means of an Artificial Neural Network, trained on a control group of healthy subjects. From the hypnograms, sleep-specific parameters were computed. All the parameters were compared with those computed on the control group, in order to highlight significant differences in their values during different stages of the pathology. The analysis was performed by grouping the subjects first on the basis of the depression-mania level and then on the basis of the anxiety level.

  19. Classification of underground pipe scanned images using feature extraction and neuro-fuzzy algorithm.

    PubMed

    Sinha, S K; Karray, F

    2002-01-01

    Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried under the ground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A recognition and classification of pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step, the scanned images of the pipe are analyzed and crack features are extracted. In the classification step, a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation in feature values, while the backpropagation network, with its learning ability, will provide good classification efficiency.

  20. Influence of quality of images recorded in far infrared on pattern recognition based on neural networks and Eigenfaces algorithm

    NASA Astrophysics Data System (ADS)

    Jelen, Lukasz; Kobel, Joanna; Podbielska, Halina

    2003-11-01

    This paper discusses the possibility of exploiting thermovision registration and artificial neural networks for facial recognition systems. A biometric system that is able to identify people from thermograms is presented. To identify a person we used the Eigenfaces algorithm, and for face detection in the picture a backpropagation neural network was designed. For this purpose, thermograms of 10 people under various external conditions were studied. The Eigenfaces algorithm calculated an average face, and then the set of characteristic features for each studied person was produced. The neural network has to detect the face in the image before the face can actually be identified; we used five hidden layers for that purpose. It was shown that the errors in recognition depend on the feature extraction: for low-quality pictures the error was as high as 30%, whereas for pictures with good feature extraction correct identification rates higher than 90% were obtained.

  1. Automated detection of pulmonary nodules in CT images with support vector machines

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Liu, Wanyu; Sun, Xiaoming

    2008-10-01

    Many methods have been proposed to prevent radiologists from failing to diagnose small pulmonary nodules. Recently, support vector machines (SVMs) have received increasing attention for pattern recognition. In this paper, we present a computerized system aimed at pulmonary nodule detection; it identifies the lung field, extracts a set of candidate regions with a high sensitivity ratio and then classifies the candidates by the use of SVMs. The Computer Aided Diagnosis (CAD) system presented in this paper supports the diagnosis of pulmonary nodules from Computed Tomography (CT) images as inflammation, tuberculoma, granuloma, sclerosing hemangioma, or malignant tumor. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of SVM classifiers. The achieved classification performance was 100%, 92.75% and 90.23% in the training, validation and testing sets, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.

  2. Applications of artificial intelligence to digital photogrammetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kretsch, J.L.

    1988-01-01

    The aim of this research was to explore the application of expert systems to digital photogrammetry, specifically to photogrammetric triangulation, feature extraction, and photogrammetric problem solving. In 1987, prototype expert systems were developed for doing system startup, interior orientation, and relative orientation in the mensuration stage. The system explored means of performing diagnostics during the process. In the area of feature extraction, the relationship of metric uncertainty to symbolic uncertainty was the topic of research. Error propagation through the Dempster-Shafer formalism for representing evidence was performed in order to find the variance in the calculated belief values due to errors in measurements, made together with the initial evidence needed to begin labeling of observed image features with features in an object model. In photogrammetric problem solving, an expert system is under continuous development which seeks to solve photogrammetric problems using mathematical reasoning. The key to the approach used is the representation of knowledge directly in the form of equations, rather than in the form of if-then rules. Each variable in the equations is then treated as a goal to be solved.

  3. Knowledge extraction from evolving spiking neural networks with rank order population coding.

    PubMed

    Soltic, Snjezana; Kasabov, Nikola

    2010-12-01

    This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionally small amount of research is centered on the issue of knowledge extraction from spiking neural networks, which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking neural network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems, where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.

  4. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation Technology. Classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This shows that object-oriented methods can improve classification accuracy, since they utilize information and features from both the point and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion, data level fusion, feature level fusion and decision level fusion, are applied to HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification. In order to promote the advances of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.

  5. Classification of focal liver lesions on ultrasound images by extracting hybrid textural features and using an artificial neural network.

    PubMed

    Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Jiang, Yuan Yuan; Kim, Sung Min

    2015-01-01

    This paper focuses on the improvement of the diagnostic accuracy of focal liver lesions by quantifying the key features of cysts, hemangiomas, and malignant lesions on ultrasound images. The focal liver lesions were divided into 29 cysts, 37 hemangiomas, and 33 malignancies. A total of 42 hybrid textural features, composed of 5 first order statistics, 18 gray level co-occurrence matrix features, 18 Laws' features, and echogenicity, were extracted. A total of 29 key features selected by principal component analysis were used as the set of inputs for a feed-forward neural network. For each lesion, diagnostic performance was evaluated using the positive predictive value, negative predictive value, sensitivity, specificity, and accuracy. The results of the experiment indicate that the proposed method performs well, with a diagnostic accuracy of over 96% among all focal liver lesion groups (cyst vs. hemangioma, cyst vs. malignant, and hemangioma vs. malignant) on ultrasound images. The accuracy increased slightly when echogenicity was included in the optimal feature set. These results indicate that it is possible for the proposed method to be applied clinically.
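    As an illustration of the gray level co-occurrence matrix features mentioned above, the following sketch uses scikit-image on a synthetic region of interest; the distances, angles, and chosen properties are assumptions rather than the study's exact configuration.

```python
# Sketch of GLCM texture features computed with scikit-image on a synthetic ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older releases

rng = np.random.default_rng(2)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # lesion ROI placeholder

glcm = graycomatrix(roi, distances=[1, 2],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Average each texture property over the distance/angle combinations.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```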

  6. A Novel Extraction Approach of Extrinsic and Intrinsic Parameters of InGaAs/GaN pHEMTs

    DTIC Science & Technology

    2015-07-01

    presented, for the first time, artificial bee colony algorithm is applied to the global-optimization based parameter extraction and a novel intrinsic...conservation of the gate charge is well satisfied which further validates this novel extraction method. Index Terms —InGaAs/GaN pHEMTs, artificial bee ...increase the uniqueness of the extraction. Artificial bee colony (ABC) algorithm is adopted as the optimizer due to its excellent ability to escape

  7. An artificial neural networks approach for assessment treatment response in oncological patients using PET/CT images.

    PubMed

    Nogueira, Mariana A; Abreu, Pedro H; Martins, Pedro; Machado, Penousal; Duarte, Hugo; Santos, João

    2017-02-13

    Positron Emission Tomography - Computed Tomography (PET/CT) imaging is the basis for the evaluation of response to treatment of several oncological diseases. In practice, such evaluation is manually performed by specialists, which is rather complex and time-consuming. Evaluation measures have been proposed, but with questionable reliability. The usage of before- and after-treatment image descriptors of the lesions for treatment response evaluation is still a territory to be explored. In this project, Artificial Neural Network approaches were implemented to automatically assess the treatment response of patients suffering from neuroendocrine tumors and Hodgkin lymphoma, based on image features extracted from PET/CT. The results show that the considered set of features allows for the achievement of very high classification performance, especially when data is properly balanced. After synthetic data generation and PCA-based dimensionality reduction to only two components, LVQNN assured classification accuracies of 100%, 100%, 96.3% and 100% regarding the 4 response-to-treatment classes.

  8. Research to improve the accuracy of determining the stroke volume of an artificial ventricle using the wavelet transform

    NASA Astrophysics Data System (ADS)

    Grad, Leszek; Murawski, Krzysztof; Sulej, Wojciech

    2017-08-01

    In this article we present results obtained during research that continues work on the use of artificial neural networks to determine the relationship between the view of the membrane and the stroke volume of the blood chamber of a mechanical prosthetic heart. The purpose of the research was to increase the accuracy of determining the blood chamber volume. Therefore, the study focused on the technique used to extract features from the image. During the research we used the wavelet transform. The achieved results were compared to the results obtained by previous methods. Tests were conducted on the same mechanical prosthetic heart model used in previous experiments.
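    A possible wavelet-based feature extraction step of the kind described above is sketched below with the PyWavelets package; the wavelet family, decomposition level, and use of sub-band energies are assumptions, not the authors' published choices.

```python
# Sketch of wavelet-based feature extraction from a membrane image (synthetic here);
# sub-band energies would serve as inputs to the volume-estimating network.
import numpy as np
import pywt

rng = np.random.default_rng(3)
membrane = rng.random((128, 128))                 # placeholder membrane image

coeffs = pywt.wavedec2(membrane, wavelet="db2", level=2)
approx, details = coeffs[0], coeffs[1:]

features = [np.sum(approx ** 2)]                  # energy of the approximation band
for (cH, cV, cD) in details:                      # energies of detail bands per level
    features.extend([np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)])

print("wavelet energy features:", np.round(features, 2))
```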

  9. Prediction of the Wall Factor of Arbitrary Particle Settling through Various Fluid Media in a Cylindrical Tube Using Artificial Intelligence

    PubMed Central

    Li, Mingzhong; Xue, Jianquan; Li, Yanchao; Tang, Shukai

    2014-01-01

    Considering the influence of particle shape and the rheological properties of the fluid, two artificial intelligence methods (Artificial Neural Network and Support Vector Machine) were used to predict the wall factor, which is widely introduced to deduce the net hydrodynamic drag force of confining boundaries on settling particles. 513 data points were culled from the experimental data of previous studies and divided into a training set and a test set. Particles with various shapes were divided into three kinds: sphere, cylinder, and rectangular prism; feature parameters of each kind of particle were extracted, and prediction models for spheres and cylinders using artificial neural networks were established. Due to the small number of rectangular prism samples, a support vector machine, which is more suitable for addressing the problem of small samples, was used to predict the wall factor. The characteristic dimension was presented to describe the shape and size of the diverse particles, and a comprehensive prediction model for particles with arbitrary shapes was established to cover all types of conditions. Comparisons were conducted between the predicted values and the experimental results. PMID:24772024

  10. Prediction of the wall factor of arbitrary particle settling through various fluid media in a cylindrical tube using artificial intelligence.

    PubMed

    Li, Mingzhong; Zhang, Guodong; Xue, Jianquan; Li, Yanchao; Tang, Shukai

    2014-01-01

    Considering the influence of particle shape and the rheological properties of the fluid, two artificial intelligence methods (Artificial Neural Network and Support Vector Machine) were used to predict the wall factor, which is widely introduced to deduce the net hydrodynamic drag force of confining boundaries on settling particles. 513 data points were culled from the experimental data of previous studies and divided into a training set and a test set. Particles with various shapes were divided into three kinds: sphere, cylinder, and rectangular prism; feature parameters of each kind of particle were extracted, and prediction models for spheres and cylinders using artificial neural networks were established. Due to the small number of rectangular prism samples, a support vector machine, which is more suitable for addressing the problem of small samples, was used to predict the wall factor. The characteristic dimension was presented to describe the shape and size of the diverse particles, and a comprehensive prediction model for particles with arbitrary shapes was established to cover all types of conditions. Comparisons were conducted between the predicted values and the experimental results.

  11. Identification of input variables for feature based artificial neural networks-saccade detection in EOG recordings.

    PubMed

    Tigges, P; Kathmann, N; Engel, R R

    1997-07-01

    Though artificial neural networks (ANN) are excellent tools for pattern recognition problems when the signal to noise ratio is low, the identification of decision-relevant features for ANN input data is still a crucial issue. The experience of the ANN designer and the existing knowledge and understanding of the problem seem to be the only guides for a specific construction. In the present study a backpropagation ANN based on modified raw data inputs showed encouraging results. Investigating the specific influences of prototypical input patterns on a specially designed ANN led to a new sparse and efficient input data presentation. This data coding, obtained by a semiautomatic procedure combining existing expert knowledge and the internal representation structures of the raw-data-based ANN, yielded a list of feature vectors, each representing the relevant information for saccade identification. The feature-based ANN produced a reduction of the error rate of nearly 40% compared with the raw data ANN. An overall correct classification of 92% of previously unseen data was achieved. The proposed method of extracting internal ANN knowledge for the production of a better input data representation is not restricted to EOG recordings, and could be used in various fields of signal analysis.

  12. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

    In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. The neural network approach is employed to recognize the robot end-effector, covering three types of motion: translation, scaling and rotation. Features for the neural network to detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match unknown input features, recognizing the position of the robot end-effector. Since a minimal number of samples are used for the different directions of the robot end-effector in the system, an artificial neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with backpropagation learning is used to detect the position of the robot end-effector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control movements of the robot end-effector. Combining the two neural networks for recognizing the robot end-effector and estimating the motion with the preprocessing stage, the whole system tracks the robot end-effector effectively.

  13. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.

  14. Fully Connected Cascade Artificial Neural Network Architecture for Attention Deficit Hyperactivity Disorder Classification From Functional Magnetic Resonance Imaging Data.

    PubMed

    Deshpande, Gopikrishna; Wang, Peng; Rangaprakash, D; Wilamowski, Bogdan

    2015-12-01

    Automated recognition and classification of brain diseases are of tremendous value to society. Attention deficit hyperactivity disorder (ADHD) is a diverse spectrum disorder whose diagnosis is based on behavior and hence will benefit from classification utilizing objective neuroimaging measures. Toward this end, an international competition was conducted for classifying ADHD using functional magnetic resonance imaging data acquired from multiple sites worldwide. Here, we consider the data from this competition as an example to illustrate the utility of fully connected cascade (FCC) artificial neural network (ANN) architecture for performing classification. We employed various directional and nondirectional brain connectivity-based methods to extract discriminative features which gave better classification accuracy compared to raw data. Our accuracy for distinguishing ADHD from healthy subjects was close to 90% and between the ADHD subtypes was close to 95%. Further, we show that, if properly used, FCC ANN performs very well compared to other classifiers such as support vector machines in terms of accuracy, irrespective of the feature used. Finally, the most discriminative connectivity features provided insights about the pathophysiology of ADHD and showed reduced and altered connectivity involving the left orbitofrontal cortex and various cerebellar regions in ADHD.

  15. Train axle bearing fault detection using a feature selection scheme based multi-scale morphological filter

    NASA Astrophysics Data System (ADS)

    Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin

    2018-02-01

    This paper presents a novel signal processing scheme, a feature selection based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings with different conditions, and the features which reflect fault characteristics most effectively and representatively are selected using the max-relevance and min-redundancy principle. Then, a filtering scale selection approach for MMF based on feature selection and grey relational analysis is proposed. The feature selection based MMF method is tested on diagnosis of artificially created damage to rolling bearings of railway trains. Experimental results show that the proposed method has a superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis criterion based MMF and the spectral kurtosis criterion based MMF. The proposed feature selection based MMF method outperforms these two methods in the detection of train axle bearing faults.
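    The sketch below shows one illustrative multi-scale morphological filtering variant (an average of closing-minus-opening outputs over several structuring-element sizes) applied to a synthetic vibration signal; it is not the feature-selection-based MMF of the paper, and the scales are assumptions.

```python
# Illustrative multi-scale morphological filter for impulsive fault components,
# implemented with SciPy grey-scale morphology on a 1-D vibration signal.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def multiscale_morphological_filter(signal, scales=(3, 5, 7, 9)):
    outputs = []
    for s in scales:
        opening = grey_opening(signal, size=s)
        closing = grey_closing(signal, size=s)
        outputs.append(closing - opening)      # morphological envelope at this scale
    return np.mean(outputs, axis=0)            # average over scales

t = np.linspace(0, 1, 2000)
vibration = np.sin(2 * np.pi * 30 * t) + 0.3 * np.random.default_rng(4).normal(size=t.size)
vibration[::200] += 2.0                        # artificial periodic impacts (fault-like)

envelope = multiscale_morphological_filter(vibration)
print("peak of filtered output:", envelope.max())
```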

  16. Frequency-domain preprocessing and directional correlation-based feature extraction for classification of the buried objects using GPR B-scan data

    NASA Astrophysics Data System (ADS)

    Bahadirlar, Yildirim; Kaplan, Gulay B.

    2004-09-01

    A new preprocessing and feature extraction approach for the classification of non-metallic buried objects using GPR B-scan data is presented. A frequency-domain adaptive filter without a reference channel effectively removes the background signal resulting mostly from the discontinuity on the air-to-ground path of the electromagnetic waves. The filter only needs the average of the first five A-scans as the reference signal for this elimination, and also serves for masking of the B-scan in the frequency domain. Preprocessed GPR data with significantly suppressed clutter are then obtained by precisely positioning a Hanning window in the frequency domain. A directional correlation function defined over a B-scan frame gives distinctive curves for buried objects. The main axis of directional correlation, on which the pivotal correlating pixels and the short lines of pixels being correlated are considered, makes an angle to the scanning direction of the B-scan. This form of correlation is applied to the frame from the left-hand and the right-hand sides, and two over-plotted curves are obtained. Nine measures emphasizing directional signatures are extracted from these curves as features. The nine-element feature vectors are applied to a two-layer Artificial Neural Network, and preliminary results on the test set are promising enough to proceed to comprehensive training and testing.

  17. Deep convolutional neural networks for classifying GPR B-scans

    NASA Astrophysics Data System (ADS)

    Besaw, Lance E.; Stimac, Philip J.

    2015-05-01

    Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.
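    A minimal convolutional network for two-class B-scan patches is sketched below in PyTorch; the layer sizes, patch size, and dropout placement are assumptions for illustration and do not reproduce the architecture used in this work.

```python
# Minimal PyTorch CNN that learns features directly from 2-D GPR B-scan patches.
import torch
import torch.nn as nn

class BScanCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout(0.5),                      # "dropout" heuristic mentioned above
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                         # x: (batch, 1, 64, 64) B-scan patches
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = BScanCNN()
dummy_bscan = torch.randn(4, 1, 64, 64)           # four synthetic B-scan patches
print(model(dummy_bscan).shape)                    # torch.Size([4, 2])
```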

  18. A neural network approach to lung nodule segmentation

    NASA Astrophysics Data System (ADS)

    Hu, Yaoxiu; Menon, Prahlad G.

    2016-03-01

    Computed tomography (CT) imaging is a sensitive and specific lung cancer screening tool for the high-risk population and has been shown to be promising for detection of lung cancer. This study proposes an automatic methodology for detecting and segmenting lung nodules from CT images. The proposed methods begin with thorax segmentation, lung extraction and reconstruction of the original shape of the parenchyma using morphology operations. Next, a multi-scale Hessian-based vesselness filter is applied to extract the vasculature within the lung. The lung vasculature mask is subtracted from the lung region segmentation mask to extract 3D regions representing candidate pulmonary nodules. Finally, the remaining structures are classified as nodules through shape and intensity features, which are together used to train an artificial neural network. Up to 75% sensitivity and 98% specificity were achieved for detection of lung nodules in our testing dataset, with an overall accuracy of 97.62% +/- 0.72% using 11 selected features as input to the neural network classifier, based on 4-fold cross-validation studies. Receiver operating characteristic analysis for identifying nodules revealed an area under the curve of 0.9476.

  19. Wireless AE Event and Environmental Monitoring for Wind Turbine Blades at Low Sampling Rates

    NASA Astrophysics Data System (ADS)

    Bouzid, Omar M.; Tian, Gui Y.; Cumanan, K.; Neasham, J.

    Integration of acoustic wireless technology in structural health monitoring (SHM) applications introduces new challenges due to requirements of high sampling rates, additional communication bandwidth, memory space, and power resources. In order to circumvent these challenges, this chapter proposes a novel solution by building a wireless SHM technique in conjunction with acoustic emission (AE), with field deployment on the structure of a wind turbine. The solution requires a sampling rate lower than the Nyquist rate. In addition, features extracted from aliased AE signals, instead of signals reconstructed on board the wireless nodes, are exploited to monitor AE events such as wind, rain, strong hail, and bird strikes in different environmental conditions, in conjunction with artificial AE sources. A time-domain feature extraction algorithm, in addition to the principal component analysis (PCA) method, is used to extract and classify the relevant information, which in turn is used to recognise the test condition represented by the response signals. This proposed technique yields a significant data reduction during the monitoring process of wind turbine blades.

  20. Extraction of texture features with a multiresolution neural network

    NASA Astrophysics Data System (ADS)

    Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.

    1992-09-01

    Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears different depending on the spatial scale at which it is observed; a complete description of a texture thus implies analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped onto a layer of the neural network. A full-resolution texture image is input at the base of the pyramid, and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.

  1. Deep Learning in Label-free Cell Classification

    PubMed Central

    Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K.; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram

    2016-01-01

    Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells. PMID:26975219

  2. Deep Learning in Label-free Cell Classification

    NASA Astrophysics Data System (ADS)

    Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K.; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram

    2016-03-01

    Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.

  3. An integrated multi-sensor fusion-based deep feature learning approach for rotating machinery diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Hu, Youmin; Wang, Yan; Wu, Bo; Fan, Jikai; Hu, Zhongxu

    2018-05-01

    The diagnosis of complicated fault severity problems in rotating machinery systems is an important issue that affects the productivity and quality of manufacturing processes and industrial applications. However, it usually suffers from several deficiencies. (1) A considerable degree of prior knowledge and expertise is required not only to extract and select specific features from raw sensor signals, but also to choose a suitable fusion of sensor information. (2) Traditional artificial neural networks with shallow architectures are usually adopted, and they have a limited ability to learn complex and variable operating conditions. In multi-sensor-based diagnosis applications in particular, massive high-dimensional and high-volume raw sensor signals need to be processed. In this paper, an integrated multi-sensor fusion-based deep feature learning (IMSFDFL) approach is developed to identify the fault severity in rotating machinery processes. First, traditional statistics and energy spectrum features are extracted from multiple sensors with multiple channels and combined. Then, a fused feature vector is constructed from all of the acquisition channels. Further, deep feature learning with stacked auto-encoders is used to obtain the deep features. Finally, the traditional softmax model is applied to identify the fault severity. The effectiveness of the proposed IMSFDFL approach is primarily verified by a one-stage gearbox experimental platform that uses several accelerometers under different operating conditions. This approach can identify fault severity more effectively than traditional approaches.

  4. Online particle detection with Neural Networks based on topological calorimetry information

    NASA Astrophysics Data System (ADS)

    Ciodaro, T.; Deva, D.; de Seixas, J. M.; Damazio, D.

    2012-06-01

    This paper presents the latest results from the Ringer algorithm, which is based on artificial neural networks for the electron identification at the online filtering system of the ATLAS particle detector, in the context of the LHC experiment at CERN. The algorithm performs topological feature extraction using the ATLAS calorimetry information (energy measurements). The extracted information is presented to a neural network classifier. Studies showed that the Ringer algorithm achieves high detection efficiency, while keeping the false alarm rate low. Optimizations, guided by detailed analysis, reduced the algorithm execution time by 59%. Also, the total memory necessary to store the Ringer algorithm information represents less than 6.2 percent of the total filtering system amount.

  5. Prediction of troponin-T degradation using color image texture features in 10d aged beef longissimus steaks.

    PubMed

    Sun, X; Chen, K J; Berg, E P; Newman, D J; Schwartz, C A; Keller, W L; Maddock Carlin, K R

    2014-02-01

    The objective was to use digital color image texture features to predict troponin-T degradation in beef. Image texture features, including 88 gray level co-occurrence matrix texture features, 81 two-dimensional fast Fourier transform texture features, and 48 Gabor wavelet filter texture features, were extracted from color images of beef strip steaks (longissimus dorsi, n = 102) aged for 10 d, obtained using a digital camera and additional lighting. Steaks were designated degraded or not degraded based on troponin-T degradation determined on d 3 and d 10 postmortem by immunoblotting. Statistical analysis (STEPWISE regression model) and artificial neural network (support vector machine model, SVM) methods were designed to classify protein degradation. The d 3 and d 10 STEPWISE models were 94% and 86% accurate, respectively, while the d 3 and d 10 SVM models were 63% and 71% accurate, respectively, in predicting protein degradation in aged meat. STEPWISE and SVM models based on image texture features show potential to predict troponin-T degradation in meat.

  6. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-08-16

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex background and occlusion pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance of varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function as the measure to effectively handle the occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  7. Multiple-Primitives Hierarchical Classification of Airborne Laser Scanning Data in Urban Areas

    NASA Astrophysics Data System (ADS)

    Ni, H.; Lin, X. G.; Zhang, J. X.

    2017-09-01

    A hierarchical classification method for Airborne Laser Scanning (ALS) data of urban areas is proposed in this paper. This method is composed of three stages in which three types of primitives are utilized, i.e., smooth surfaces, rough surfaces, and individual points. In the first stage, the input ALS data is divided into smooth surfaces and rough surfaces by employing a step-wise point cloud segmentation method. In the second stage, classification based on smooth surfaces and rough surfaces is performed. Points in the smooth surfaces are first classified into ground and buildings based on semantic rules. Next, features of rough surfaces are extracted. Then, points in rough surfaces are classified into vegetation and vehicles based on the derived features and Random Forests (RF). In the third stage, point-based features are extracted for the ground points, and an individual point classification procedure is then performed to classify the ground points into bare land, artificial ground and greenbelt. Moreover, the shortcomings of existing studies are analyzed, and experiments show that the proposed method overcomes these shortcomings and handles more types of objects.

  8. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    PubMed Central

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex background and occlusion pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance the invariance of varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, and a Gaussian weight function as the measure to effectively handle the occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  9. A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork.

    PubMed

    Xu, Yi; Chen, Quansheng; Liu, Yan; Sun, Xin; Huang, Qiping; Ouyang, Qin; Zhao, Jiewen

    2018-04-01

    This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and the freshness of the samples was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired with the HMI system and processed in the following steps for further analysis. Firstly, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building the classification model. Finally, compared with a linear discriminant analysis (LDA) model and a support vector machine (SVM) model, the back propagation artificial neural network (BP-ANN) model obtained the best freshness classification, with 100% accuracy on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms is able to evaluate the freshness of pork accurately at the microscopic level, which plays an important role in animal food quality control.

  10. A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork

    PubMed Central

    Xu, Yi; Chen, Quansheng; Liu, Yan; Sun, Xin; Huang, Qiping; Ouyang, Qin; Zhao, Jiewen

    2018-01-01

    This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and the freshness of the samples was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired with the HMI system and processed in the following steps for further analysis. Firstly, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building the classification model. Finally, compared with a linear discriminant analysis (LDA) model and a support vector machine (SVM) model, the back propagation artificial neural network (BP-ANN) model obtained the best freshness classification, with 100% accuracy on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms is able to evaluate the freshness of pork accurately at the microscopic level, which plays an important role in animal food quality control. PMID:29805285

  11. [Artificial intelligence in sleep analysis (ARTISANA)--modelling visual processes in sleep classification].

    PubMed

    Schwaibold, M; Schöller, B; Penzel, T; Bolz, A

    2001-05-01

    We describe a novel approach to the problem of automated sleep stage recognition. The ARTISANA algorithm mimics the behaviour of a human expert visually scoring sleep stages (Rechtschaffen and Kales classification). It comprises a number of interacting components that imitate the stepwise approach of the human expert, and artificial intelligence components. On the basis of parameters extracted at 1-s intervals from the signal curves, artificial neural networks recognize the incidence of typical patterns, e.g. delta activity or K complexes. This is followed by a rule interpretation stage that identifies the sleep stage with the aid of a neuro-fuzzy system while taking account of the context. Validation studies based on the records of 8 patients with obstructive sleep apnoea have confirmed the potential of this approach. Further features of the system include the transparency of the decision-taking process, and the flexibility of the option for expanding the system to cover new patterns and criteria.

  12. Real-time ultrasound image classification for spine anesthesia using local directional Hadamard features.

    PubMed

    Pesteie, Mehran; Abolmaesumi, Purang; Ashab, Hussam Al-Deen; Lessoway, Victoria A; Massey, Simon; Gunka, Vit; Rohling, Robert N

    2015-06-01

    Injection therapy is a commonly used solution for back pain management. This procedure typically involves percutaneous insertion of a needle between or around the vertebrae, to deliver anesthetics near nerve bundles. Most frequently, spinal injections are performed either blindly using palpation or under the guidance of fluoroscopy or computed tomography. Recently, due to the drawbacks of the ionizing radiation of such imaging modalities, there has been a growing interest in using ultrasound imaging as an alternative. However, the complex spinal anatomy with different wave-like structures, affected by speckle noise, makes the accurate identification of the appropriate injection plane difficult. The aim of this study was to propose an automated system that can identify the optimal plane for epidural steroid injections and facet joint injections. A multi-scale and multi-directional feature extraction system to provide automated identification of the appropriate plane is proposed. Local Hadamard coefficients are obtained using the sequency-ordered Hadamard transform at multiple scales. Directional features are extracted from local coefficients which correspond to different regions in the ultrasound images. An artificial neural network is trained based on the local directional Hadamard features for classification. The proposed method yields distinctive features for classification which successfully classified 1032 images out of 1090 for epidural steroid injection and 990 images out of 1052 for facet joint injection. In order to validate the proposed method, a leave-one-out cross-validation was performed. The average classification accuracy for leave-one-out validation was 94 % for epidural and 90 % for facet joint targets. Also, the feature extraction time for the proposed method was 20 ms for a native 2D ultrasound image. A real-time machine learning system based on the local directional Hadamard features extracted by the sequency-ordered Hadamard transform for detecting the laminae and facet joints in ultrasound images has been proposed. The system has the potential to assist the anesthesiologists in quickly finding the target plane for epidural steroid injections and facet joint injections.
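    The sketch below shows one way to build a sequency-ordered Hadamard matrix (natural-order rows re-sorted by their number of sign changes) and apply it to a local image patch; the patch size and normalization are assumptions, not the authors' exact implementation.

```python
# Sketch of a sequency-ordered Hadamard transform applied to a local patch.
import numpy as np
from scipy.linalg import hadamard

def sequency_ordered_hadamard(n):
    """Return the n x n Hadamard matrix with rows sorted by sequency (sign changes)."""
    H = hadamard(n)                                   # natural (Hadamard) order
    sign_changes = (np.diff(np.sign(H), axis=1) != 0).sum(axis=1)
    return H[np.argsort(sign_changes)]

patch = np.random.default_rng(5).random((8, 8))       # local ultrasound patch placeholder
H = sequency_ordered_hadamard(8)
coefficients = H @ patch @ H.T / 8.0                   # local 2-D Hadamard coefficients
print(coefficients.shape)
```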

  13. Fault diagnosis of helical gearbox using acoustic signal and wavelets

    NASA Astrophysics Data System (ADS)

    Pranesh, SK; Abraham, Siju; Sugumaran, V.; Amarnath, M.

    2017-05-01

    The efficient transmission of power in machines is needed, and gears are an appropriate choice. Faults in gears result in loss of energy and money. Monitoring and fault diagnosis are done by analysis of the acoustic and vibration signals, which are generally considered to be unwanted by-products. This study proposes the use of a machine learning algorithm for condition monitoring of a helical gearbox by using the sound signals produced by the gearbox. Artificial faults were created and the resulting signals were captured by a microphone. An extensive study using different wavelet transformations for feature extraction from the acoustic signals was done, followed by wavelet selection and feature selection using the J48 decision tree; feature classification was performed using the K-star algorithm. A classification accuracy of 100% was obtained in the study.

  14. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability by using the immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined together, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false rate.

  15. [Surface electromyography signal classification using gray system theory].

    PubMed

    Xie, Hongbo; Ma, Congbin; Wang, Zhizhong; Huang, Hai

    2004-12-01

    A new method based on gray correlation was introduced to improve the identification rate in artificial limb control. The electromyography (EMG) signal was first transformed into the time-frequency domain by wavelet transform. Singular value decomposition (SVD) was then used to extract a feature vector from the wavelet coefficients for pattern recognition. The decision was made according to the maximum gray correlation coefficient. Compared with neural network recognition, this robust method has an almost equivalent recognition rate but much lower computation cost and requires fewer training samples.
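    A hedged sketch of the wavelet-plus-SVD feature step described above follows; the wavelet family, decomposition level, and zero-padding scheme are assumptions made only for illustration.

```python
# Sketch: decompose an EMG segment with a discrete wavelet transform, then take the
# singular values of the coefficient matrix as a compact feature vector.
import numpy as np
import pywt

rng = np.random.default_rng(6)
emg = rng.normal(size=1024)                       # placeholder surface EMG segment

coeffs = pywt.wavedec(emg, wavelet="db4", level=4)
# Stack the coefficient arrays into a matrix (zero-padded to equal length).
width = max(len(c) for c in coeffs)
matrix = np.array([np.pad(c, (0, width - len(c))) for c in coeffs])

singular_values = np.linalg.svd(matrix, compute_uv=False)
feature_vector = singular_values / singular_values.sum()   # normalised SVD features
print("feature vector:", np.round(feature_vector, 3))
```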

  16. Swallowing Mechanics Associated With Artificial Airways, Bolus Properties, and Penetration-Aspiration Status in Trauma Patients.

    PubMed

    Dietsch, Angela M; Rowley, Christopher B; Solomon, Nancy Pearl; Pearson, William G

    2017-09-18

    Artificial airway procedures such as intubation and tracheotomy are common in the treatment of traumatic injuries, and bolus modifications may be implemented to help manage swallowing disorders. This study assessed artificial airway status, bolus properties (volume and viscosity), and the occurrence of laryngeal penetration and/or aspiration in relation to mechanical features of swallowing. Coordinates of anatomical landmarks were extracted at minimum and maximum hyolaryngeal excursion from 228 videofluoroscopic swallowing studies representing 69 traumatically injured U.S. military service members with dysphagia. Morphometric canonical variate and regression analyses examined associations between swallowing mechanics and bolus properties based on artificial airway and penetration-aspiration status. Significant differences in swallowing mechanics were detected between extubated versus tracheotomized (D = 1.32, p < .0001), extubated versus decannulated (D = 1.74, p < .0001), and decannulated versus tracheotomized (D = 1.24, p < .0001) groups per post hoc discriminant function analysis. Tracheotomy-in-situ and decannulated subgroups exhibited increased head/neck extension and posterior relocation of the larynx. Swallowing mechanics associated with (a) penetration-aspiration status and (b) bolus properties were moderately related for extubated and decannulated subgroups, but not the tracheotomized subgroup, per morphometric regression analysis. Specific differences in swallowing mechanics associated with artificial airway status and certain bolus properties may guide therapeutic intervention in trauma-based dysphagia.

  17. Optimization of extraction of linarin from Flos chrysanthemi indici by response surface methodology and artificial neural network.

    PubMed

    Pan, Hongye; Zhang, Qing; Cui, Keke; Chen, Guoquan; Liu, Xuesong; Wang, Longhu

    2017-05-01

    The extraction of linarin from Flos chrysanthemi indici by ethanol was investigated. Two modeling techniques, response surface methodology and artificial neural network, were adopted to optimize process parameters such as ethanol concentration, extraction period, extraction frequency, and solvent-to-material ratio. We showed that both methods provided good predictions, but the artificial neural network provided a better and more accurate result. The optimum process parameters were an ethanol concentration of 74%, an extraction period of 2 h, extraction three times, and a solvent-to-material ratio of 12 mL/g. The experimental yield of linarin was 90.5%, which deviated by less than 1.6% from the predicted result.

  18. Broiler weight estimation based on machine vision and artificial neural network.

    PubMed

    Amraei, S; Abdanan Mehdizadeh, S; Salari, S

    2017-04-01

    1. Machine vision and artificial neural network (ANN) procedures were used to estimate the live body weight of broiler chickens in a flock of 30 1-d-old broilers reared for 42 d. 2. Imaging was performed twice daily. To localise chickens within the pen, an ellipse fitting algorithm was used and the chickens' heads and tails were removed using the Chan-Vese method. 3. The correlations between body weight and 6 extracted physical features indicated strong correlations between body weight and 5 of the features: area, perimeter, convex area, and major and minor axis length. 5. According to statistical analysis there was no significant difference between morning and afternoon data over the 42 d. 6. In an attempt to improve the accuracy of live weight approximation, different ANN techniques, including Bayesian regulation, Levenberg-Marquardt, scaled conjugate gradient and gradient descent, were used. Bayesian regulation, with an R2 value of 0.98, was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined and most errors were less than 50 g.

  19. Stereo Image Ranging For An Autonomous Robot Vision System

    NASA Astrophysics Data System (ADS)

    Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven

    1985-12-01

    The principles of stereo vision for three-dimensional data acquisition are well-known and can be applied to the problem of an autonomous robot vehicle. Coincidental points in the two images are located and then the location of that point in a three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low level image processing tasks. Specifically a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
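    The ranging geometry underlying this approach can be made concrete with a small worked example: depth Z = f·B/d, with focal length f in pixels, baseline B, and disparity d; the numbers below are illustrative.

```python
# Worked example of stereo ranging: once the same feature is registered in both
# images, depth follows from Z = f * B / d.
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("feature must appear further left in the left image")
    return focal_px * baseline_m / disparity

# A feature registered at column 412 in the left image and 396 in the right,
# with a 700-pixel focal length and a 0.3 m baseline (illustrative numbers):
print(f"range: {stereo_depth(700, 0.3, 412, 396):.2f} m")   # -> 13.13 m
```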

  20. Combination of radiological and gray level co-occurrence matrix textural features used to distinguish solitary pulmonary nodules by computed tomography.

    PubMed

    Wu, Haifeng; Sun, Tao; Wang, Jingjing; Li, Xia; Wang, Wei; Huo, Da; Lv, Pingxin; He, Wen; Wang, Keyang; Guo, Xiuhua

    2013-08-01

    The objective of this study was to investigate whether a combination of radiological and textural features can differentiate malignant from benign solitary pulmonary nodules on computed tomography. Features including 13 gray level co-occurrence matrix textural features and 12 radiological features were extracted from 2,117 CT slices, which came from 202 (116 malignant and 86 benign) patients. Lasso-type regularization applied to a nonlinear regression model was used to select predictive features, and a BP artificial neural network was used to build the diagnostic model. Eight radiological and two textural features were obtained after the Lasso-type regularization procedure. The 12 radiological features alone reached an area under the ROC curve (AUC) of 0.84 in differentiating between malignant and benign lesions. The 10 selected features improved the AUC to 0.91. The evaluation results showed that the method of selecting radiological and textural features appears to be more effective in distinguishing malignant from benign solitary pulmonary nodules on computed tomography.
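    The sketch below illustrates the two-stage idea (L1-penalized feature selection followed by a backpropagation network) on synthetic data; it substitutes L1-regularized logistic regression for the paper's Lasso-type nonlinear regression and invents the data, so it is a sketch of the approach rather than the authors' model.

```python
# Two-stage sketch: L1 (Lasso-style) feature selection, then a backpropagation ANN
# trained on the selected subset. Data and feature counts are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(202, 25))                       # 13 textural + 12 radiological features
y = (X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=202) > 0).astype(int)  # benign/malignant

# L1-regularised logistic regression as the Lasso-type selector.
selector = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = np.flatnonzero(selector.coef_[0])
print("selected feature indices:", selected)

ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(X[:, selected], y)
print("training accuracy:", ann.score(X[:, selected], y))
```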

  1. Vegetation extraction from high-resolution satellite imagery using the Normalized Difference Vegetation Index (NDVI)

    NASA Astrophysics Data System (ADS)

    AlShamsi, Meera R.

    2016-10-01

    Over the past years, there has been extensive urban development all over the UAE. Dubai is one of the cities that has experienced rapid growth in both development and population. That growth can have a negative effect on the surrounding environment; hence, there has been a need to protect the environment from these fast-paced changes. One of the major impacts of this growth is on vegetation. As technology evolves day by day, it is possible to monitor changes happening in different areas of the world using satellite imagery. The data from these imageries can be utilized to identify vegetation in different areas of an image through a process called vegetation detection. Being able to detect and monitor vegetation is very beneficial for municipal planning and management and for environmental authorities. Through this, analysts can monitor vegetation growth in various areas and analyze these changes. By utilizing satellite imagery with the necessary data, different types of vegetation can be studied and analyzed, such as parks, farms, and artificial grass in sports fields. In this paper, vegetation features are detected and extracted through the SAFIY system (the Smart Application for Feature extraction and 3D modeling using high resolution satellite ImagerY) by using high-resolution satellite imagery from the DubaiSat-2 and DEIMOS-2 satellites, which provide panchromatic images of 1 m resolution and spectral bands (red, green, blue and near infrared) of 4 m resolution. The SAFIY system is a joint collaboration between MBRSC and DEIMOS Space UK. It uses image-processing algorithms to extract different features (roads, water, vegetation, and buildings) to generate vector map data. The process to extract green areas (vegetation) utilizes spectral information (such as the red and near-infrared bands) from the satellite images. These detected vegetation features are extracted as vector data in the SAFIY system and can be updated and edited by end users, such as governmental entities and municipalities.
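    A minimal NDVI computation of the kind used to flag vegetation pixels is sketched below; the threshold value and synthetic bands are illustrative, and SAFIY's actual extraction pipeline is more involved.

```python
# Minimal NDVI computation on co-registered red and near-infrared bands.
import numpy as np

def ndvi(red, nir, eps=1e-9):
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)     # NDVI = (NIR - Red) / (NIR + Red)

rng = np.random.default_rng(8)
red_band = rng.random((100, 100))              # placeholder red band
nir_band = rng.random((100, 100))              # placeholder near-infrared band

index = ndvi(red_band, nir_band)
vegetation_mask = index > 0.3                  # common threshold; tune per scene/sensor
print("vegetation pixels:", int(vegetation_mask.sum()))
```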

  2. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.

    In this work a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called ''Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres'' (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are the use of an embedded artificial neural network architecture optimized with the ''Robust design of artificial neural networks methodology'' and the use of two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the rate counts of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.

  3. NSDann2BS, a neutron spectrum unfolding code based on neural networks technology and two bonner spheres

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are the use of an embedded artificial neural network architecture optimized with the "Robust design of artificial neural networks methodology" and the use of two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the rate counts of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.

  4. Driving profile modeling and recognition based on soft computing approach.

    PubMed

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

    Advancements in biometrics-based authentication have led to its increasing prominence, and it is being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart cards as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security systems to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with bus and truck drivers.
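
    A minimal sketch of a GMM-based feature extraction in the spirit of this abstract is shown below: frame-level pedal-pressure observations (value and first difference) are modelled with a Gaussian mixture, and the stacked mixture parameters serve as a per-driver feature vector. The observation design and component count are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_driver_features(pedal_signal, n_components=4):
        """Stack GMM parameters fitted to pedal-pressure observations."""
        # frame-level observations: pressure value and its first difference
        obs = np.column_stack([pedal_signal[1:], np.diff(pedal_signal)])
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                              random_state=0).fit(obs)
        return np.hstack([gmm.means_.ravel(), gmm.covariances_.ravel(), gmm.weights_])

    # Feature vectors from the accelerator and brake signals can then be
    # concatenated and passed to a fuzzy neural network or MLP classifier.
    ```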

  5. Classification of Weed Species Using Artificial Neural Networks Based on Color Leaf Texture Feature

    NASA Astrophysics Data System (ADS)

    Li, Zhichen; An, Qiu; Ji, Changying

    The potential impact of herbicide utilization compels people to seek new methods of weed control. Selective herbicide application is an optimal way to reduce herbicide usage while maintaining weed control. The key to selective herbicide application is discriminating weeds exactly. The HSI color co-occurrence method (CCM) texture analysis technique was used to extract four texture parameters: angular second moment (ASM), entropy (E), inertia quadrature (IQ), and inverse difference moment or local homogeneity (IDM). The weed species selected for study were Arthraxon hispidus, Digitaria sanguinalis, Petunia, Cyperus, Alternanthera philoxeroides and Corchoropsis psilocarpa. The Neuroshell2 software was used to design the structure of the neural network and to train and test it on the data. It was found that the 8-40-1 artificial neural network provided the best classification performance and was capable of classification accuracies of 78%.

  6. Searching for the main anti-bacterial components in artificial Calculus bovis using UPLC and microcalorimetry coupled with multi-linear regression analysis.

    PubMed

    Zang, Qing-Ce; Wang, Jia-Bo; Kong, Wei-Jun; Jin, Cheng; Ma, Zhi-Jie; Chen, Jing; Gong, Qian-Feng; Xiao, Xiao-He

    2011-12-01

    The fingerprints of artificial Calculus bovis extracts from different solvents were established by ultra-performance liquid chromatography (UPLC), and the anti-bacterial activities of artificial C. bovis extracts on Staphylococcus aureus (S. aureus) growth were studied by microcalorimetry. The UPLC fingerprints were evaluated using hierarchical clustering analysis. Some quantitative parameters obtained from the thermogenic curves of S. aureus growth affected by artificial C. bovis extracts were analyzed using principal component analysis. The spectrum-effect relationships between UPLC fingerprints and anti-bacterial activities were investigated using multi-linear regression analysis. The results showed that peak 1 (taurocholate sodium), peak 3 (unknown compound), peak 4 (cholic acid), and peak 6 (chenodeoxycholic acid) are more significant than the other peaks, with standardized parameter estimates of 0.453, -0.166, 0.749, and 0.025, respectively. Thus, cholic acid, taurocholate sodium, and chenodeoxycholic acid might be the major anti-bacterial components in artificial C. bovis. Altogether, this work provides a general model combining UPLC chromatography and anti-bacterial effect to study the spectrum-effect relationships of artificial C. bovis extracts, which can be used to discover the main anti-bacterial components in artificial C. bovis or other Chinese herbal medicines with anti-bacterial effects. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
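
    The spectrum-effect step can be pictured as a standardized multi-linear regression of the anti-bacterial response on the UPLC peak areas; the sketch below is a generic illustration under that assumption and does not reproduce the study's data handling.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    def standardized_coefficients(peak_areas, effect):
        """Standardized regression coefficients of the effect on UPLC peak areas."""
        X = StandardScaler().fit_transform(peak_areas)   # peaks as predictors
        y = (effect - effect.mean()) / effect.std()      # standardized response
        # larger |coefficient| -> larger apparent contribution of that peak
        return LinearRegression().fit(X, y).coef_
    ```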

  7. Application of texture analysis method for classification of benign and malignant thyroid nodules in ultrasound images.

    PubMed

    Abbasian Ardakani, Ali; Gharbali, Akbar; Mohammadi, Afshin

    2015-01-01

    The aim of this study was to evaluate a computer aided diagnosis (CAD) system with texture analysis (TA) to improve radiologists' accuracy in identifying thyroid nodules as malignant or benign. A total of 70 cases (26 benign and 44 malignant) were analyzed in this study. We extracted up to 270 statistical texture features as descriptors for each selected region of interest (ROI) in three normalization schemes (default, 3s and 1%-99%). The features were then reduced to the 10 best and most effective ones using the lowest probability of classification error and average correlation coefficients (POE+ACC) and the Fisher coefficient (Fisher). These features were analyzed under standard and nonstandard states. For TA of the thyroid nodules, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA) were applied. A First Nearest-Neighbour (1-NN) classifier was applied to the features resulting from PCA and LDA. NDA features were classified by an artificial neural network (A-NN). Receiver operating characteristic (ROC) curve analysis was used to examine the performance of the TA methods. The best results were obtained with 1%-99% normalization and features extracted by the POE+ACC algorithm and analyzed by NDA, with an area under the ROC curve (Az) of 0.9722, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Our results indicate that TA is a reliable method that can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
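
    A minimal sketch of the PCA-then-1-NN and LDA-then-1-NN analysis pipelines is given below; the number of retained principal components and the scaling step are assumptions rather than the study's settings.

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    pca_1nn = make_pipeline(StandardScaler(), PCA(n_components=10),
                            KNeighborsClassifier(n_neighbors=1))
    lda_1nn = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis(),
                            KNeighborsClassifier(n_neighbors=1))
    # X: selected texture features, y: 0 = benign, 1 = malignant
    # auc = cross_val_score(pca_1nn, X, y, cv=5, scoring="roc_auc").mean()
    ```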

  8. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    PubMed

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  9. Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.

    PubMed

    Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil

    2018-01-25

    Due to recent developments in technology, the complexity of multimedia has significantly increased, and the retrieval of similar multimedia content is an open research problem. Content-Based Image Retrieval (CBIR) is a process that provides a framework for image search, and low-level visual features are commonly used to retrieve the images from the image database. The basic requirement in any image retrieval process is to sort the images by close similarity in terms of visual appearance. Color, shape and texture are examples of low-level image features. Features play a significant role in image processing. The powerful representation of an image is known as the feature vector, and feature extraction techniques are applied to obtain features that will be useful in classifying and recognizing images. As features define the behavior of an image, they also determine storage requirements, classification efficiency, and computation time. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios each feature extraction technique is preferable. The effectiveness of the CBIR approach is fundamentally based on feature extraction. In image processing tasks such as object recognition and image retrieval, the feature descriptor is one of the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image by using distance metrics. The proposed method for image retrieval is constructed on YCbCr color with a Canny edge histogram and the discrete wavelet transform. The combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is additionally compared to find the suitability of a specific wavelet function for image retrieval. The proposed algorithm is implemented and tested on the Wang image database. For image retrieval, an Artificial Neural Network (ANN) is used and applied to a standard dataset in the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and compared with other proposed methods to demonstrate the superiority of our method. The proposed approach outperforms existing research in terms of average precision and recall values.
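
    One possible reading of such a descriptor is sketched below: YCbCr colour histograms, an edge-pixel fraction standing in for the Canny edge histogram, and 2-D DWT sub-band energies are concatenated, and retrieval ranks database images by Euclidean distance. The bin counts, the 'haar' wavelet and the exact feature layout are assumptions rather than the paper's implementation.

    ```python
    import numpy as np
    import cv2
    import pywt

    def cbir_descriptor(bgr_image, bins=32):
        """Concatenate YCbCr histograms, an edge fraction and DWT energies."""
        ycbcr = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
        colour_hist = np.hstack([np.histogram(ycbcr[..., c], bins=bins,
                                              range=(0, 255), density=True)[0]
                                 for c in range(3)])
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)
        edge_hist = [np.count_nonzero(edges) / edges.size]   # edge-pixel fraction
        cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
        wavelet_energy = [float(np.mean(np.abs(c))) for c in (cA, cH, cV, cD)]
        return np.hstack([colour_hist, edge_hist, wavelet_energy])

    def retrieve(query_vec, database_vecs, k=10):
        """Indices of the k nearest database images by Euclidean distance."""
        d = np.linalg.norm(database_vecs - query_vec, axis=1)
        return np.argsort(d)[:k]
    ```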

  10. Deep Learning in Label-free Cell Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia

    Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. In conclusion, this system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.

  11. Deep Learning in Label-free Cell Classification

    DOE PAGES

    Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; ...

    2016-03-15

    Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. In conclusion, this system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.

  12. A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.

    PubMed

    Xue, Xiaoming; Zhou, Jianzhong

    2017-01-01

    To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis and artificial intelligence technology, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the execution of the proposed method is divided into three steps, i.e., preliminary fault detection, fault type recognition and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made by a statistical analysis method based on permutation entropy theory. If a fault exists, the following two processes based on the artificial intelligence approach are performed to further recognize the fault type and then identify the fault degree. For these two subsequent steps, mixed-domain state features containing time-domain, frequency-domain and multi-scale features are extracted to represent the fault characteristics under different working conditions. As a powerful time-frequency analysis method, the fast EEMD method was employed to obtain the multi-scale features. Furthermore, due to information redundancy and the submergence of the original feature space, a novel manifold learning method (modified LGPCA) is introduced to realize low-dimensional representations of the high-dimensional feature space. Finally, two cases with 12 working conditions each were employed to evaluate the performance of the proposed method, where vibration signals were measured from an experimental rolling element bearing test bench. The analysis results showed the effectiveness and superiority of the proposed method, whose diagnostic approach is more suitable for practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
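
    The preliminary health check rests on permutation entropy; a self-contained implementation is sketched below, with illustrative order and delay values rather than the paper's settings.

    ```python
    import math
    from collections import Counter
    import numpy as np

    def permutation_entropy(signal, order=3, delay=1, normalize=True):
        """Shannon entropy of ordinal patterns in a 1-D signal."""
        n = len(signal) - (order - 1) * delay
        patterns = Counter(tuple(np.argsort(signal[i:i + order * delay:delay]))
                           for i in range(n))
        p = np.array(list(patterns.values()), dtype=float) / n
        pe = -np.sum(p * np.log2(p))
        return pe / math.log2(math.factorial(order)) if normalize else pe

    # In the preliminary check, a marked change relative to a healthy baseline
    # would trigger the fault-type and fault-degree identification steps.
    ```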

  13. Automatic analysis and classification of surface electromyography.

    PubMed

    Abou-Chadi, F E; Nashar, A; Saad, M

    2001-01-01

    In this paper, parametric modeling of surface electromyography (SEMG) signals, which facilitates automatic feature extraction, and artificial neural networks (ANNs) are combined to provide an integrated system for the automatic analysis and diagnosis of myopathic disorders. Three ANN paradigms were investigated: the multilayer backpropagation algorithm, the self-organizing feature map algorithm and a probabilistic neural network model. The performance of the three classifiers was compared with that of the conventional Fisher linear discriminant (FLD) classifier. The results show that the three ANN models give higher performance; the percentage of correct classification reaches 90%. Poorer diagnostic performance was obtained from the FLD classifier. The system presented here indicates that surface EMG, when properly processed, can be used to provide the physician with a diagnostic assist device.

  14. Northeast Artificial Intelligence Consortium Annual Report 1986. Volume 5. Building an Intelligent Assistant: The Acquisition, Integration, and Maintenance of Complex Distributed Tasks

    DTIC Science & Technology

    1988-06-01

    extraction nets. TerrainMaps: Tools for physical and pseudo-physical molding and growing of features on terrain and thematic maps.

  15. Analysis of the Growth Process of Neural Cells in Culture Environment Using Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Mirsafianf, Atefeh S.; Isfahani, Shirin N.; Kasaei, Shohreh; Mobasheri, Hamid

    Here we present an approach for processing neural cell images to analyze their growth process in a culture environment. We have applied several image processing techniques for: 1- environmental noise reduction, 2- neural cell segmentation, 3- neural cell classification based on their dendrites' growth conditions, and 4- neuron feature extraction and measurement (e.g., cell body area, number of dendrites, axon length, and so on). Due to the large amount of noise in the images, we have used feed-forward artificial neural networks to detect edges more precisely.

  16. Determination of chlorine concentration using single temperature modulated semiconductor gas sensor

    NASA Astrophysics Data System (ADS)

    Woźniak, Ł.; Kalinowski, P.; Jasiński, G.; Jasiński, P.

    2016-11-01

    A periodic temperature modulation using a sinusoidal heater voltage was applied to a commercial SnO2 semiconductor gas sensor. The resulting resistance response of the sensor was analyzed using a feature extraction method based on the Fast Fourier Transform (FFT). The amplitudes of the higher harmonics of the FFT of the dynamic nonlinear responses to the measured gas were then used as inputs to an Artificial Neural Network (ANN). Determination of the chlorine concentration was performed. Moreover, this work evaluates the sensor performance under sinusoidal temperature modulation.
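
    The feature extraction step can be sketched as picking the FFT bins nearest the harmonics of the modulation frequency from one period of the resistance response; the sampling rate, modulation frequency and number of harmonics below are placeholders rather than the study's values.

    ```python
    import numpy as np

    def harmonic_amplitudes(resistance, fs, f_mod, n_harmonics=5):
        """Amplitudes at the first n harmonics of the heater modulation frequency."""
        spectrum = np.abs(np.fft.rfft(resistance)) / len(resistance)
        freqs = np.fft.rfftfreq(len(resistance), d=1.0 / fs)
        idx = [np.argmin(np.abs(freqs - k * f_mod)) for k in range(1, n_harmonics + 1)]
        return spectrum[idx]   # feature vector passed to the ANN

    # Example: one 100 s modulation period (f_mod = 0.01 Hz) sampled at 10 Hz
    # gives a 5-element feature vector per gas exposure.
    ```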

  17. Smart sensor for terminal homing

    NASA Astrophysics Data System (ADS)

    Panda, D.; Aggarwal, R.; Hummel, R.

    1980-01-01

    The practical scene matching problem is considered to present certain complications which must extend classical image processing capabilities. Certain aspects of the scene matching problem which must be addressed by a smart sensor for terminal homing are discussed. First a philosophy for treating the matching problem for the terminal homing scenario is outlined. Then certain aspects of the feature extraction process and symbolic pattern matching are considered. It is thought that in the future general ideas from artificial intelligence will be more useful for terminal homing requirements of fast scene recognition and pattern matching.

  18. Robust hepatic vessel segmentation using multi deep convolution network

    NASA Astrophysics Data System (ADS)

    Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei

    2017-03-01

    Extraction of the blood vessels of an organ is a challenging task in the area of medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by human experts. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolution neural networks that extract features from different planes of the CT data. The three networks share features at the first convolution layer but separately learn their own features in the second layer. All three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conducted experiments on 12 CT volumes, with training data randomly generated from 5 CT volumes and the remaining 7 used for testing. Our network yields an average Dice coefficient of 0.830, while a 3D deep convolution neural network yields around 0.7 and a multi-scale approach yields only 0.6.

  19. Fault detection and isolation of high temperature proton exchange membrane fuel cell stack under the influence of degradation

    NASA Astrophysics Data System (ADS)

    Jeppesen, Christian; Araya, Samuel Simon; Sahlin, Simon Lennart; Thomas, Sobi; Andreasen, Søren Juhl; Kær, Søren Knudsen

    2017-08-01

    This study proposes a data-driven impedance-based methodology for fault detection and isolation of low and high cathode stoichiometry, high CO concentration in the anode gas, high methanol vapour concentrations in the anode gas and low anode stoichiometry for high temperature PEM fuel cells. The fault detection and isolation algorithm is based on an artificial neural network classifier, which uses three extracted features as input. Two of the proposed features are based on angles in the impedance spectrum, and are therefore relative to specific points and shown to be independent of degradation, contrary to other available feature extraction methods in the literature. The experimental data are based on a 35 day experiment, in which 2010 unique electrochemical impedance spectroscopy measurements were recorded. Testing of the algorithm resulted in good detectability of the faults, except for the high methanol vapour concentration in the anode gas fault, which was found to be difficult to distinguish from normal operational data. The achieved accuracy for faults related to CO pollution and anode and cathode stoichiometry is 100%. The overall accuracy on the test data is 94.6%.
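
    A hedged sketch of the classification stage is given below: two point-to-point angles on the Nyquist curve plus a low-frequency intercept stand in for the paper's three features and feed a small neural-network classifier. The specific feature definitions, frequency indices and network size are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def nyquist_angle(z_a, z_b):
        """Angle of the segment joining two impedance points in the (Re, -Im) plane."""
        return np.arctan2(-(z_b.imag - z_a.imag), z_b.real - z_a.real)

    def impedance_features(z, i_low, i_mid, i_high):
        """Three scalar features from a complex impedance spectrum z (placeholders)."""
        return np.array([nyquist_angle(z[i_low], z[i_mid]),
                         nyquist_angle(z[i_mid], z[i_high]),
                         z[i_low].real])

    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    # clf.fit(feature_matrix, labels)  # labels: normal, low/high stoichiometry, high CO, ...
    ```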

  20. Dental panoramic image analysis for enhancement biomarker of mandibular condyle for osteoporosis early detection

    NASA Astrophysics Data System (ADS)

    Suprijanto; Azhari; Juliastuti, E.; Septyvergy, A.; Setyagar, N. P. P.

    2016-03-01

    Osteoporosis is a degenerative disease characterized by low Bone Mineral Density (BMD). Currently, the BMD level is determined by Dual Energy X-ray Absorptiometry (DXA) at the lumbar vertebrae and femur. Previous studies reported that dental panoramic radiography images contain information with potential for early osteoporosis detection. This work reports an alternative scheme that consists of determining the Region of Interest (ROI) of the mandibular condyle in the image as a biomarker, extracting features from the ROI, and classifying bone conditions. The minimum intensity value in the cavity area is used to compensate for an offset in the ROI. For feature extraction, the fraction of intensity values in the ROI that represent high bone density and the total ROI area are used. Classification is evaluated by the ability of each feature and their combinations to detect BMD in two classes (normal and abnormal) with the artificial neural network method. The evaluation used 105 panoramic images from menopausal women, consisting of 36 training images and 69 test images divided into the two classes. The two-class classification achieved an accuracy of 88.0% and a sensitivity of 88.0%.

  1. Evaluation of effectiveness of wavelet based denoising schemes using ANN and SVM for bearing condition classification.

    PubMed

    Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N

    2012-01-01

    Wavelet based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.
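
    For orientation, one generic wavelet-threshold denoising scheme and the SNR/RMSE scoring used to compare schemes are sketched below; the wavelet, decomposition level and universal threshold are illustrative choices, not the seven schemes evaluated in the paper.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        """Soft-threshold the detail coefficients with the universal threshold."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(signal)]

    def snr_db(clean, denoised):
        return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - denoised) ** 2))

    def rmse(clean, denoised):
        return float(np.sqrt(np.mean((clean - denoised) ** 2)))
    ```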

  2. A novel approach for detection and classification of mammographic microcalcifications using wavelet analysis and extreme learning machine.

    PubMed

    Malar, E; Kandaswamy, A; Chakravarthy, D; Giri Dharan, A

    2012-09-01

    The objective of this paper is to demonstrate the effectiveness of wavelet based tissue texture analysis for microcalcification detection in digitized mammograms using the Extreme Learning Machine (ELM). Microcalcifications are tiny deposits of calcium in the breast tissue which are potential indicators for early detection of breast cancer. The dense nature of the breast tissue and the poor contrast of the mammogram image limit the effectiveness of identifying microcalcifications. Hence, a new approach to discriminate microcalcifications from normal tissue is proposed using wavelet features and is compared with feature vectors extracted using Gray Level Spatial Dependence Matrix (GLSDM) and Gabor filter based techniques. A total of 120 Regions of Interest (ROIs) extracted from 55 mammogram images of the mini-MIAS database, including normal and microcalcification images, are used in the current research. The network is trained with the above mentioned features, and the results show that ELM produces relatively better classification accuracy (94%) with a significant reduction in training time compared with other classifiers such as the Bayesnet classifier, the Naive Bayes classifier, and the Support Vector Machine. ELM also avoids problems like local minima, improper learning rate, and overfitting. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Label-free visualization of ultrastructural features of artificial synapses via cryo-EM.

    PubMed

    Gopalakrishnan, Gopakumar; Yam, Patricia T; Madwar, Carolin; Bostina, Mihnea; Rouiller, Isabelle; Colman, David R; Lennox, R Bruce

    2011-12-21

    The ultrastructural details of presynapses formed between artificial substrates of submicrometer silica beads and hippocampal neurons are visualized via cryo-electron microscopy (cryo-EM). The silica beads are derivatized by poly-d-lysine or lipid bilayers. Molecular features known to exist at presynapses are clearly present at these artificial synapses, as visualized by cryo-EM. Key synaptic features such as the membrane contact area at synaptic junctions, the presynaptic bouton containing presynaptic vesicles, as well as microtubular structures can be identified. This is the first report of the direct, label-free observation of ultrastructural details of artificial synapses.

  4. High Resolution SAR Imaging Employing Geometric Features for Extracting Seismic Damage of Buildings

    NASA Astrophysics Data System (ADS)

    Cui, L. P.; Wang, X. P.; Dou, A. X.; Ding, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) images are relatively easy to acquire but difficult to interpret. This paper investigates how to identify seismic damage to buildings using SAR imaging geometric features. The SAR imaging geometric features of buildings, such as high-intensity layover, the bright line induced by double-bounce backscattering, and dark shadow, are analysed; the combined imaging geometric regions show obvious differences in the texture features of homogeneity, similarity and entropy between un-collapsed and collapsed buildings in airborne SAR images acquired over Yushu city, damaged by the 2010 Ms7.1 Yushu, Qinghai, China earthquake, which implies a potential capability to discriminate collapsed from un-collapsed buildings in SAR images. The study also shows that the proportion of highlight (layover and bright line) area (HA) is related to the degree of seismic damage; thus a SAR image damage index (SARDI), related to the ratio of HA to the building occupation area in a street block (SA), is proposed. HA is identified through feature extraction with high-pass and low-pass filtering of the SAR image in the frequency domain. A partial region with 58 natural street blocks in Yushu City was selected as the study area. Following the above method, HA was extracted, and SARDI was calculated and further classified into 3 classes. The results were validated against seismic damage classes interpreted manually from post-earthquake airborne high-resolution optical images, showing a total classification accuracy of 89.3% and a Kappa coefficient of 0.79, consistent with the actual seismic damage distribution. The results are also compared and discussed with building damage identified from SAR images by other authors.
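
    The damage-index step reduces to a simple area ratio per street block; the sketch below assumes binary highlight and built-up masks, illustrative class thresholds, and leaves the direction of the damage relation as an assumption.

    ```python
    import numpy as np

    def sardi(highlight_mask, block_building_mask):
        """Ratio of highlight (layover + bright line) pixels to built-up pixels."""
        sa = np.count_nonzero(block_building_mask)
        ha = np.count_nonzero(highlight_mask & block_building_mask)
        return ha / sa if sa else 0.0

    def damage_class(index, t1=0.33, t2=0.66):
        """Bin the index into three classes; thresholds (and whether a higher or
        lower index means heavier damage) are assumptions, not the paper's rule."""
        return 0 if index < t1 else (1 if index < t2 else 2)
    ```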

  5. Investigation on the use of artificial neural networks to overcome the effects of environmental and operational changes on guided waves monitoring

    NASA Astrophysics Data System (ADS)

    El Mountassir, M.; Yaacoubi, S.; Dahmene, F.

    2015-07-01

    Intelligent feature extraction and advanced signal processing techniques are necessary for a better interpretation of ultrasonic guided wave signals in both structural health monitoring (SHM) and nondestructive testing (NDT). Such signals are characterized by at least multi-modal and dispersive components. In addition, in SHM, these signals are highly vulnerable to environmental and operational conditions (EOCs) and can be severely affected. In this paper, we investigate the use of an Artificial Neural Network (ANN) to overcome these effects and to provide a reliable damage detection method with a minimum of false indications. An experimental case study (a full scale pipe) is presented. Damage sizes were increased and their shapes modified in different steps. Various parameters such as the number of inputs and the number of hidden neurons were studied to find the optimal configuration of the neural network.

  6. Crack orientation and depth estimation in a low-pressure turbine disc using a phased array ultrasonic transducer and an artificial neural network.

    PubMed

    Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang

    2013-09-13

    Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in the power plants, and knowing the orientation and depth of the initial cracks is essential for the evaluation of the crack growth rate, propagation direction and working life of the turbine disc. In this paper, a method based on phased array ultrasonic transducer and artificial neural network (ANN), is proposed to estimate both the depth and orientation of initial cracks in the turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and the feature vectors were extracted by wavelet packet, fractal technology and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the method presented was efficient in crack estimation tasks.

  7. Crack Orientation and Depth Estimation in a Low-Pressure Turbine Disc Using a Phased Array Ultrasonic Transducer and an Artificial Neural Network

    PubMed Central

    Yang, Xiaoxia; Chen, Shili; Jin, Shijiu; Chang, Wenshuang

    2013-01-01

    Stress corrosion cracks (SCC) in low-pressure steam turbine discs are serious hidden dangers to production safety in the power plants, and knowing the orientation and depth of the initial cracks is essential for the evaluation of the crack growth rate, propagation direction and working life of the turbine disc. In this paper, a method based on phased array ultrasonic transducer and artificial neural network (ANN), is proposed to estimate both the depth and orientation of initial cracks in the turbine discs. Echo signals from cracks with different depths and orientations were collected by a phased array ultrasonic transducer, and the feature vectors were extracted by wavelet packet, fractal technology and peak amplitude methods. The radial basis function (RBF) neural network was investigated and used in this application. The final results demonstrated that the method presented was efficient in crack estimation tasks. PMID:24064602

  8. Unsupervised Feature Learning With Winner-Takes-All Based STDP

    PubMed Central

    Ferré, Paul; Mamalet, Franck; Thorpe, Simon J.

    2018-01-01

    We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP) biological learning rule. We show equivalence between rank order coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods. PMID:29674961

  9. Computer vision system for egg volume prediction using backpropagation neural network

    NASA Astrophysics Data System (ADS)

    Siswantoro, J.; Hilman, M. Y.; Widiasri, M.

    2017-11-01

    Volume is one of the aspects considered in the egg sorting process. A rapid and accurate volume measurement method is needed to develop an egg sorting system. A computer vision system (CVS) provides a promising solution to the volume measurement problem. Artificial neural networks (ANNs) have been used to predict egg volume in several CVSs. However, volume prediction from an ANN can be less accurate due to inappropriate input features or an inappropriate ANN structure. This paper proposes a CVS for predicting the volume of an egg using an ANN. The CVS acquires an image of the egg from the top view and then processes the image to extract its 1D and 2D size features. The features are used as input to the ANN for predicting the volume of the egg. The experimental results show that the proposed CVS can predict egg volume with good accuracy and low computation time.
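
    The regression stage can be sketched as a small backpropagation network mapping measured 1D and 2D size features to volume; the feature list, scaling and network size below are assumptions rather than the paper's configuration.

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPRegressor

    # Assumed per-egg inputs: major/minor axis length, projected area, perimeter.
    volume_model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
    )
    # volume_model.fit(size_features, measured_volumes)
    # predicted_volume = volume_model.predict(new_size_features)
    ```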

  10. Augmenting Satellite Precipitation Estimation with Lightning Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahrooghy, Majid; Anantharaj, Valentine G; Younan, Nicolas H.

    2013-01-01

    We have used lightning information to augment the Precipitation Estimation from Remotely Sensed Imagery using an Artificial Neural Network - Cloud Classification System (PERSIANN-CCS). Co-located lightning data are used to segregate cloud patches, segmented from GOES-12 infrared data, into either electrified (EL) or non-electrified (NEL) patches. A set of features is extracted separately for the EL and NEL cloud patches. The features for the EL cloud patches include new features based on the lightning information. The cloud patches are classified and clustered using self-organizing maps (SOM). Then brightness temperature and rain rate (T-R) relationships are derived for the different clusters. Rain rates are estimated for the cloud patches based on their representative T-R relationship. The Equitable Threat Score (ETS) for daily precipitation estimates is improved by almost 12% for the winter season. In the summer, no significant improvements in ETS are noted.

  11. BagReg: Protein inference through machine learning.

    PubMed

    Zhao, Can; Liu, Dao; Teng, Ben; He, Zengyou

    2015-08-01

    Protein inference from the identified peptides is of primary importance in shotgun proteomics. The target of protein inference is to identify whether each candidate protein is truly present in the sample. To date, many computational methods have been proposed to solve this problem. However, there is still no method that can fully utilize the information hidden in the input data. In this article, we propose a learning-based method named BagReg for protein inference. The method first artificially extracts five features from the input data, and then chooses each feature in turn as the class feature to separately build models that predict the presence probabilities of proteins. Finally, the weak results from the five prediction models are aggregated to obtain the final result. We test our method on six publicly available data sets. The experimental results show that our method is superior to the state-of-the-art protein inference algorithms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    NASA Astrophysics Data System (ADS)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been used extensively in the recent past for gear fault diagnosis. The extracted vibration signals are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals helps fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective at denoising gear vibration signals. The extracted signals have to be denoised by selecting a proper denoising scheme in order to prevent the loss of signal information along with the noise. An approach has been made in this work to show the effectiveness of Principal Component Analysis (PCA) in denoising the gear vibration signal. In this regard, three selected wavelet based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD) and NeighCoeff (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the above mentioned four denoising schemes. The fault identification capability as well as the SNR, kurtosis and RMSE of the four denoising schemes have been compared. Features extracted from the denoised signals have been used to train and test artificial neural network (ANN) models. The performances of the four denoising schemes have been evaluated based on the performance of the ANN models, and the best denoising scheme has been identified based on the classification accuracy results. PCA was found to be effective in all regards and was the best denoising scheme.

  13. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    PubMed

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be avoided. Data, extraction algorithms and evaluation routines were released as part of the fecgsyn toolbox on Physionet under a GNU GPL open-source license. This contribution provides a standard framework for benchmarking and regulatory testing of NI-FECG extraction algorithms.

  14. Appearance and characterization of fruit image textures for quality sorting using wavelet transform and genetic algorithms.

    PubMed

    Khoje, Suchitra

    2018-02-01

    Images of four quality grades of mangoes and guavas are evaluated for color and textural features to characterize and classify them, and to model fruit appearance grading. The paper discusses three approaches to identifying the most discriminating texture features of both fruits. In the first approach, the fruit's color and texture features are selected using the Mahalanobis distance. A total of 20 color features and 40 textural features are extracted for analysis. Using Mahalanobis distance and feature intercorrelation analyses, one best color feature (mean of a* [L*a*b* color space]) and two textural features (energy of a*, contrast of H*) are selected for guava, while two best color features (R std, H std) and one textural feature (energy of b*) are selected for mango, with the highest discriminative power. The second approach studies some common wavelet families to search for the best classification model for fruit quality grading. The wavelet features extracted from five basic mother wavelets (db, bior, rbior, Coif, Sym) are explored to characterize fruit texture appearance. In the third approach, a genetic algorithm is used to select, from a large universe of features, only those color and wavelet texture features that are relevant to the separation of the classes. The study shows that image color and texture features identified using a genetic algorithm can distinguish between the various quality classes of fruits. The experimental results show that a support vector machine classifier is selected for guava grading with an accuracy of 97.61%, and an artificial neural network for mango grading with an accuracy of 95.65%. The proposed method is a nondestructive fruit quality assessment method. The experimental results have proven that the genetic algorithm along with wavelet texture features has the potential to discriminate fruit quality. Finally, it can be concluded that the discussed method is an accurate, reliable, and objective tool to determine fruit quality, namely for mango and guava, and might be applicable to in-line sorting systems. © 2017 Wiley Periodicals, Inc.

  15. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    PubMed

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that the former possess a natural ability to mimic biological behavior and in that way aid ASR modeling and processing. Current learning based ASR techniques are evolving further with the incorporation of big data and IoT like concepts. Here, in this paper, we report certain approaches based on machine learning (ML) used for the extraction of relevant samples from a big data space, and apply them to ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms using raw speech, extracted features and frequency domain forms is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, from a large storage, relevant samples are selected and assimilated. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time. It is found that the proposed ML based sentence extraction techniques and the composite feature set used with the RNN as classifier outperform all other approaches. By using the ANN in FF form as feature extractor, the performance of the system is evaluated and a comparison is made. Experimental results show that the application of big data samples has enhanced the learning of the ASR system. Further, the ANN based sample and feature extraction techniques are found to be efficient enough to enable the application of ML techniques in big data aspects of ASR systems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Object-Oriented Analysis of Satellite Images Using Artificial Neural Networks for Post-Earthquake Buildings Change Detection

    NASA Astrophysics Data System (ADS)

    Khodaverdi zahraee, N.; Rastiveis, H.

    2017-09-01

    Earthquakes are among the most devastating natural events that threaten human life. After an earthquake, information about the damaged area and the amount and type of damage can be a great help to disaster managers in relief and reconstruction. It is very important that these measures be taken immediately after the earthquake, because any delay can increase the losses. The purpose of this paper is to propose and implement an automatic approach for mapping destroyed buildings after an earthquake using pre- and post-event high resolution satellite images. In the proposed method, after preprocessing, segmentation of both images is performed using the multi-resolution segmentation technique. Then, the segmentation results are intersected in ArcGIS to obtain identical image objects in both images. After that, appropriate textural features, which better differentiate changed from unchanged areas, are calculated for all the image objects. Finally, the extracted textural features of the pre- and post-event images are subtracted, and the resulting values are applied as an input feature vector to an artificial neural network for classifying the area into two classes of changed and unchanged areas. The proposed method was evaluated using WorldView-2 satellite images acquired before and after the 2010 Haiti earthquake. The overall accuracy of 93% proved the ability of the proposed method for post-earthquake building change detection.

  17. Prediction of the Passive Intestinal Absorption of Medicinal Plant Extract Constituents with the Parallel Artificial Membrane Permeability Assay (PAMPA).

    PubMed

    Petit, Charlotte; Bujard, Alban; Skalicka-Woźniak, Krystyna; Cretton, Sylvian; Houriet, Joëlle; Christen, Philippe; Carrupt, Pierre-Alain; Wolfender, Jean-Luc

    2016-03-01

    At the early drug discovery stage, the high-throughput parallel artificial membrane permeability assay is one of the most frequently used in vitro models to predict transcellular passive absorption. While thousands of new chemical entities have been screened with the parallel artificial membrane permeability assay, in general, permeation properties of natural products have been scarcely evaluated. In this study, the parallel artificial membrane permeability assay through a hexadecane membrane was used to predict the passive intestinal absorption of a representative set of frequently occurring natural products. Since natural products are usually ingested for medicinal use as components of complex extracts in traditional herbal preparations or as phytopharmaceuticals, the applicability of such an assay to study the constituents directly in medicinal crude plant extracts was further investigated. Three representative crude plant extracts with different natural product compositions were chosen for this study. The first extract was composed of furanocoumarins (Angelica archangelica), the second extract included alkaloids (Waltheria indica), and the third extract contained flavonoid glycosides (Pueraria montana var. lobata). For each medicinal plant, the effective passive permeability values Pe (cm/s) of the main natural products of interest were rapidly calculated thanks to a generic ultrahigh-pressure liquid chromatography-UV detection method and because Pe calculations do not require knowing precisely the concentration of each natural product within the extracts. The original parallel artificial membrane permeability assay through a hexadecane membrane was found to keep its predictive power when applied to constituents directly in crude plant extracts provided that higher quantities of the extract were initially loaded in the assay in order to ensure suitable detection of the individual constituents of the extracts. Such an approach is thus valuable for the high-throughput, cost-effective, and early evaluation of passive intestinal absorption of active principles in medicinal plants. In phytochemical studies, obtaining effective passive permeability values of pharmacologically active natural products is important to predict if natural products showing interesting activities in vitro may have a chance to reach their target in vivo. Georg Thieme Verlag KG Stuttgart · New York.

  18. Texture classification of lung computed tomography images

    NASA Astrophysics Data System (ADS)

    Pheng, Hang See; Shamsuddin, Siti M.

    2013-03-01

    The development of algorithms for computer-aided diagnosis (CAD) schemes is growing rapidly to assist the radiologist in medical image interpretation. Texture analysis of computed tomography (CT) scans is an important preliminary stage in computerized detection systems and classification for lung cancer. Among different types of image feature analysis, Haralick texture features with a variety of statistical measures have been used widely in image texture description. The extraction of texture feature values is essential for a CAD, especially in the classification of normal and abnormal tissue on cross-sectional CT images. This paper aims to compare experimental results using texture extraction and different machine learning methods in the classification of normal and abnormal tissues in lung CT images. The machine learning methods involved in this assessment are the Artificial Immune Recognition System (AIRS), Naive Bayes, Decision Tree (J48) and Backpropagation Neural Network. AIRS is found to provide high accuracy (99.2%) and sensitivity (98.0%) in the assessment. For experiment and testing purposes, publicly available datasets in the Reference Image Database to Evaluate Therapy Response (RIDER) are used as study cases.

  19. Morphological learning in a novel language: A cross-language comparison.

    PubMed

    Havas, Viktória; Waris, Otto; Vaquero, Lucía; Rodríguez-Fornells, Antoni; Laine, Matti

    2015-01-01

    Being able to extract and interpret the internal structure of complex word forms such as the English word dance+r+s is crucial for successful language learning. We examined whether the ability to extract morphological information during word learning is affected by the morphological features of one's native tongue. Spanish and Finnish adult participants performed a word-picture associative learning task in an artificial language where the target words included a suffix marking the gender of the corresponding animate object. The short exposure phase was followed by a word recognition task and a generalization task for the suffix. The participants' native tongues vary greatly in terms of morphological structure, leading to two opposing hypotheses. On the one hand, Spanish speakers may be more effective in identifying gender in a novel language because this feature is present in Spanish but not in Finnish. On the other hand, Finnish speakers may have an advantage as the abundance of bound morphemes in their language calls for continuous morphological decomposition. The results support the latter alternative, suggesting that lifelong experience on morphological decomposition provides an advantage in novel morphological learning.

  20. Derivation of an artificial gene to improve classification accuracy upon gene selection.

    PubMed

    Seo, Minseok; Oh, Sejong

    2012-02-01

    Classification analysis has been developed continuously since 1936. The field has advanced through the development of classifiers such as KNN, ANN, and SVM, as well as through progress in data preprocessing. Feature (gene) selection is required for very high-dimensional data such as microarray data before classification. The goal of feature selection is to choose a subset of informative features that reduces processing time and provides higher classification accuracy. In this study, we devised a method of artificial gene making (AGM) for microarray data to improve classification accuracy. Our artificial gene was derived from the whole microarray dataset and combined with the result of gene selection for classification analysis. We experimentally confirmed a clear improvement in classification accuracy after inserting the artificial gene. Our artificial gene worked well with popular feature (gene) selection algorithms and classifiers. The proposed approach can be applied to any type of high-dimensional dataset. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive quality estimation of packaged goods for industrial quality monitoring. An active MMW imaging radar operating at 60 GHz was designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. State-of-the-art computer vision feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray-level co-occurrence matrix texture (GLCM), and histogram of oriented gradients (HOG), were compared with respect to their ability to generate efficient and discriminative feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show that the HOG feature extraction technique is well suited to non-destructive quality inspection, with an appreciably lower false-alarm rate than the other techniques. A robust and optimal image feature-based neural network classification model has thereby been proposed for non-invasive, automatic fault monitoring in industrial settings.
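
    The following sketch illustrates the best-performing combination reported above, HOG features with an ANN classifier, using scikit-image and scikit-learn. It is not the authors' implementation; the MMW image tiles, their size, and the five class labels are placeholder assumptions.

```python
# Minimal sketch: HOG feature vectors from image tiles feeding a small MLP classifier.
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
images = rng.random((60, 64, 64))          # stand-in for reconstructed MMW images
labels = rng.integers(0, 5, 60)            # four crack types + non-faulty

X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("validation accuracy:", ann.score(X_te, y_te))
```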

  2. Regional shape-based feature space for segmenting biomedical images using neural networks

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Gopal; Hoford, John D.; Hoffman, Eric A.

    1993-07-01

    In biomedical images, structures of interest, particularly soft tissue structures such as the heart, airways, and bronchial and arterial trees, often have gray-scale and textural characteristics similar to other structures in the image, making it difficult to segment them using only gray-scale and texture information. However, these objects can be visually recognized by their unique shapes and sizes. In this paper we discuss what we believe to be a novel, simple scheme for extracting features based on regional shapes. To test the effectiveness of these features for image segmentation (classification), we use an artificial neural network and a statistical cluster analysis technique. The proposed shape-based feature extraction algorithm computes regional shape vectors (RSVs) for all pixels that meet a certain threshold criterion. The distance from each such pixel to a boundary is computed in 8 directions (or in 26 directions for a 3-D image). Together, these 8 (or 26) values represent the pixel's (or voxel's) RSV. All RSVs from an image are used to train a multi-layer perceptron neural network, which uses these features to 'learn' a suitable classification strategy. To clearly distinguish the desired object from other objects within an image, several examples from inside and outside the desired object are used for training. Several examples are presented to illustrate the strengths and weaknesses of our algorithm. Both synthetic and actual biomedical images are considered. Future extensions to this algorithm are also discussed.
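
    A minimal re-implementation of the RSV idea as it is described in the abstract (not the authors' code) is sketched below: for each pixel inside the thresholded region, run lengths in 8 directions to the region boundary form the pixel's feature vector, which could then be passed to a multi-layer perceptron.

```python
# Sketch of regional shape vectors (RSVs): 8-direction run lengths to the boundary.
import numpy as np

DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def regional_shape_vectors(mask):
    """Return an (n_pixels, 8) array of distances to the region boundary."""
    h, w = mask.shape
    vectors, coords = [], np.argwhere(mask)
    for r, c in coords:
        rsv = []
        for dr, dc in DIRS:
            d, rr, cc = 0, r + dr, c + dc
            while 0 <= rr < h and 0 <= cc < w and mask[rr, cc]:
                d, rr, cc = d + 1, rr + dr, cc + dc
            rsv.append(d)
        vectors.append(rsv)
    return np.asarray(vectors), coords

# Toy example: a filled disc; central pixels get large, nearly isotropic RSVs.
yy, xx = np.mgrid[:32, :32]
disc = (yy - 16) ** 2 + (xx - 16) ** 2 < 100
rsvs, coords = regional_shape_vectors(disc)
print(rsvs.shape, rsvs[len(rsvs) // 2])
```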

  3. Unsupervised Fault Diagnosis of a Gear Transmission Chain Using a Deep Belief Network

    PubMed Central

    He, Jun; Yang, Shixi; Gan, Chunbiao

    2017-01-01

    Artificial intelligence (AI) techniques, which can effectively analyze massive amounts of fault data and automatically provide accurate diagnosis results, have been widely applied to fault diagnosis of rotating machinery. Conventional AI methods are applied using features selected by a human operator, which are manually extracted based on diagnostic techniques and field expertise. However, developing robust features for each diagnostic purpose is often labour-intensive and time-consuming, and the features extracted for one specific task may be unsuitable for others. In this paper, a novel AI method based on a deep belief network (DBN) is proposed for the unsupervised fault diagnosis of a gear transmission chain, and the genetic algorithm is used to optimize the structural parameters of the network. Compared to the conventional AI methods, the proposed method can adaptively exploit robust features related to the faults by unsupervised feature learning, thus requires less prior knowledge about signal processing techniques and diagnostic expertise. Besides, it is more powerful at modelling complex structured data. The effectiveness of the proposed method is validated using datasets from rolling bearings and gearbox. To show the superiority of the proposed method, its performance is compared with two well-known classifiers, i.e., back propagation neural network (BPNN) and support vector machine (SVM). The fault classification accuracies are 99.26% for rolling bearings and 100% for gearbox when using the proposed method, which are much higher than that of the other two methods. PMID:28677638

  4. Unsupervised Fault Diagnosis of a Gear Transmission Chain Using a Deep Belief Network.

    PubMed

    He, Jun; Yang, Shixi; Gan, Chunbiao

    2017-07-04

    Artificial intelligence (AI) techniques, which can effectively analyze massive amounts of fault data and automatically provide accurate diagnosis results, have been widely applied to fault diagnosis of rotating machinery. Conventional AI methods are applied using features selected by a human operator, which are manually extracted based on diagnostic techniques and field expertise. However, developing robust features for each diagnostic purpose is often labour-intensive and time-consuming, and the features extracted for one specific task may be unsuitable for others. In this paper, a novel AI method based on a deep belief network (DBN) is proposed for the unsupervised fault diagnosis of a gear transmission chain, and the genetic algorithm is used to optimize the structural parameters of the network. Compared to the conventional AI methods, the proposed method can adaptively exploit robust features related to the faults by unsupervised feature learning, thus requires less prior knowledge about signal processing techniques and diagnostic expertise. Besides, it is more powerful at modelling complex structured data. The effectiveness of the proposed method is validated using datasets from rolling bearings and gearbox. To show the superiority of the proposed method, its performance is compared with two well-known classifiers, i.e., back propagation neural network (BPNN) and support vector machine (SVM). The fault classification accuracies are 99.26% for rolling bearings and 100% for gearbox when using the proposed method, which are much higher than that of the other two methods.
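
    scikit-learn has no deep belief network, so the sketch below approximates the idea described above with greedily stacked restricted Boltzmann machines followed by a simple supervised classifier; the genetic-algorithm structure search used in the paper is omitted and the vibration data are random placeholders scaled to [0, 1].

```python
# Rough analogue of DBN feature learning: stacked RBMs + a simple classifier.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(2)
X = rng.random((200, 256))        # stand-in for normalized vibration segments
y = rng.integers(0, 4, 200)       # stand-in fault classes

dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, y)
print("training accuracy:", dbn_like.score(X, y))
```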

  5. Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch

    PubMed Central

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Halim, Zaini Abdul; Ibrahim, Haidi; Ali, Syed Salim Syed

    2012-01-01

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested during optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Then the color features were extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training ANN with full features and training ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that compared with using full features in ANN, using the ANN trained with reduced features can improve the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category. PMID:23202043

  6. Intelligent color vision system for ripeness classification of oil palm fresh fruit bunch.

    PubMed

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Abdul Halim, Zaini; Ibrahim, Haidi; Syed Ali, Syed Salim

    2012-10-22

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested during optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Then the color features were extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training ANN with full features and training ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that compared with using full features in ANN, using the ANN trained with reduced features can improve the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category.
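
    The sketch below mirrors the comparison described above, an ANN trained on the full color-feature set versus one trained on PCA-reduced features, using scikit-learn; the feature values and ripeness labels are synthetic stand-ins for the FFB data.

```python
# Minimal sketch: ANN with full color features vs. ANN with PCA-reduced features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.random((150, 12))      # e.g. mean/std of RGB and HSV channels per bunch
y = rng.integers(0, 3, 150)    # e.g. under-ripe / ripe / over-ripe

full = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
reduced = make_pipeline(PCA(n_components=5),
                        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                      random_state=0))

for name, model in [("full features", full), ("PCA-reduced", reduced)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```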

  7. An artificial intelligence based improved classification of two-phase flow patterns with feature extracted from acquired images.

    PubMed

    Shanthi, C; Pappa, N

    2017-05-01

    Flow pattern recognition is necessary to select design equations for finding operating details of the process and to perform computational simulations. Visual image processing can be used to automate the interpretation of patterns in two-phase flow. In this paper, an attempt has been made to improve the classification accuracy of gas/liquid two-phase flow patterns using fuzzy logic and a Support Vector Machine (SVM) with Principal Component Analysis (PCA). Videos of six different types of flow patterns, namely annular flow, bubble flow, churn flow, plug flow, slug flow, and stratified flow, were recorded for a period of time and converted to 2D images for processing. The textural and shape features extracted using image processing are applied as inputs to various classification schemes, namely fuzzy logic, SVM, and SVM with PCA, in order to identify the type of flow pattern. The results obtained are compared, and it is observed that the SVM with features reduced using PCA gives better classification accuracy and is computationally less intensive than the other two schemes. The results of this study are relevant to industrial applications, including oil and gas and other gas-liquid two-phase flows. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
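
    A hedged sketch of the comparison above (omitting the fuzzy classifier): an RBF-kernel SVM on the raw texture and shape features versus the same SVM preceded by PCA. Feature values and the six flow-regime labels are random placeholders.

```python
# Minimal sketch: SVM vs. PCA + SVM on texture/shape features of flow images.
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.random((300, 24))      # textural + shape features per frame
y = rng.integers(0, 6, 300)    # annular, bubble, churn, plug, slug, stratified

svm_plain = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
svm_pca = make_pipeline(StandardScaler(), PCA(n_components=8), SVC(kernel="rbf"))

for name, model in [("SVM", svm_plain), ("PCA + SVM", svm_pca)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```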

  8. In vitro biocompatibility of EPM and EPDM rubbers.

    PubMed

    Mast, F; Hoschtitzky, J A; Van Blitterswijk, C A; Huysmans, H A

    1997-01-01

    The in vitro toxicity of two EPDM rubbers (K 778 and K 4802) and one EPM rubber (K 740) was tested using human fibroblasts. The modulus of elasticity of each rubber was varied by exposure to different amounts of electron-beam radiation (0, 5 and 10 Mrad). The short-term in vitro toxicity was tested by culturing cells on polymer films. The long-term effect of ageing was simulated by growing fibroblasts in nutrient media prepared from extracts of heat-exposed materials. Cell cultures were studied both quantitatively and (ultra) structurally. Growth curves obtained in the toxicity test did not differ significantly from control values at any day of observation, and also showed that electron-beam radiation did not alter the biocompatibility. The same results were found for all but one material in the artificial ageing test. The number of cells in the K4802/10 Mrad extraction medium was decreased. Ultrastructurally no gross deviations from normal morphology were observed, either in the direct contact test or in the artificial ageing test. The most characteristic feature was a somewhat dilated endoplasmic reticulum. In summary, the in vitro biocompatibility of EPDM-rubbers as observed in this study is satisfactory and motivates further investigation of their biocompatibility in animal experiments.

  9. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    PubMed

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple, fast, and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, however, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source codes of all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
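
    One simple procedure of the kind evaluated above can be sketched as follows, under the assumption that the relevance score is a random-forest feature importance: the class labels are permuted repeatedly, importances are recomputed, and each observed importance is converted into an empirical p-value against that null distribution. This illustrates the general idea, not the authors' code.

```python
# Sketch: empirical p-values for feature importances via label permutation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def importance_pvalues(X, y, n_permutations=100, random_state=0):
    rng = np.random.default_rng(random_state)
    rf = RandomForestClassifier(n_estimators=100, random_state=random_state)
    observed = rf.fit(X, y).feature_importances_
    null = np.empty((n_permutations, X.shape[1]))
    for i in range(n_permutations):
        null[i] = rf.fit(X, rng.permutation(y)).feature_importances_
    # p-value: how often a permuted importance reaches the observed one
    return (1 + (null >= observed).sum(axis=0)) / (1 + n_permutations)

rng = np.random.default_rng(5)
X = rng.random((80, 30))
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # only the first two features are informative
print(importance_pvalues(X, y).round(3)[:5])
```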

  10. Towards automatic musical instrument timbre recognition

    NASA Astrophysics Data System (ADS)

    Park, Tae Hong

    This dissertation comprises two parts, focusing on issues concerning research and development of an artificial system for automatic musical instrument timbre recognition and on musical compositions. The technical part of the essay includes a detailed record of developed and implemented algorithms for feature extraction and pattern recognition. A review of existing literature introducing historical aspects surrounding timbre research, problems associated with a number of timbre definitions, and highlights of selected research activities that have had significant impact in this field are also included. The developed timbre recognition system follows a bottom-up, data-driven model that includes a pre-processing module, a feature extraction module, and a RBF/EBF (Radial/Elliptical Basis Function) neural network-based pattern recognition module. 829 monophonic samples from 12 instruments have been chosen from the Peter Siedlaczek library (Best Service) and other samples from the Internet and personal collections. Significant emphasis has been put on feature extraction development and testing to achieve robust and consistent feature vectors that are eventually passed to the neural network module. In order to avoid a garbage-in-garbage-out (GIGO) trap and improve generality, extra care was taken in designing and testing the developed algorithms using various dynamics, different playing techniques, and a variety of pitches for each instrument, with inclusion of attack and steady-state portions of a signal. Most of the research and development was conducted in Matlab. The compositional part of the essay includes brief introductions to "A d'Ess Are," "Aboji," "48 13 N, 16 20 O," and "pH-SQ." A general outline pertaining to the ideas and concepts behind the architectural designs of the pieces, including formal structures, time structures, orchestration methods, and pitch structures, is also presented.

  11. Segmentation, feature extraction, and multiclass brain tumor classification.

    PubMed

    Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal

    2013-12-01

    Multiclass brain tumor classification is performed using a diversified dataset of 428 post-contrast T1-weighted MR images from 55 patients. These images are of primary brain tumors, namely astrocytoma (AS), glioblastoma multiforme (GBM), childhood tumor-medulloblastoma (MED), meningioma (MEN), secondary tumor-metastatic (MET), and normal regions (NR). Eight hundred fifty-six regions of interest (SROIs) are extracted by a content-based active contour model. Two hundred eighteen intensity and texture features are extracted from these SROIs. In this study, principal component analysis (PCA) is used to reduce the dimensionality of the feature space. The six classes are then classified by an artificial neural network (ANN); hence, this approach is named the PCA-ANN approach. Three sets of experiments have been performed. In the first experiment, classification is performed using the ANN approach alone. In the second experiment, the PCA-ANN approach with random sub-sampling is used, in which SROIs from the same patient may be repeated during testing. It is observed that the classification accuracy has increased from 77 to 91 %. PCA-ANN has delivered high accuracy for each class: AS-90.74 %, GBM-88.46 %, MED-85 %, MEN-90.70 %, MET-96.67 %, and NR-93.78 %. In the third experiment, to remove bias and to test the robustness of the proposed system, the data are partitioned such that SROIs from the same patient are not shared between the training and testing sets. In this case also, the proposed system has performed well, delivering an overall accuracy of 85.23 %. The individual class accuracies are: AS-86.15 %, GBM-65.1 %, MED-63.36 %, MEN-91.5 %, MET-65.21 %, and NR-93.3 %. A computer-aided diagnostic system comprising the developed methods for segmentation, feature extraction, and classification of brain tumors can be beneficial to radiologists for precise localization, diagnosis, and interpretation of brain tumors on MR images.
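
    The sketch below illustrates the patient-wise evaluation emphasized in the third experiment: PCA-reduced features classified by an ANN, with grouped cross-validation ensuring that SROIs from the same patient never appear in both training and testing folds. Features, labels, and patient identifiers are synthetic placeholders, not the study data.

```python
# Sketch: PCA + ANN with patient-wise (grouped) cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(6)
X = rng.random((856, 218))          # 218 features per segmented ROI (SROI)
y = rng.integers(0, 6, 856)         # AS, GBM, MED, MEN, MET, NR
patients = rng.integers(0, 55, 856) # patient ID for each SROI

pca_ann = make_pipeline(PCA(n_components=40),
                        MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                                      random_state=0))
scores = cross_val_score(pca_ann, X, y, groups=patients, cv=GroupKFold(n_splits=5))
print("patient-wise accuracy:", scores.mean())
```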

  12. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
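
    For orientation, the conventional baseline mentioned above, subpixel motion estimation by upsampled cross-correlation, can be sketched with scikit-image as follows; the paper's faster Taylor-approximation and localization refinements are not reproduced, and the two frames are synthetic.

```python
# Sketch: subpixel displacement between two frames via upsampled cross-correlation.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(7)
frame0 = rng.random((128, 128))
true_shift = (0.37, -0.82)                    # known subpixel displacement (rows, cols)
frame1 = nd_shift(frame0, true_shift, order=3)

est, error, _ = phase_cross_correlation(frame0, frame1, upsample_factor=100)
print("applied shift:", true_shift)
# The estimate matches in magnitude; its sign follows skimage's registration convention.
print("estimated shift:", est)
```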

  13. Perception of artificial conspecifics by bearded dragons (Pogona vitticeps).

    PubMed

    Frohnwieser, Anna; Pike, Thomas W; Murray, John C; Wilkinson, Anna

    2018-01-09

    Artificial animals are increasingly used as conspecific stimuli in animal behavior research. However, researchers often have an incomplete understanding of how the species under study perceives conspecifics, and hence which features are needed for a stimulus to be perceived appropriately. To investigate the features to which bearded dragons (Pogona vitticeps) attend, we measured their lateralized eye use when assessing a successive range of stimuli. These ranged through several stages of realism in artificial conspecifics, to see how features such as color, the presence of eyes, body shape, and motion influence behavior. We found differences in lateralized eye use depending on the sex of the observing bearded dragon and the artificial conspecific, as well as the artificial conspecific's behavior. Therefore, this approach can inform the design of robotic animals that elicit biologically-meaningful responses in live animals. This article is protected by copyright. All rights reserved.

  14. Tuberculosis disease diagnosis using artificial immune recognition system.

    PubMed

    Shamshirband, Shahaboddin; Hessam, Somayeh; Javidnia, Hossein; Amiribesheli, Mohsen; Vahdat, Shaghayegh; Petković, Dalibor; Gani, Abdullah; Kiah, Miss Laiha Mat

    2014-01-01

    Conventional methods for diagnosing tuberculosis (TB) carry a high risk of misdiagnosis. This study is aimed at diagnosing TB using hybrid machine learning approaches. Patient epicrisis reports obtained from the Pasteur Laboratory in the north of Iran were used. All 175 samples have twenty features. The features are classified by incorporating a fuzzy logic controller and an artificial immune recognition system. The features are normalized through a fuzzy rule-based labeling system. The labeled features are categorized into normal and tuberculosis classes using the Artificial Immune Recognition Algorithm. Overall, the highest classification accuracy was reached for the 0.8 learning rate (α) values. The artificial immune recognition system (AIRS) classification approaches using fuzzy logic also yielded better diagnosis results in terms of detection accuracy compared to other empirical methods. Classification accuracy was 99.14%, sensitivity 87.00%, and specificity 86.12%.

  15. Combining deep residual neural network features with supervised machine learning algorithms to classify diverse food image datasets.

    PubMed

    McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne

    2018-04-01

    Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method to maintain a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from food image datasets: Food 5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including an artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that ResNet-152 deep features with an RBF-kernel SVM can detect food items with 99.4% accuracy on the Food-5K validation dataset and 98.8% on the Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, the ANN achieves 91.34% and 99.28% accuracy on the Food-11 and RawFooT-DB food image datasets, respectively, and the RBF-kernel SVM achieves 64.98% on the Food-101 image dataset. From this research it is clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented in this research shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
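
    A minimal sketch of the feature-extraction step, assuming torchvision's pretrained ResNet-152 (the paper used MatConvNet) and an RBF-kernel SVM: the final classification layer is replaced with an identity so the 2048-dimensional pooled features can be fed to the SVM. Images and labels are random stand-ins for a food/non-food task such as Food-5K.

```python
# Sketch: pretrained ResNet-152 deep features + RBF SVM (torchvision/scikit-learn).
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from PIL import Image
from sklearn.svm import SVC

weights = models.ResNet152_Weights.IMAGENET1K_V1
resnet = models.resnet152(weights=weights)
resnet.fc = nn.Identity()            # expose the 2048-d pooled features
resnet.eval()
preprocess = weights.transforms()    # ImageNet resize/crop/normalize preset

@torch.no_grad()
def deep_features(pil_images):
    batch = torch.stack([preprocess(im) for im in pil_images])
    return resnet(batch).numpy()

# Random stand-ins for food / non-food photographs.
rng = np.random.default_rng(0)
images = [Image.fromarray(rng.integers(0, 256, (256, 256, 3), dtype=np.uint8))
          for _ in range(8)]
labels = [0, 1] * 4

svm = SVC(kernel="rbf").fit(deep_features(images), labels)
print("training accuracy:", svm.score(deep_features(images), labels))
```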

  16. Geometrical features assessment of liver's tumor with application of artificial neural network evolved by imperialist competitive algorithm.

    PubMed

    Keshavarz, M; Mojra, A

    2015-05-01

    Geometrical features of a cancerous tumor embedded in biological soft tissue, including tumor size and depth, are a necessity in the follow-up procedure and making suitable therapeutic decisions. In this paper, a new socio-politically motivated global search strategy which is called imperialist competitive algorithm (ICA) is implemented to train a feed forward neural network (FFNN) to estimate the tumor's geometrical characteristics (FFNNICA). First, a viscoelastic model of liver tissue is constructed by using a series of in vitro uniaxial and relaxation test data. Then, 163 samples of the tissue including a tumor with different depths and diameters are generated by making use of PYTHON programming to link the ABAQUS and MATLAB together. Next, the samples are divided into 123 samples as training dataset and 40 samples as testing dataset. Training inputs of the network are mechanical parameters extracted from palpation of the tissue through a developing noninvasive technology called artificial tactile sensing (ATS). Last, to evaluate the FFNNICA performance, outputs of the network including tumor's depth and diameter are compared with desired values for both training and testing datasets. Deviations of the outputs from desired values are calculated by a regression analysis. Statistical analysis is also performed by measuring Root Mean Square Error (RMSE) and Efficiency (E). RMSE in diameter and depth estimations are 0.50 mm and 1.49, respectively, for the testing dataset. Results affirm that the proposed optimization algorithm for training neural network can be useful to characterize soft tissue tumors accurately by employing an artificial palpation approach. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Convolutional neural network for high-accuracy functional near-infrared spectroscopy in a brain-computer interface: three-class classification of rest, right-, and left-hand motor execution.

    PubMed

    Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong

    2018-01-01

    The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). In order to improve the performance of the BCI system in terms of accuracy, the ability to discriminate features from input signals and proper classification are desired. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured on eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvement in classification accuracy was achieved by CNN compared with SVM and ANN, respectively.
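
    A compact PyTorch sketch of the idea above: a small 1-D convolutional network acting directly on multichannel fNIRS time series as an automatic feature extractor and three-class classifier (rest, right-, and left-hand motor execution). The channel count, window length, and layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Sketch: 1-D CNN as automatic feature extractor and 3-class classifier for fNIRS.
import torch
import torch.nn as nn

class FNIRSNet(nn.Module):
    def __init__(self, n_channels=20, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = FNIRSNet()
dummy = torch.randn(8, 20, 100)      # 8 trials, 20 channels, 100 time samples
print(model(dummy).shape)            # -> torch.Size([8, 3])
```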

  18. Initial development of a computer-aided diagnosis tool for solitary pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Catarious, David M., Jr.; Baydush, Alan H.; Floyd, Carey E., Jr.

    2001-07-01

    This paper describes the development of a computer-aided diagnosis (CAD) tool for solitary pulmonary nodules. This CAD tool is built upon physically meaningful features that were selected because of their relevance to shape and texture. These features included a modified version of the Hotelling statistic (HS), a channelized HS, three measures of fractal properties, two measures of spicularity, and three manually measured shape features. These features were measured from a difficult database consisting of 237 regions of interest (ROIs) extracted from digitized chest radiographs. The center of each 256x256 pixel ROI contained a suspicious lesion which was sent to follow-up by a radiologist and whose nature was later clinically determined. Linear discriminant analysis (LDA) was used to search the feature space via sequential forward search using percentage correct as the performance metric. An optimized feature subset, selected for the highest accuracy, was then fed into a three layer artificial neural network (ANN). The ANN's performance was assessed by receiver operating characteristic (ROC) analysis. A leave-one-out testing/training methodology was employed for the ROC analysis. The performance of this system is competitive with that of three radiologists on the same database.
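
    The feature-search step described above can be sketched with scikit-learn as sequential forward selection driven by a linear discriminant classifier, followed by a small neural network evaluated with leave-one-out cross-validation; the nine candidate features are represented by random placeholders, not the measured shape and texture values.

```python
# Sketch: sequential forward selection with LDA, then an ANN with leave-one-out testing.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score, LeaveOneOut

rng = np.random.default_rng(8)
X = rng.random((237, 9))      # one row per ROI, one column per candidate feature
y = rng.integers(0, 2, 237)   # benign vs. malignant nodule

sfs = SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                                n_features_to_select=4, direction="forward",
                                scoring="accuracy", cv=5).fit(X, y)
X_sel = sfs.transform(X)

ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
print("LOO accuracy:", cross_val_score(ann, X_sel, y, cv=LeaveOneOut()).mean())
```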

  19. Comparison between wavelet and wavelet packet transform features for classification of faults in distribution system

    NASA Astrophysics Data System (ADS)

    Arvind, Pratul

    2012-11-01

    The ability to identify and classify all ten types of faults in a distribution system is an important task for protection engineers. Unlike transmission systems, distribution systems have a complex configuration and are subjected to frequent faults. In the present work, an algorithm has been developed for identifying all ten types of faults in a distribution system by collecting current samples at the substation end. The samples are processed with the wavelet packet transform and an artificial neural network in order to yield better classification results. A comparison of results between the wavelet transform and the wavelet packet transform is also presented, demonstrating that the features extracted with the wavelet packet transform yield promising results. It should also be noted that the current samples are collected after simulating a 25 kV distribution system in PSCAD software.
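
    A minimal sketch of the feature-extraction step, assuming (as is common, though not stated above) that the energy of each terminal wavelet-packet node of a current window is used as one element of the feature vector fed to the ANN; the waveforms, wavelet choice, and decomposition level are placeholder assumptions.

```python
# Sketch: wavelet-packet band energies as features for an ANN fault classifier.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wpt_energy_features(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    return energies / energies.sum()      # normalized band energies, length 2**level

rng = np.random.default_rng(9)
signals = rng.standard_normal((100, 512))   # stand-in substation current windows
labels = rng.integers(0, 10, 100)           # ten fault types

X = np.vstack([wpt_energy_features(s) for s in signals])
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", ann.score(X, labels))
```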

  20. Classification of burst and suppression in the neonatal electroencephalogram

    NASA Astrophysics Data System (ADS)

    Löfhede, J.; Löfgren, N.; Thordstein, M.; Flisberg, A.; Kjellmer, I.; Lindecrantz, K.

    2008-12-01

    Fisher's linear discriminant (FLD), a feed-forward artificial neural network (ANN) and a support vector machine (SVM) were compared with respect to their ability to distinguish bursts from suppressions in electroencephalograms (EEG) displaying a burst-suppression pattern. Five features extracted from the EEG were used as inputs. The study was based on EEG signals from six full-term infants who had suffered from perinatal asphyxia, and the methods have been trained with reference data classified by an experienced electroencephalographer. The results are summarized as the area under the curve (AUC), derived from receiver operating characteristic (ROC) curves for the three methods. Based on this, the SVM performs slightly better than the others. Testing the three methods with combinations of increasing numbers of the five features shows that the SVM handles the increasing amount of information better than the other methods.
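
    The comparison above can be sketched as follows: Fisher's linear discriminant, a feed-forward ANN, and an SVM trained on the same five features and scored by the area under the ROC curve. The EEG features and burst/suppression labels are synthetic placeholders.

```python
# Sketch: FLD vs. ANN vs. SVM compared by ROC AUC on five features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(10)
X = rng.random((400, 5))              # five features per EEG segment
y = (X @ rng.random(5) + 0.1 * rng.standard_normal(400) > 1.25).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "FLD": LinearDiscriminantAnalysis(),
    "ANN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
for name, m in models.items():
    auc = roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(name, "AUC =", round(auc, 3))
```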

  1. Evaluation of the safety and efficacy of Glycyrrhiza uralensis root extracts produced using artificial hydroponic and artificial hydroponic-field hybrid cultivation systems.

    PubMed

    Akiyama, H; Nose, M; Ohtsuki, N; Hisaka, S; Takiguchi, H; Tada, A; Sugimoto, N; Fuchino, H; Inui, T; Kawano, N; Hayashi, S; Hishida, A; Kudo, T; Sugiyama, K; Abe, Y; Mutsuga, M; Kawahara, N; Yoshimatsu, K

    2017-01-01

    Glycyrrhiza uralensis roots used in this study were produced using novel cultivation systems, including artificial hydroponics and artificial hydroponic-field hybrid cultivation. The equivalency between G. uralensis root extracts produced by hydroponics and/or hybrid cultivation and a commercial Glycyrrhiza crude drug was evaluated for both safety and efficacy, and there were no significant differences in terms of mutagenicity in the Ames test. The levels of cadmium and mercury in both hydroponic roots and crude drugs were less than the limit of quantitation. Arsenic levels were lower in all hydroponic roots than in the crude drug, whereas mean lead levels in the crude drug were not significantly different from those in the hydroponically cultivated G. uralensis roots. Both hydroponic and hybrid-cultivated root extracts showed antiallergic activities against contact hypersensitivity that were similar to those of the crude drug extracts. These results suggest that hydroponic and hybrid-cultivated roots are equivalent in safety and efficacy to commercial crude drugs. Further studies are necessary before the roots are applicable as replacements for the currently available commercial crude drugs produced from wild plant resources.

  2. Applying Gradient Descent in Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more and more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has emerged. One popular research interest in AI is recognition algorithms. In this paper, one of the most common algorithms for image recognition, the Convolutional Neural Network (CNN), is introduced. Understanding its theory and structure is of great significance for every scholar interested in this field. A Convolutional Neural Network is an artificial neural network that combines the mathematical operation of convolution with a neural network. The hierarchical structure of a CNN gives it reasonable computational speed and error rates. The most significant characteristics of CNNs are feature extraction, weight sharing, and dimension reduction. Combined with the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to learn on their own and in depth: BP provides backward feedback for enhancing reliability, and GD drives the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some practical examples, with a summary at the end.
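
    As a tiny worked example of the gradient-descent update discussed above, the NumPy snippet below fits a linear model by repeatedly stepping against the gradient of a mean-squared-error loss; back-propagation applies the same rule layer by layer via the chain rule.

```python
# Tiny gradient-descent example: fit y = w*x + b by minimizing mean squared error.
import numpy as np

rng = np.random.default_rng(11)
x = rng.random(100)
y = 3.0 * x + 1.0 + 0.05 * rng.standard_normal(100)   # ground truth: w = 3, b = 1

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    err = w * x + b - y            # prediction error
    grad_w = 2 * np.mean(err * x)  # dL/dw for L = mean(err**2)
    grad_b = 2 * np.mean(err)      # dL/db
    w, b = w - lr * grad_w, b - lr * grad_b

print(round(w, 2), round(b, 2))    # ≈ 3.0 and 1.0
```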

  3. PREDICTION OF MALIGNANT BREAST LESIONS FROM MRI FEATURES: A COMPARISON OF ARTIFICIAL NEURAL NETWORK AND LOGISTIC REGRESSION TECHNIQUES

    PubMed Central

    McLaren, Christine E.; Chen, Wen-Pin; Nie, Ke; Su, Min-Ying

    2009-01-01

    Rationale and Objectives Dynamic contrast enhanced MRI (DCE-MRI) is a clinical imaging modality for detection and diagnosis of breast lesions. Analytical methods were compared for diagnostic feature selection and performance of lesion classification to differentiate between malignant and benign lesions in patients. Materials and Methods The study included 43 malignant and 28 benign histologically-proven lesions. Eight morphological parameters, ten gray level co-occurrence matrices (GLCM) texture features, and fourteen Laws’ texture features were obtained using automated lesion segmentation and quantitative feature extraction. Artificial neural network (ANN) and logistic regression analysis were compared for selection of the best predictors of malignant lesions among the normalized features. Results Using ANN, the final four selected features were compactness, energy, homogeneity, and Law_LS, with area under the receiver operating characteristic curve (AUC) = 0.82, and accuracy = 0.76. The diagnostic performance of these 4-features computed on the basis of logistic regression yielded AUC = 0.80 (95% CI, 0.688 to 0.905), similar to that of ANN. The analysis also shows that the odds of a malignant lesion decreased by 48% (95% CI, 25% to 92%) for every increase of 1 SD in the Law_LS feature, adjusted for differences in compactness, energy, and homogeneity. Using logistic regression with z-score transformation, a model comprised of compactness, NRL entropy, and gray level sum average was selected, and it had the highest overall accuracy of 0.75 among all models, with AUC = 0.77 (95% CI, 0.660 to 0.880). When logistic modeling of transformations using the Box-Cox method was performed, the most parsimonious model with predictors, compactness and Law_LS, had an AUC of 0.79 (95% CI, 0.672 to 0.898). Conclusion The diagnostic performance of models selected by ANN and logistic regression was similar. The analytic methods were found to be roughly equivalent in terms of predictive ability when a small number of variables were chosen. The robust ANN methodology utilizes a sophisticated non-linear model, while logistic regression analysis provides insightful information to enhance interpretation of the model features. PMID:19409817

  4. Odor Impression Prediction from Mass Spectra.

    PubMed

    Nozaki, Yuji; Nakamoto, Takamichi

    2016-01-01

    The sense of smell arises from the perception of odors from chemicals. However, the relationship between the impression of odor and the numerous physicochemical parameters has yet to be understood owing to its complexity. As such, there is no established general method for predicting the impression of odor of a chemical only from its physicochemical properties. In this study, we designed a novel predictive model based on an artificial neural network with a deep structure for predicting odor impression utilizing the mass spectra of chemicals, and we conducted a series of computational analyses to evaluate its performance. Feature vectors extracted from the original high-dimensional space using two autoencoders equipped with both input and output layers in the model are used to build a mapping function from the feature space of mass spectra to the feature space of sensory data. The results of predictions obtained by the proposed new method have notable accuracy (R≅0.76) in comparison with a conventional method (R≅0.61).

  5. Deep Learning and Its Applications in Biomedicine.

    PubMed

    Cao, Chensi; Liu, Feng; Tan, Hai; Song, Deshou; Shu, Wenjie; Li, Weizhong; Zhou, Yiming; Bo, Xiaochen; Xie, Zhi

    2018-02-01

    Advances in biological and medical technologies have been providing us explosive volumes of biological and physiological data, such as medical images, electroencephalography, genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning-based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural network and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, some examples are demonstrated for deep learning applications, including medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Finally, we offer our perspectives for the future directions in the field of deep learning. Copyright © 2018. Production and hosting by Elsevier B.V.

  6. Sound quality recognition using optimal wavelet-packet transform and artificial neural network methods

    NASA Astrophysics Data System (ADS)

    Xing, Y. F.; Wang, Y. S.; Shi, L.; Guo, H.; Chen, H.

    2016-01-01

    Based on human perceptual characteristics, a method combining the optimal wavelet-packet transform and an artificial neural network, the so-called OWPT-ANN model, is presented for psychoacoustic recognition. Comparisons of time-frequency analysis methods are performed, and an OWPT with 21 critical bands is designed for feature extraction from a sound, as is a three-layer back-propagation ANN for sound quality (SQ) recognition. Focusing on loudness and sharpness, the OWPT-ANN model is applied to vehicle noises under different working conditions. Experimental verification shows that the OWPT can effectively transform a sound into a time-varying energy pattern similar to that in the human auditory system. The errors in loudness and sharpness of vehicle noise from the OWPT-ANN are all less than 5%, which suggests good accuracy of the OWPT-ANN model in SQ recognition. The proposed methodology might be regarded as a promising technique for signal processing in human-hearing-related fields of engineering.

  7. Parameters Identification for Photovoltaic Module Based on an Improved Artificial Fish Swarm Algorithm

    PubMed Central

    Wang, Hong-Hua

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike traditional linear models, the PV module model is nonlinear and involves multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an effective optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulation of the collective behavior of real fish swarms, is proposed to extract the parameters of the PV module quickly and accurately. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated for various PV module parameters under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision. PMID:25243233

  8. Novel method to predict body weight in children based on age and morphological facial features.

    PubMed

    Huang, Ziyin; Barrett, Jeffrey S; Barrett, Kyle; Barrett, Ryan; Ng, Chee M

    2015-04-01

    A new and novel approach of predicting the body weight of children based on age and morphological facial features using a three-layer feed-forward artificial neural network (ANN) model is reported. The model takes in four parameters, including age-based CDC-inferred median body weight and three facial feature distances measured from digital facial images. In this study, thirty-nine volunteer subjects with age ranging from 6-18 years old and BW ranging from 18.6-96.4 kg were used for model development and validation. The final model has a mean prediction error of 0.48, a mean squared error of 18.43, and a coefficient of correlation of 0.94. The model shows significant improvement in prediction accuracy over several age-based body weight prediction methods. Combining with a facial recognition algorithm that can detect, extract and measure the facial features used in this study, mobile applications that incorporate this body weight prediction method may be developed for clinical investigations where access to scales is limited. © 2014, The American College of Clinical Pharmacology.

  9. Photometric Supernova Classification with Machine Learning

    NASA Astrophysics Data System (ADS)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.

  10. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Askok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.
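
    The decompositions compared above can be sketched on artificial data as follows: PCA, ICA, and NMF from scikit-learn applied to synthetic spectra built from two known non-negative components, loosely standing in for the spectral emulator; the paper's SDA is not reproduced here.

```python
# Sketch: PCA, ICA and NMF applied to artificial spectra with known components.
import numpy as np
from sklearn.decomposition import PCA, FastICA, NMF

rng = np.random.default_rng(12)
wavelength = np.linspace(0, 1, 300)
# Two known emission-like components and random non-negative mixing weights.
components = np.vstack([np.exp(-((wavelength - c) / 0.03) ** 2) for c in (0.3, 0.7)])
weights = rng.random((500, 2))
spectra = weights @ components + 0.01 * rng.random((500, 300))

for model in (PCA(n_components=2), FastICA(n_components=2, random_state=0),
              NMF(n_components=2, init="nndsvda", max_iter=500)):
    recovered = model.fit_transform(spectra)
    print(type(model).__name__, "recovered shape:", recovered.shape)
```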

  11. Neural network diagnosis of avascular necrosis from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Manduca, Armando; Christy, Paul S.; Ehman, Richard L.

    1993-09-01

    We have explored the use of artificial neural networks to diagnose avascular necrosis (AVN) of the femoral head from magnetic resonance images. We have developed multi-layer perceptron networks, trained with conjugate gradient optimization, which diagnose AVN from single sagittal images of the femoral head with 100% accuracy on the training data and 97% accuracy on test data. These networks use only the raw image as input (with minimal preprocessing to average the images down to 32 X 32 size and to scale the input data values) and learn to extract their own features for the diagnosis decision. Various experiments with these networks are described.

  12. Extracting contours of oval-shaped objects by Hough transform and minimal path algorithms

    NASA Astrophysics Data System (ADS)

    Tleis, Mohamed; Verbeek, Fons J.

    2014-04-01

    Circular and oval-like objects are very common in cell biology and microbiology. These objects need to be analyzed, and to that end digitized microscope images are used to build an automated analysis pipeline. It is essential to detect all the objects in an image as well as to extract the exact contour of each individual object. In this manner it becomes possible to perform measurements on these objects, i.e. shape and texture features. Our measurement objective is achieved by probing contour detection through dynamic programming. In this paper we describe a method that uses the Hough transform and two minimal path algorithms to detect contours of (ovoid-like) objects. These algorithms are based on an existing grey-weighted distance transform and a new algorithm to extract the circular shortest path in an image. The methods are tested on an artificial dataset of 1000 images, with an F1-score of 0.972. In a case study with yeast cells, contours from our methods were compared with another solution using Pratt's figure of merit. Results indicate that our methods were more precise based on a comparison with a ground-truth dataset. As far as yeast cells are concerned, the segmentation and measurement results will enable future work to retrieve information from different developmental stages of the cell using complex features.

  13. Artificial Intelligence Project

    DTIC Science & Technology

    1990-01-01

    Artificial Intelligence Project at The University of Texas at Austin, University of Texas at Austin, Artificial Intelligence Laboratory AITR84-01. Novak...Texas at Austin, Artificial Intelligence Laboratory A187-52, April 1987. Novak, G. "GLISP: A Lisp-Based Programming System with Data Abstraction...of Texas at Austin, Artificial Intelligence Laboratory AITR85-14.) Rim, Hae-Chang, and Simmons, R. F. "Extracting Data Base Knowledge from Medical

  14. Toward End-to-End Face Recognition Through Alignment Learning

    NASA Astrophysics Data System (ADS)

    Zhong, Yuanyi; Chen, Jiansheng; Huang, Bo

    2017-08-01

    Plenty of effective methods have been proposed for face recognition during the past decade. Although these methods differ essentially in many aspects, a common practice of them is to specifically align the facial area based on the prior knowledge of human face structure before feature extraction. In most systems, the face alignment module is implemented independently. This has actually caused difficulties in the designing and training of end-to-end face recognition models. In this paper we study the possibility of alignment learning in end-to-end face recognition, in which neither prior knowledge on facial landmarks nor artificially defined geometric transformations are required. Specifically, spatial transformer layers are inserted in front of the feature extraction layers in a Convolutional Neural Network (CNN) for face recognition. Only human identity clues are used for driving the neural network to automatically learn the most suitable geometric transformation and the most appropriate facial area for the recognition task. To ensure reproducibility, our model is trained purely on the publicly available CASIA-WebFace dataset, and is tested on the Labeled Face in the Wild (LFW) dataset. We have achieved a verification accuracy of 99.08%, which is comparable to state-of-the-art single-model-based methods.
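
    A hedged PyTorch sketch of a spatial transformer layer of the kind described above: a small localization network predicts an affine transform that is applied to the input with affine_grid/grid_sample, initialized to the identity so training starts from an unwarped image. Network sizes and input resolution are assumptions, not the paper's configuration.

```python
# Sketch: a spatial transformer layer placed in front of feature extraction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(3, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(32), nn.ReLU(), nn.Linear(32, 6),
        )
        # Initialize to the identity transform so training starts from "no warp".
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

faces = torch.randn(4, 3, 112, 112)       # dummy batch of face crops
print(SpatialTransformer()(faces).shape)  # aligned crops, same shape as input
```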

  15. Sentiment analysis: a comparison of deep learning neural network algorithm with SVM and naïve Bayes for Indonesian text

    NASA Astrophysics Data System (ADS)

    Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia

    2018-03-01

    Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of deeper Artificial Neural Networks (ANNs) that use more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns in various data types such as pictures, audio, text, and many more. In this paper, the authors try to measure that ability by applying the algorithm to text classification. The classification task herein considers the sentiment expressed in a text, which is also called sentiment analysis. Using several combinations of text preprocessing and feature extraction techniques, we compare the modelling results of the Deep Learning Neural Network with two other commonly used algorithms, Naïve Bayes and the Support Vector Machine (SVM). This algorithm comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms Naïve Bayes and SVM and offers a better F-1 score, while the feature extraction technique that most improves the modelling result is the bigram.
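
    A minimal scikit-learn sketch of the comparison above: unigram-plus-bigram TF-IDF features classified with Naive Bayes, a linear SVM, and a small neural network standing in for the deep model. The toy English sentences below are placeholders for the Indonesian corpus.

```python
# Sketch: bigram TF-IDF features with Naive Bayes, SVM, and a small neural network.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

texts = ["great service, very satisfied", "terrible product, waste of money",
         "really love it", "would not recommend", "excellent quality",
         "worst purchase ever", "happy with the result", "completely disappointed"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = positive sentiment

for clf in (MultinomialNB(), LinearSVC(),
            MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)):
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    pred = model.fit(texts, labels).predict(texts)
    print(type(clf).__name__, "F1 =", f1_score(labels, pred))
```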

  16. Significance of MPEG-7 textural features for improved mass detection in mammography.

    PubMed

    Eltonsy, Nevine H; Tourassi, Georgia D; Fadeev, Aleksey; Elmaghraby, Adel S

    2006-01-01

    The purpose of the study is to investigate the significance of MPEG-7 textural features for improving the detection of masses in screening mammograms. The detection scheme was originally based on morphological directional neighborhood features extracted from mammographic regions of interest (ROIs). Receiver operating characteristic (ROC) analysis was performed to evaluate the performance of each set of features independently and merged into a back-propagation artificial neural network (BPANN) using the leave-one-out sampling scheme (LOOSS). The study was based on a database of 668 mammographic ROIs (340 depicting cancer regions and 328 depicting normal parenchyma). Overall, the ROC area index of the BPANN using the directional morphological features was Az=0.85±0.01. The MPEG-7 edge-histogram-descriptor-based BPANN showed an ROC area index of Az=0.71±0.01, while homogeneous textural descriptors using 30 and 120 channels helped the BPANN achieve similar ROC area indexes of Az=0.882±0.02 and Az=0.877±0.01, respectively. After merging the MPEG-7 homogeneous textural features with the directional neighborhood features, the performance of the BPANN increased, providing an ROC area index of Az=0.91±0.01. The MPEG-7 homogeneous textural descriptor significantly improved the morphology-based detection scheme.

  17. Correlative feature analysis on FFDM

    PubMed Central

    Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene

    2008-01-01

    Identifying the corresponding images of a lesion in different views is an essential step in improving the diagnostic ability of both radiologists and computer-aided diagnosis (CAD) systems. Because of the nonrigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this pilot study, we present a computerized framework that differentiates between corresponding images of the same lesion in different views and noncorresponding images, i.e., images of different lesions. A dual-stage segmentation method, which employs an initial radial gradient index (RGI) based segmentation and an active contour model, is applied to extract mass lesions from the surrounding parenchyma. Then various lesion features are automatically extracted from each of the two views of each lesion to quantify the characteristics of density, size, texture and the neighborhood of the lesion, as well as its distance to the nipple. A two-step scheme is employed to estimate the probability that the two lesion images from different mammographic views are of the same physical lesion. In the first step, a correspondence metric for each pairwise feature is estimated by a Bayesian artificial neural network (BANN). Then, these pairwise correspondence metrics are combined using another BANN to yield an overall probability of correspondence. Receiver operating characteristic (ROC) analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing corresponding pairs from noncorresponding pairs. Using a FFDM database with 123 corresponding image pairs and 82 noncorresponding pairs, the distance feature yielded an area under the ROC curve (AUC) of 0.81±0.02 with leave-one-out (by physical lesion) evaluation, and the feature metric subset, which included distance, gradient texture, and ROI-based correlation, yielded an AUC of 0.87±0.02. The improvement by using multiple feature metrics was statistically significant compared to single feature performance. PMID:19175108

  18. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

    neural network and the feedforward neural network studied is the single layer perceptron artificial neural network. The recurrent artificial neural network input...features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input

  19. Particle Swarm Optimization approach to defect detection in armour ceramics.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2017-03-01

    In this research, various extracted features were used in the development of an automated ultrasonic-sensor-based inspection system that enables defect classification in each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant and redundant features commonly introduced into a dataset reduces the classifier's performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of a classification system. In the context of a multi-criteria optimization problem (i.e. minimizing the classification error rate while reducing the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. Moreover, Particle Swarm Optimization (PSO) has not been explored in the field of classification of high-frequency ultrasonic signals. Hence, a binary-coded Particle Swarm Optimization (BPSO) technique is investigated for feature subset selection and to optimize the classification error rate. In the proposed method, the population data are used as input to an Artificial Neural Network (ANN) based classification system to obtain the error rate, as the ANN serves as the evaluator of the PSO fitness function. Copyright © 2016. Published by Elsevier B.V.
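    A rough sketch of binary PSO feature selection with an ANN as the fitness evaluator is given below, in the spirit of the abstract; the swarm size, inertia/acceleration constants and the dataset (X, y) are illustrative assumptions.

    ```python
    # Hedged sketch: BPSO feature-subset selection scored by ANN cross-validation accuracy.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def fitness(mask, X, y):
        if mask.sum() == 0:
            return 0.0
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=300)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    def bpso_select(X, y, n_particles=20, n_iter=30, w=0.7, c1=1.5, c2=1.5):
        d = X.shape[1]
        pos = rng.integers(0, 2, (n_particles, d))            # binary feature masks
        vel = rng.uniform(-1, 1, (n_particles, d))
        pbest = pos.copy()
        pbest_fit = np.array([fitness(p, X, y) for p in pos])
        gbest = pbest[pbest_fit.argmax()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, d))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            # sigmoid transfer function turns velocities into bit-flip probabilities
            pos = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(int)
            fit = np.array([fitness(p, X, y) for p in pos])
            improved = fit > pbest_fit
            pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
            gbest = pbest[pbest_fit.argmax()].copy()
        return gbest.astype(bool)                             # selected feature subset
    ```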

  20. Determination of artificial sweeteners in sewage sludge samples using pressurised liquid extraction and liquid chromatography-tandem mass spectrometry.

    PubMed

    Ordoñez, Edgar Y; Quintana, José Benito; Rodil, Rosario; Cela, Rafael

    2013-12-13

    An analytical method for the determination of six artificial sweeteners in sewage sludge has been developed. The procedure is based on pressurised liquid extraction (PLE) with water followed by solid-phase extraction (SPE) and subsequent liquid chromatography-tandem mass spectrometry analysis. After optimisation of the different PLE parameters, extraction with aqueous 500 mM formate buffer (pH 3.5) at 80°C during a single static cycle of 21 min proved to be the best conditions. After the subsequent SPE, quantification limits, referred to the dry weight (dw) of sewage sludge, ranged from 0.3 ng/g for acesulfame (ACE) to 16 ng/g for saccharin (SAC) and neohesperidin dihydrochalcone. The trueness, expressed as recovery, ranged between 72% and 105%, and the precision, expressed as relative standard deviation, was lower than 16%. Moreover, the method proved its linearity up to the 2 μg/g range. Finally, the described method was applied to the determination of the artificial sweeteners in primary and secondary sewage sludge from urban wastewater treatment plants. Four of the six studied artificial sweeteners (ACE, cyclamate, SAC and sucralose) were found in the samples at concentrations ranging from 17 to 628 ng/g dw. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Artificial Language Learning and Feature-Based Generalization

    ERIC Educational Resources Information Center

    Finley, Sara; Badecker, William

    2009-01-01

    Abstract representations such as subsegmental phonological features play such a vital role in explanations of phonological processes that many assume that these representations play an equally prominent role in the learning process. This assumption is tested in three artificial grammar experiments involving a mini language with morpho-phonological…

  2. Electrokinetic migration across artificial liquid membranes Tuning the membrane chemistry to different types of drug substances.

    PubMed

    Gjelstad, Astrid; Rasmussen, Knut Einar; Pedersen-Bjergaard, Stig

    2006-08-18

    Twenty different basic drugs were electrokinetically extracted across a thin artificial organic liquid membrane with a 300 V d.c. electrical potential difference as the driving force. From a 300 μl aqueous sample (acidified corresponding to 10 mM HCl), the drugs were extracted for 5 min through a 200 μm artificial liquid membrane of a water-immiscible organic solvent immobilized in the pores of a polypropylene hollow fiber, and into a 30 μl aqueous acceptor solution of 10 mM HCl inside the lumen of the hollow fiber. Hydrophobic basic drugs (log P>1.7) were effectively isolated utilizing 2-nitrophenyl octyl ether (NPOE) as the artificial liquid membrane, with recoveries up to 83%. For more hydrophilic basic drugs (log P<1.0), a mixture of NPOE and 25% (w/w) di-(2-ethylhexyl) phosphate (DEHP) was required to ensure efficient extraction, resulting in recoveries up to 75%. DEHP was expected to act as an ion-pair reagent, ion-pairing the protonated hydrophilic drugs at the interface between the sample and the membrane, resulting in permeation of the interface.

  3. Robotic Vision-Based Localization in an Urban Environment

    NASA Technical Reports Server (NTRS)

    Mchenry, Michael; Cheng, Yang; Matthies

    2007-01-01

    A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1. Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.

  4. Time frequency analysis for automated sleep stage identification in fullterm and preterm neonates.

    PubMed

    Fraiwan, Luay; Lweesy, Khaldon; Khasawneh, Natheer; Fraiwan, Mohammad; Wenz, Heinrich; Dickhaus, Hartmut

    2011-08-01

    This work presents a new methodology for automated sleep stage identification in neonates based on the time-frequency distribution of a single electroencephalogram (EEG) recording and artificial neural networks (ANN). The Wigner-Ville distribution (WVD), Hilbert-Huang spectrum (HHS) and continuous wavelet transform (CWT) time-frequency distributions were used to represent the EEG signal, from which features were extracted using time-frequency entropy. The classification of features was done using a feed-forward back-propagation ANN. The system was trained and tested using data taken from neonates of post-conceptual age of 40 weeks for both preterm (14 recordings) and fullterm (15 recordings) infants. The identification of sleep stages was successfully implemented, and the classification based on the WVD outperformed the approaches based on CWT and HHS. The accuracy and kappa coefficient were found to be 0.84 and 0.65, respectively, for the fullterm neonates' recordings and 0.74 and 0.50, respectively, for the preterm neonates' recordings.
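    A small illustrative sketch of one branch of this idea (CWT-based time-frequency entropy features passed to a feed-forward network) is given below; the 'morl' wavelet, scale range, 30 s epochs, 256 Hz sampling rate and the random placeholder data are assumptions, not the authors' settings.

    ```python
    # Hedged sketch: CWT entropy features per EEG epoch, classified by an MLP.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def cwt_entropy_features(epoch, fs=256, scales=np.arange(1, 65), n_bands=8):
        coeffs, _ = pywt.cwt(epoch, scales, "morl", sampling_period=1.0 / fs)
        power = coeffs ** 2
        feats = []
        for band in np.array_split(power, n_bands, axis=0):   # group scales into bands
            p = band.sum(axis=0)
            p = p / p.sum()
            feats.append(-(p * np.log2(p + 1e-12)).sum())     # Shannon entropy per band
        return np.array(feats)

    epochs = np.random.randn(10, 30 * 256)                    # placeholder 30 s EEG epochs
    X = np.vstack([cwt_entropy_features(e) for e in epochs])
    y = np.random.randint(0, 3, 10)                           # placeholder sleep-stage labels
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(X, y)
    ```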

  5. Image Reconstruction is a New Frontier of Machine Learning.

    PubMed

    Wang, Ge; Ye, Jong Chul; Mueller, Klaus; Fessler, Jeffrey A

    2018-06-01

    Over the past several years, machine learning, or more generally artificial intelligence, has generated overwhelming research interest and attracted unprecedented public attention. As tomographic imaging researchers, we share the excitement from our imaging perspective [item 1) in the Appendix], and organized this special issue dedicated to the theme of "Machine learning for image reconstruction." This special issue is a sister issue of the special issue published in May 2016 in this journal with the theme "Deep learning in medical imaging" [item 2) in the Appendix]. While the previous special issue targeted medical image processing/analysis, this special issue focuses on data-driven tomographic reconstruction. These two special issues are highly complementary, since image reconstruction and image analysis are two of the main pillars for medical imaging. Together we cover the whole workflow of medical imaging: from tomographic raw data/features to reconstructed images and then extracted diagnostic features/readings.

  6. Building adaptive connectionist-based controllers: review of experiments in human-robot interaction, collective robotics, and computational neuroscience

    NASA Astrophysics Data System (ADS)

    Billard, Aude

    2000-10-01

    This paper summarizes a number of experiments in biologically inspired robotics. The common feature of all experiments is the use of artificial neural networks as the building blocks for the controllers. The experiments speak in favor of using a connectionist approach for designing adaptive and flexible robot controllers, and for modeling neurological processes. I present 1) DRAMA, a novel connectionist architecture, which has general properties for learning time series and extracting spatio-temporal regularities in multi-modal and highly noisy data; 2) Robota, a doll-shaped robot, which imitates and learns a proto-language; 3) an experiment in collective robotics, where a group of 4 to 15 Khepera robots dynamically learns the topography of an environment whose features change frequently; 4) an abstract, computational model of the primate ability to learn by imitation; 5) a model for the control of locomotor gaits in a quadruped legged robot.

  7. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network.

    PubMed

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-04-13

    Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples for such regulations are 'traffic light ahead' or 'pedestrian crossing' indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real world road signs are then extracted using vision-only information. The system is based on two stages, one performs the detection and another one is for recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, an introduced robust custom feature extraction method is used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. This proposed system achieved an average of 99.90% accuracy with 99.90% of sensitivity, 99.90% of specificity, 99.90% of f-measure, and 0.001 of false positive rate (FPR) with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications.
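    As a hedged illustration of the two-stage idea (colour segmentation for detection, then an ANN for recognition), the sketch below uses HSV thresholding and an MLP on resized candidate patches; the red-hue thresholds, 32x32 patch size, five placeholder classes and random training data are assumptions, not the authors' pipeline.

    ```python
    # Hedged sketch: colour-segmentation detection stage + ANN recognition stage.
    import cv2
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def red_sign_candidates(frame_bgr, min_area=300):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))   # red wraps the hue axis
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
        return [cv2.resize(frame_bgr[y:y + h, x:x + w], (32, 32)) for x, y, w, h in boxes]

    def patch_features(patch):
        gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
        return (gray.astype(np.float32) / 255.0).ravel()            # simple pixel features

    # Recognition stage: multilayer ANN trained on labelled patches (placeholders).
    X_train = np.random.rand(100, 32 * 32)
    y_train = np.random.randint(0, 5, 100)                          # 5 placeholder sign classes
    ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_train, y_train)
    ```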

  8. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network

    PubMed Central

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-01-01

    Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples for such regulations are ‘traffic light ahead’ or ‘pedestrian crossing’ indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real world road signs are then extracted using vision-only information. The system is based on two stages, one performs the detection and another one is for recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, an introduced robust custom feature extraction method is used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. This proposed system achieved an average of 99.90% accuracy with 99.90% of sensitivity, 99.90% of specificity, 99.90% of f-measure, and 0.001 of false positive rate (FPR) with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications. PMID:28406471

  9. Role of Artificial Intelligence Techniques (Automatic Classifiers) in Molecular Imaging Modalities in Neurodegenerative Diseases.

    PubMed

    Cascianelli, Silvia; Scialpi, Michele; Amici, Serena; Forini, Nevio; Minestrini, Matteo; Fravolini, Mario Luca; Sinzinger, Helmut; Schillaci, Orazio; Palumbo, Barbara

    2017-01-01

    Artificial Intelligence (AI) is a very active Computer Science research field aiming to develop systems that mimic human intelligence, and it is helpful in many human activities, including Medicine. In this review we present some examples of the exploitation of AI techniques, in particular automatic classifiers such as the Artificial Neural Network (ANN), Support Vector Machine (SVM), Classification Tree (ClT) and ensemble methods like Random Forest (RF), able to analyze findings obtained by positron emission tomography (PET) or single-photon emission computed tomography (SPECT) scans of patients with neurodegenerative diseases, in particular Alzheimer's Disease. We also focus our attention on techniques applied in order to preprocess data and reduce their dimensionality via feature selection or projection into a more representative domain (Principal Component Analysis - PCA - and Partial Least Squares - PLS - are examples of such methods); this is a crucial step while dealing with medical data, since it is necessary to compress patient information and retain only the most useful information in order to discriminate subjects into normal and pathological classes. The main literature papers on the application of these techniques to classify patients with neurodegenerative disease by extracting data from molecular imaging modalities are reported, showing that the increasing development of computer-aided diagnosis systems is very promising and can contribute to the diagnostic process.
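    A minimal scikit-learn sketch of the dimensionality-reduction-plus-classifier workflow mentioned above (PCA followed by an SVM) is given below; the feature matrix X of per-subject imaging measures and the labels y are placeholders, not data from any study cited here.

    ```python
    # Hedged sketch: PCA compression of imaging features followed by an SVM.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    X = np.random.rand(60, 500)          # e.g. regional PET/SPECT uptake features (placeholder)
    y = np.random.randint(0, 2, 60)      # 0 = normal, 1 = pathological (placeholder)

    pipe = make_pipeline(StandardScaler(),
                         PCA(n_components=20),   # compress correlated imaging features
                         SVC(kernel="linear"))
    print(cross_val_score(pipe, X, y, cv=5).mean())
    ```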

  10. Artificial neural network-aided image analysis system for cell counting.

    PubMed

    Sjöström, P J; Frydel, B R; Wahlberg, L U

    1999-05-01

    In histological preparations containing debris and synthetic materials, it is difficult to automate cell counting using standard image analysis tools, i.e., systems that rely on boundary contours, histogram thresholding, etc. In an attempt to mimic manual cell recognition, an automated cell counter was constructed using a combination of artificial intelligence and standard image analysis methods. Artificial neural network (ANN) methods were applied on digitized microscopy fields without pre-ANN feature extraction. A three-layer feed-forward network with extensive weight sharing in the first hidden layer was employed and trained on 1,830 examples using the error back-propagation algorithm on a Power Macintosh 7300/180 desktop computer. The optimal number of hidden neurons was determined and the trained system was validated by comparison with blinded human counts. System performance at 50x and 100x magnification was evaluated. The correlation index at 100x magnification neared person-to-person variability, while 50x magnification was not useful. The system was approximately six times faster than an experienced human. ANN-based automated cell counting in noisy histological preparations is feasible. Consistent histology and computer power are crucial for system performance. The system provides several benefits, such as speed of analysis and consistency, and frees up personnel for other tasks.

  11. Finger language recognition based on ensemble artificial neural network learning using armband EMG sensors.

    PubMed

    Kim, Seongjung; Kim, Jongman; Ahn, Soonjae; Kim, Youngho

    2018-04-18

    Deaf people use sign or finger languages for communication, but these methods of communication are very specialized. For this reason, the deaf can suffer from social inequalities and financial losses due to their communication restrictions. In this study, we developed a finger language recognition algorithm based on an ensemble artificial neural network (E-ANN) using an armband system with 8-channel electromyography (EMG) sensors. The developed algorithm was composed of signal acquisition, filtering, segmentation, feature extraction and an E-ANN based classifier, and was evaluated on the Korean finger language (14 consonants, 17 vowels and 7 numbers) with 17 subjects. The E-ANN was categorized according to the number of classifiers (1 to 10) and the size of the training data (50 to 1500). The accuracy of the E-ANN-based classifier was obtained by 5-fold cross validation and compared with an artificial neural network (ANN)-based classifier. As the number of classifiers (1 to 8) and the size of the training data (50 to 300) increased, the average accuracy of the E-ANN-based classifier increased and the standard deviation decreased. The optimal E-ANN was composed of eight classifiers and a training data size of 300, and the accuracy of the E-ANN was significantly higher than that of the general ANN.
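    An E-ANN style classifier can be sketched as several small MLPs trained on bootstrap resamples of the EMG feature vectors and combined by majority vote, as below. The eight base classifiers and the 300-sample training size follow the abstract; the feature dimension and the data are placeholders.

    ```python
    # Hedged sketch: bootstrap ensemble of MLPs with majority voting.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    X = np.random.rand(300, 8 * 4)        # 8 EMG channels x 4 features per channel (assumed)
    y = np.random.randint(0, 38, 300)     # 14 consonants + 17 vowels + 7 numbers

    rng = np.random.default_rng(0)
    members = []
    for _ in range(8):                                    # eight base ANNs
        idx = rng.integers(0, len(X), len(X))             # bootstrap resample
        clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=500)
        members.append(clf.fit(X[idx], y[idx]))

    def predict(X_new):
        votes = np.stack([m.predict(X_new) for m in members])
        # majority vote across the ensemble, one column per sample
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    ```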

  12. Artificial Life in Quantum Technologies

    NASA Astrophysics Data System (ADS)

    Alvarez-Rodriguez, Unai; Sanz, Mikel; Lamata, Lucas; Solano, Enrique

    2016-02-01

    We develop a quantum information protocol that models the biological behaviours of individuals living in a natural selection scenario. The artificially engineered evolution of the quantum living units shows the fundamental features of life in a common environment, such as self-replication, mutation, interaction of individuals, and death. We propose how to mimic these bio-inspired features in a quantum-mechanical formalism, which allows for an experimental implementation achievable with current quantum platforms. This study paves the way for the realization of artificial life and embodied evolution with quantum technologies.

  13. Artificial retina model for the retinally blind based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zeng, Yan-an; Song, Xin-qiang; Jiang, Fa-gang; Chang, Da-ding

    2007-01-01

    An artificial retina is aimed at the stimulation of the remaining retinal neurons in patients with degenerated photoreceptors. Microelectrode arrays have been developed for this purpose as part of the stimulator. Designing such microelectrode arrays first requires a suitable mathematical method for human retinal information processing. In this paper, a flexible and adjustable human visual information extraction model is presented, which is based on the wavelet transform. Given the flexibility of the wavelet transform for image information processing and its consistency with human visual information extraction, wavelet transform theory is applied to the artificial retina model for the retinally blind. The response of the model to a synthetic image is shown. The simulated experiment demonstrates that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an artificial retina.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, H; Lan, L; Sennett, C

    Purpose: To gain insight into the role of parenchyma stroma in the characterization of breast tumors by incorporating computerized mammographic parenchyma assessment into breast CADx in the task of distinguishing between malignant and benign lesions. Methods: This study was performed on 182 biopsy-proven breast mass lesions, including 76 benign and 106 malignant lesions. For each full-field digital mammogram (FFDM) case, our quantitative imaging analysis was performed on both the tumor and a region-of-interest (ROI) from the normal contralateral breast. The lesion characterization includes automatic lesion segmentation and feature extraction. Radiographic texture analysis (RTA) was applied on the normal ROIs to assess the mammographic parenchymal patterns of these contralateral normal breasts. Classification performance of both individual computer-extracted features and the output from a Bayesian artificial neural network (BANN) were evaluated with a leave-one-lesion-out method using receiver operating characteristic (ROC) analysis with area under the curve (AUC) as the figure of merit. Results: Lesion characterization included computer-extracted phenotypes of spiculation, size, shape, and margin. For parenchymal pattern characterization, five texture features were selected, including power law beta, contrast, and edge gradient. Merging of these computer-selected features using BANN classifiers yielded AUC values of 0.79 (SE=0.03) and 0.67 (SE=0.04) in the task of distinguishing between malignant and benign lesions using only tumor phenotypes and texture features from the contralateral breasts, respectively. Incorporation of tumor phenotypes with parenchyma texture features into the BANN yielded improved classification performance with an AUC value of 0.83 (SE=0.03) in the task of differentiating malignant from benign lesions. Conclusion: Combining computerized tumor and parenchyma phenotyping was found to significantly improve breast cancer diagnostic accuracy, highlighting the need to consider both tumor and stroma in decision making. Funding: University of Chicago Dean Bridge Fund, NCI U24-CA143848-05, P50-CA58223 Breast SPORE program, and Breast Cancer Research Foundation. COI: MLG is a stockholder in R2 technology/Hologic and receives royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba. MLG is a cofounder and stockholder in Quantitative Insights.

  15. In vitro antioxidant potential of medicinal plant extracts and their activities against oral bacteria based on Brazilian folk medicine.

    PubMed

    Alviano, Wagner S; Alviano, Daniela S; Diniz, Cláudio G; Antoniolli, Angelo R; Alviano, Celuta S; Farias, Luiz M; Carvalho, Maria Auxiliadora R; Souza, Margareth M G; Bolognese, Ana Maria

    2008-06-01

    This study aims to determine the antibacterial activities of Cocos nucifera (husk fiber), Ziziphus joazeiro (inner bark) and Caesalpinia pyramidalis (leaves) aqueous extracts and Aristolochia cymbifera (rhizomes) alcoholic extract against Prevotella intermedia, Porphyromonas gingivalis, Fusobacterium nucleatum, Streptococcus mutans and Lactobacillus casei. The antioxidant activity and acute toxicity of these extracts were also evaluated. The antibacterial activity of the plant extracts was evaluated in vitro and the minimal inhibitory concentration (MIC) was determined by the broth micro-dilution assay. The bacterial killing kinetics were also evaluated for all extracts. In addition, the antibacterial effect of the extracts was tested in vitro on artificial oral biofilms. The acute toxicity of each extract was determined according to Lorke [Lorke D. A new approach to practical acute toxicity testing. Arch Toxicol 1983;54:275-87] and the antioxidant activity was evaluated by the DPPH photometric assay [Mensor LL, Menezes FS, Leitão GG, Reis AS, Santos TC, Coube CS, et al. Screening of Brazilian plants extract for antioxidant activity by the use of DPPH free radical method. Phytother Res 2001;15:127-30]. The MIC and the bactericidal concentrations were identical for each evaluated extract. However, microbes in artificial biofilms were less sensitive to the extracts than the planktonic strains. The A. cymbifera extract induced the highest bactericidal effect against all tested bacteria, followed by the C. nucifera, Z. joazeiro and C. pyramidalis extracts, respectively. All extracts showed good antioxidant potential, with the C. nucifera and C. pyramidalis aqueous extracts being the most active ones. In conclusion, all oral bacteria tested (planktonic or in artificial biofilms) were more susceptible to, and more rapidly killed in the presence of, the A. cymbifera, C. pyramidalis and C. nucifera extracts than the Z. joazeiro extract. Thus, these extracts may be of great interest for future studies on the treatment of oral diseases, considering their potent antioxidant activity and low toxicity.

  16. Phylogeny and active ingredients of artificial Ophiocordyceps lanpingensis ascomata

    NASA Astrophysics Data System (ADS)

    Chen, Zihong; Xu, Ling; Yu, Hong; Zeng, Wenbo; Dai, Yongdong; Wang, Yuanbing

    2018-04-01

    To evaluate the morphological characters, phylogenesis and functional components of artificial Ophiocordyceps lanpingensis, a related species of O. sinensis, the ascomata of O. lanpingensis were induced with its asexual strain, HLANY0707, and their microscopic features were described. Phylogenesis was analyzed with ITS-5.8S sequences of HLANY0707, its cultured stroma, and 39 related sequences of Hirsutella and Ophiocordyceps based on the maximum likelihood tree. Six nucleosides of artificial O. lanpingensis, natural O. lanpingensis and natural O. sinensis were compared by HPLC analysis. Artificial ascomata of O. lanpingensis could be massively produced with HLANY0707 and had microscopic features similar to those of the natural specimens. Phylogenetic analysis showed that both the artificial and natural O. lanpingensis had a closer relationship with O. sinensis, O. xuefengensis, H. uncinata and O. robertsii, species whose massively cultured ascomata have not been reported. The nucleosides of artificial O. lanpingensis were very similar to those of natural O. sinensis, implying a promising application prospect for artificial O. lanpingensis as an alternative to O. sinensis. This shows a promising way to develop artificial O. lanpingensis and to conserve the rare and endangered species O. sinensis.

  17. Optimization of Bioactive Ingredient Extraction from Chinese Herbal Medicine Glycyrrhiza glabra: A Comparative Study of Three Optimization Models

    PubMed Central

    Li, Xiaohong; Zhang, Yuyan

    2018-01-01

    The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single variable approach, four extraction parameters of ammonia concentration, ethanol concentration, circumfluence time, and liquid-solid ratio are adopted as the independent extraction variables. In the present work, central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, the prediction models of response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. It is found that the optimization of extraction technology is presented as ammonia concentration 0.595%, ethanol concentration 58.45%, return time 2.5 h, and liquid-solid ratio 11.065 : 1. Under these conditions, the model predictive value is 381.24 mg, the experimental average value is 376.46 mg, and the expectation discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the extraction independent variables. Furthermore, it is demonstrated that the combinational method of genetic algorithm and artificial neural networks provides a more reliable and more accurate strategy for design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra. PMID:29887907

  18. Optimization of Bioactive Ingredient Extraction from Chinese Herbal Medicine Glycyrrhiza glabra: A Comparative Study of Three Optimization Models.

    PubMed

    Yu, Li; Jin, Weifeng; Li, Xiaohong; Zhang, Yuyan

    2018-01-01

    The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from Chinese herbal medicine Glycyrrhiza glabra . Based on the traditional single variable approach, four extraction parameters of ammonia concentration, ethanol concentration, circumfluence time, and liquid-solid ratio are adopted as the independent extraction variables. In the present work, central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, the prediction models of response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. It is found that the optimization of extraction technology is presented as ammonia concentration 0.595%, ethanol concentration 58.45%, return time 2.5 h, and liquid-solid ratio 11.065 : 1. Under these conditions, the model predictive value is 381.24 mg, the experimental average value is 376.46 mg, and the expectation discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the extraction independent variables. Furthermore, it is demonstrated that the combinational method of genetic algorithm and artificial neural networks provides a more reliable and more accurate strategy for design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra .

  19. Diagnostic methodology for incipient system disturbance based on a neural wavelet approach

    NASA Astrophysics Data System (ADS)

    Won, In-Ho

    Since incipient system disturbances are easily mixed up with other events or noise sources, the signal from the system disturbance can be neglected or identified as noise. Thus, because the available knowledge and information are obtained incompletely or inexactly from the measurements, an exploration into the use of artificial intelligence (AI) tools to overcome these uncertainties and limitations was undertaken. A methodology integrating the feature extraction efficiency of the wavelet transform with the classification capabilities of neural networks is developed for signal classification in the context of detecting incipient system disturbances. The synergistic effect of wavelets and neural networks presents more strengths and fewer weaknesses than either technique taken alone. A wavelet feature extractor is developed to form concise feature vectors for the neural network inputs. The feature vectors are calculated from wavelet coefficients to reduce redundancy and computational expense. In this procedure, statistical features based on applying the fractal concept to the wavelet coefficients play a crucial role in the wavelet feature extractor. To verify the proposed methodology, two applications are investigated and successfully tested. The first involves pump cavitation detection using a dynamic pressure sensor. The second pertains to incipient pump cavitation detection using signals obtained from a current sensor. Also, through comparisons between the three proposed feature vectors and with statistical techniques, it is shown that the variance feature extractor provides the better approach in the performed applications.

  20. Plantar fascia segmentation and thickness estimation in ultrasound images.

    PubMed

    Boussouar, Abdelhafid; Meziane, Farid; Crofts, Gillian

    2017-03-01

    Ultrasound (US) imaging offers significant potential in the diagnosis of plantar fascia (PF) injury and in monitoring treatment. In particular, US imaging has been shown to be reliable in foot and ankle assessment and offers a real-time, effective imaging technique that is able to reliably confirm structural changes, such as thickening, and identify changes in the internal echo structure associated with diseased or damaged tissue. Despite the advantages of US imaging, the images are difficult to interpret during medical assessment. This is partly due to the size and position of the PF in relation to the adjacent tissues. It is therefore a requirement to devise a system that allows better and easier interpretation of PF ultrasound images during diagnosis. This study proposes an automatic segmentation approach which, for the first time, extracts ultrasound data to estimate size across three sections of the PF (rearfoot, midfoot and forefoot). This segmentation method uses an artificial neural network (ANN) module in order to classify small overlapping patches as belonging or not belonging to the region of interest (ROI) of the PF tissue. Feature ranking and selection techniques were performed as a post-processing step of feature extraction to reduce the dimension and number of the extracted features. The trained ANN classifies the image overlapping patches into PF and non-PF tissue, and is then used to segment the desired PF region. The PF thickness was calculated using two different methods: distance transformation and area-length calculation algorithms. This new approach is capable of accurately segmenting the PF region, differentiating it from surrounding tissues and estimating its thickness. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
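    One simple reading of the distance-transform thickness estimate is sketched below, assuming a binary PF mask has already been produced by the patch-wise ANN classifier; the rule thickness ≈ 2 x maximum inscribed radius and the pixel spacing are assumptions for illustration only.

    ```python
    # Hedged sketch: thickness of a segmented region via the Euclidean distance transform.
    import numpy as np
    from scipy import ndimage

    def pf_thickness_mm(pf_mask, pixel_spacing_mm=0.1):
        # distance from every PF pixel to the nearest background pixel
        dist = ndimage.distance_transform_edt(pf_mask)
        return 2.0 * dist.max() * pixel_spacing_mm

    mask = np.zeros((200, 400), dtype=bool)
    mask[90:110, 50:350] = True                      # placeholder segmented PF band
    print(pf_thickness_mm(mask))                     # ~2 mm for this synthetic band
    ```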

  1. Artificial Intelligence Assists Ultrasonic Inspection

    NASA Technical Reports Server (NTRS)

    Schaefer, Lloyd A.; Willenberg, James D.

    1992-01-01

    Subtle indications of flaws extracted from ultrasonic waveforms. Ultrasonic-inspection system uses artificial intelligence to help in identification of hidden flaws in electron-beam-welded castings. System involves application of flaw-classification logic to analysis of ultrasonic waveforms.

  2. Study on digital teeth selection and virtual teeth arrangement for complete denture.

    PubMed

    Yu, Xiaoling; Cheng, Xiaosheng; Dai, Ning; Chen, Hu; Yu, Changjiang; Sun, Yuchun

    2018-03-01

    In dentistry, the complete denture is a conventional treatment for edentulous patients. Computer-aided design and computer-aided manufacturing (CAD/CAM) has been applied to the digital complete denture, which is developing rapidly. Tooth selection and arrangement is one of the most important parts of the digital complete denture. In this paper, we propose a new method of personalized tooth arrangement and present a method of arranging teeth virtually for a complete denture. First, the 3D triangular mesh data of the artificial teeth (PLY format) are scanned, their feature points are extracted, and a tooth selection system is established. Second, the maxillary and mandibular cast surfaces are scanned and their anatomic characteristics, such as the facial midline and the curve of the arches, are marked. With this entered information, the common arrangement lines of the artificial teeth are calculated. Third, the preferred artificial teeth are selected and automatically arranged virtually in the correct positions using our own software. After that, the gingival part of the denture is designed on the basis of the arranged teeth on the screen and then fabricated using Computerized Numerical Control (CNC) technology, Rapid Prototyping (RP) technology or 3D printer technology. Finally, the selected artificial teeth are embedded in wax rims. This system can choose artificial teeth reasonably, and the tooth placement can meet the dentist's requirements to a certain extent, while all operations are based on medical principles. The study performed here involves computer science, medicine, and dentistry; a tooth selection system was proposed and virtual tooth arrangement was described. This study can help operators select teeth, improve the accuracy of tooth arrangement, and customize the complete denture. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. A multistage approach to improve performance of computer-aided detection of pulmonary embolisms depicted on CT images: preliminary investigation.

    PubMed

    Park, Sang Cheol; Chapman, Brian E; Zheng, Bin

    2011-06-01

    This study developed a computer-aided detection (CAD) scheme for pulmonary embolism (PE) detection and investigated several approaches to improve CAD performance. In the study, 20 computed tomography examinations with various lung diseases were selected, which included 44 verified PE lesions. The proposed CAD scheme consists of five basic steps: 1) lung segmentation; 2) PE candidate extraction using an intensity mask and tobogganing region growing; 3) PE candidate feature extraction; 4) false-positive (FP) reduction using an artificial neural network (ANN); and 5) a multifeature-based k-nearest neighbor for positive/negative classification. In this study, we also investigated the following additional methods to improve CAD performance: 1) grouping 2-D detected features into a single 3-D object; 2) selecting features with a genetic algorithm (GA); and 3) limiting the number of allowed suspicious lesions to be cued in one examination. The results showed that 1) the CAD scheme using tobogganing, an ANN, and the grouping method achieved the maximum detection sensitivity of 79.2%; 2) the maximum scoring method achieved superior performance over other scoring fusion methods; 3) the GA was able to delete "redundant" features and further improve CAD performance; and 4) limiting the maximum number of cued lesions in an examination reduced the FP rate by a factor of 5.3. Combining these approaches, the CAD scheme achieved 63.2% detection sensitivity with 18.4 FP lesions per examination. The study suggested that the performance of CAD schemes for PE detection depends on many factors, which include 1) optimizing the 2-D region grouping and scoring methods; 2) selecting the optimal feature set; and 3) limiting the number of allowed cueing lesions per examination.
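    The final approach above (limiting cued lesions per examination) can be sketched as a simple ranking-and-capping step over classifier scores, as below; the score threshold and the cap k are illustrative assumptions, not the study's tuned values.

    ```python
    # Hedged sketch: rank suspicious regions by score and cue at most k per examination.
    def cue_lesions(candidates, k=5, min_score=0.5):
        """candidates: list of (region_id, score) pairs for one examination."""
        kept = [c for c in candidates if c[1] >= min_score]
        kept.sort(key=lambda c: c[1], reverse=True)      # maximum-score-first ranking
        return kept[:k]                                  # cap the number of cued lesions

    print(cue_lesions([("r1", 0.92), ("r2", 0.41), ("r3", 0.77)]))
    ```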

  4. Artificial neural network modeling and optimization of ultrahigh pressure extraction of green tea polyphenols.

    PubMed

    Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong

    2013-11-01

    In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized by a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The obtained optimal architecture of the artificial neural network model was a feed-forward neural network with three input neurons, one hidden layer with eight neurons and one output layer with a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R² of 0.9571, which implied good agreement between the predicted and actual values and confirmed good generalization of the network. Based on the combination of the neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio and 53.6% for ethanol concentration. The total phenolic content actually measured under the optimum predicted extraction conditions was 582.4 ± 0.63 mg/g DW, which matched well with the predicted value (597.2 mg/g DW). This suggests that the artificial neural network model described in this work is an efficient quantitative tool to predict the extraction efficiency of green tea polyphenols. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
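    The ANN-plus-GA optimization pattern can be sketched as below: a feed-forward surrogate with three inputs, eight hidden neurons and one output (matching the abstract's architecture) is fitted to experimental points, and a simple GA then searches the input space for the predicted maximum yield. The bounds, GA settings and training data are placeholders, not the study's experimental design.

    ```python
    # Hedged sketch: ANN surrogate model + genetic-algorithm search of extraction conditions.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    bounds = np.array([[100, 600],     # pressure, MPa
                       [5, 30],        # liquid/solid ratio, mL/g
                       [20, 90]])      # ethanol concentration, %

    X_train = rng.uniform(bounds[:, 0], bounds[:, 1], (40, 3))   # placeholder design points
    y_train = rng.uniform(300, 600, 40)                          # placeholder yields, mg/g

    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000).fit(X_train, y_train)

    pop = rng.uniform(bounds[:, 0], bounds[:, 1], (50, 3))
    for _ in range(100):                                         # simple GA loop
        fit = ann.predict(pop)
        parents = pop[np.argsort(fit)[-25:]]                     # truncation selection
        children = (parents[rng.integers(0, 25, 50)] +
                    parents[rng.integers(0, 25, 50)]) / 2        # arithmetic crossover
        children += rng.normal(0, 0.02, children.shape) * (bounds[:, 1] - bounds[:, 0])
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])      # mutation + bound clipping
    best = pop[np.argmax(ann.predict(pop))]
    print("predicted optimum (pressure, ratio, ethanol %):", best)
    ```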

  5. A Study for the Feature Selection to Identify GIEMSA-Stained Human Chromosomes Based on Artificial Neural Network

    DTIC Science & Technology

    2001-10-25

    neural network (ANN) has been adopted for the human chromosome classification. It is important to select optimum features for training neural network...Many studies for computer-based chromosome analysis have shown that it is possible to classify chromosomes into 24 subgroups. In addition, artificial

  6. PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
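    The second pipeline stage (a boosted-decision-tree classifier scored by ROC AUC on pre-extracted light-curve features) can be illustrated with scikit-learn as below; the random feature matrix stands in for SALT2 or wavelet coefficients and is not SPCC data.

    ```python
    # Hedged sketch: BDT classification of light-curve features, evaluated by AUC.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    X = np.random.rand(1000, 20)            # placeholder per-supernova feature vectors
    y = np.random.randint(0, 2, 1000)       # 1 = type Ia, 0 = non-Ia (placeholder)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    bdt = GradientBoostingClassifier(n_estimators=200).fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1]))
    ```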

  7. Statistical Software and Artificial Intelligence: A Watershed in Applications Programming.

    ERIC Educational Resources Information Center

    Pickett, John C.

    1984-01-01

    AUTOBJ and AUTOBOX are revolutionary software programs which contain the first application of artificial intelligence to statistical procedures used in analysis of time series data. The artificial intelligence included in the programs and program features are discussed. (JN)

  8. Mobile robots exploration through cnn-based reinforcement learning.

    PubMed

    Tai, Lei; Liu, Ming

    2016-01-01

    Exploration in an unknown environment is an elemental application for mobile robots. In this paper, we outlined a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model took the depth image from an RGB-D sensor as its only input. The feature representation of the depth image was extracted through a pre-trained convolutional neural network model. Based on the recent success of the deep Q-network in artificial intelligence, the robot controller achieved exploration and obstacle-avoidance abilities in several different simulated environments. It is the first time that reinforcement learning has been used to build an exploration strategy for mobile robots from raw sensor information.

  9. Climbing with adhesion: from bioinspiration to biounderstanding

    PubMed Central

    Cutkosky, Mark R.

    2015-01-01

    Bioinspiration is an increasingly popular design paradigm, especially as robots venture out of the laboratory and into the world. Animals are adept at coping with the variability that the world imposes. With advances in scientific tools for understanding biological structures in detail, we are increasingly able to identify design features that account for animals' robust performance. In parallel, advances in fabrication methods and materials are allowing us to engineer artificial structures with similar properties. The resulting robots become useful platforms for testing hypotheses about which principles are most important. Taking gecko-inspired climbing as an example, we show that the process of extracting principles from animals and adapting them to robots provides insights for both robotics and biology. PMID:26464786

  10. Visualization of suspicious lesions in breast MRI based on intelligent neural systems

    NASA Astrophysics Data System (ADS)

    Twellmann, Thorsten; Lange, Oliver; Nattkemper, Tim Wilhelm; Meyer-Bäse, Anke

    2006-05-01

    Intelligent medical systems based on supervised and unsupervised artificial neural networks are applied to the automatic visualization and classification of suspicious lesions in breast MRI. These systems represent an important component of future sophisticated computer-aided diagnosis systems and enable the extraction of spatial and temporal features of dynamic MRI data stemming from patients with confirmed lesion diagnosis. By taking into account the heterogeneity of the cancerous tissue, these techniques reveal the malignant, benign and normal kinetic signals and provide a regional subclassification of pathological breast tissue. Intelligent medical systems are expected to have substantial implications for healthcare politics by contributing to the diagnosis of indeterminate breast lesions by non-invasive imaging.

  11. [Medical imaging in tumor precision medicine: opportunities and challenges].

    PubMed

    Xu, Jingjing; Tan, Yanbin; Zhang, Minming

    2017-05-25

    Tumor precision medicine is an emerging approach to tumor diagnosis, treatment and prevention that takes into account individual variability in environment, lifestyle and genetic information. Tumor precision medicine is built upon the medical imaging innovations developed during the past decades, including new hardware, new imaging agents, standardized protocols, image analysis and multimodal imaging fusion technology. The development of automated and reproducible analysis algorithms has also made it possible to extract large amounts of information from image-based features. With the continuous development and mining of tumor clinical and imaging databases, radiogenomics, radiomics and artificial intelligence have been flourishing. These new technological advances therefore bring new opportunities and challenges to the application of imaging in tumor precision medicine.

  12. Stability Test of Partially Purified Bromelain from Pineapple (Ananas comosus (L.) Merr) Core Extract in Artificial Stomach Fluid

    NASA Astrophysics Data System (ADS)

    Setiasih, S.; Adimas, A. Ch. D.; Dzikria, V.; Hudiyono, S.

    2018-01-01

    This study aimed to isolate and purify bromelain from pineapple core (Ananas comosus (L.) Merr), accompanied by a stability test of its enzyme activity in artificial gastric juice. The purification steps started with fractionation by stepwise precipitation using several concentrations of ammonium sulfate, followed by a dialysis process and ion-exchange chromatography on a DEAE-cellulose column. Each purification step increased the specific activity of the enzyme fraction, starting from the crude extract: 0.276 U/mg, 14.591 U/mg and 16.05 U/mg, respectively. The bromelain fraction with the highest level of purity was obtained from the 50-80% ammonium sulphate fraction after dialysis, being 58.15 times purer than the crude extract. Further purification of the enzyme on the DEAE-cellulose column produced bromelain with a purity level 160-fold that of the crude enzyme. In the stability test in artificial gastric juice, assessed by a milk-clotting assay, the bromelain fraction retained proteolytic activity in clotting the milk substrate. The highest proteolytic activity of the core bromelain exposed to artificial gastric juice was achieved at an estimated volume of 0.4-0.5 mL. Over the reaction time, exposure to artificial gastric juice containing pepsin showed relatively stable proteolytic activity during the first 4 hours.

  13. Artificial Life in Quantum Technologies

    PubMed Central

    Alvarez-Rodriguez, Unai; Sanz, Mikel; Lamata, Lucas; Solano, Enrique

    2016-01-01

    We develop a quantum information protocol that models the biological behaviours of individuals living in a natural selection scenario. The artificially engineered evolution of the quantum living units shows the fundamental features of life in a common environment, such as self-replication, mutation, interaction of individuals, and death. We propose how to mimic these bio-inspired features in a quantum-mechanical formalism, which allows for an experimental implementation achievable with current quantum platforms. This study paves the way for the realization of artificial life and embodied evolution with quantum technologies. PMID:26853918

  14. Supervised neural network classification of pre-sliced cooked pork ham images using quaternionic singular values.

    PubMed

    Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul

    2010-03-01

    The quaternionic singular value decomposition is a technique to decompose a quaternion matrix (representation of a colour image) into quaternion singular vector and singular value component matrices exposing useful properties. The objective of this study was to use a small portion of uncorrelated singular values, as robust features for the classification of sliced pork ham images, using a supervised artificial neural network classifier. Images were acquired from four qualities of sliced cooked pork ham typically consumed in Ireland (90 slices per quality), having similar appearances. Mahalanobis distances and Pearson product moment correlations were used for feature selection. Six highly discriminating features were used as input to train the neural network. An adaptive feedforward multilayer perceptron classifier was employed to obtain a suitable mapping from the input dataset. The overall correct classification performance for the training, validation and test sets was 90.3%, 94.4%, and 86.1%, respectively. The results confirm that the classification performance was satisfactory. Extracting the most informative features led to the recognition of a set of different but visually quite similar textural patterns based on quaternionic singular values. Copyright 2009 Elsevier Ltd. All rights reserved.

  15. Luminance sticker based facial expression recognition using discrete wavelet transform for physically disabled persons.

    PubMed

    Nagarajan, R; Hariharan, M; Satiyan, M

    2012-08-01

    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with their different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expressions and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of each wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
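    A minimal PyWavelets sketch of this kind of feature is given below, reading the abstract as the standard deviation of first-level detail coefficients per wavelet order; the signal (a tracked sticker's luminance trace) and the particular wavelet orders are placeholders.

    ```python
    # Hedged sketch: per-wavelet standard deviation of first-level DWT coefficients.
    import numpy as np
    import pywt

    def first_level_std_features(signal, wavelets=("db4", "coif3", "sym5")):
        feats = []
        for w in wavelets:
            _, detail = pywt.dwt(signal, w)        # single-level DWT (approx., detail)
            feats.append(np.std(detail))           # one feature per wavelet order
        return np.array(feats)

    trace = np.random.randn(512)                   # placeholder luminance signal
    print(first_level_std_features(trace))
    ```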

  16. Optical coherence tomography used for internal biometrics

    NASA Astrophysics Data System (ADS)

    Chang, Shoude; Sherif, Sherif; Mao, Youxin; Flueraru, Costel

    2007-06-01

    Traditional biometric technologies used for security and person identification essentially deal with fingerprints, hand geometry and face images. However, because all these technologies use external features of human body, they can be easily fooled and tampered with by distorting, modifying or counterfeiting these features. Nowadays, internal biometrics which detects the internal ID features of an object is becoming increasingly important. Being capable of exploring under-skin structure, optical coherence tomography (OCT) system can be used as a powerful tool for internal biometrics. We have applied fiber-optic and full-field OCT systems to detect the multiple-layer 2D images and 3D profile of the fingerprints, which eventually result in a higher discrimination than the traditional 2D recognition methods. More importantly, the OCT based fingerprint recognition has the ability to easily distinguish artificial fingerprint dummies by analyzing the extracted layered surfaces. Experiments show that our OCT systems successfully detected the dummy, which was made of plasticene and was used to bypass the commercially available fingerprint scanning system with a false accept rate (FAR) of 100%.

  17. An Interpretable Machine Learning Model for Accurate Prediction of Sepsis in the ICU.

    PubMed

    Nemati, Shamim; Holder, Andre; Razmi, Fereshteh; Stanley, Matthew D; Clifford, Gari D; Buchman, Timothy G

    2018-04-01

    Sepsis is among the leading causes of morbidity, mortality, and cost overruns in critically ill patients. Early intervention with antibiotics improves survival in septic patients. However, no clinically validated system exists for real-time prediction of sepsis onset. We aimed to develop and validate an Artificial Intelligence Sepsis Expert algorithm for early prediction of sepsis. Observational cohort study. Academic medical center from January 2013 to December 2015. Over 31,000 admissions to the ICUs at two Emory University hospitals (development cohort), in addition to over 52,000 ICU patients from the publicly available Medical Information Mart for Intensive Care-III ICU database (validation cohort). Patients who met the Third International Consensus Definitions for Sepsis (Sepsis-3) prior to or within 4 hours of their ICU admission were excluded, resulting in roughly 27,000 and 42,000 patients within our development and validation cohorts, respectively. None. High-resolution vital signs time series and electronic medical record data were extracted. A set of 65 features (variables) was calculated on an hourly basis and passed to the Artificial Intelligence Sepsis Expert algorithm to predict onset of sepsis in the following T hours (where T = 12, 8, 6, or 4). Artificial Intelligence Sepsis Expert was used to predict onset of sepsis in the following T hours and to produce a list of the most significant contributing factors. For the 12-, 8-, 6-, and 4-hour ahead prediction of sepsis, Artificial Intelligence Sepsis Expert achieved an area under the receiver operating characteristic curve in the range of 0.83-0.85. Performance of the Artificial Intelligence Sepsis Expert on the development and validation cohorts was indistinguishable. Using data available in the ICU in real-time, Artificial Intelligence Sepsis Expert can accurately predict the onset of sepsis in an ICU patient 4-12 hours prior to clinical recognition. A prospective study is necessary to determine the clinical utility of the proposed sepsis prediction model.
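
    The sketch below is not the Artificial Intelligence Sepsis Expert algorithm; it only illustrates, under stated assumptions, how hourly feature vectors and onset-within-T-hours labels could be scored with a generic classifier and summarized by the area under the ROC curve using scikit-learn. All data here are random placeholders.

```python
# Hedged sketch: score hourly feature vectors with a simple classifier and
# report the area under the ROC curve. X has one row per patient-hour (65
# features) and y[i] == 1 when sepsis onset occurs within the next T hours.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 65))                 # placeholder hourly features
y = rng.integers(0, 2, size=5000)               # placeholder onset labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```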

  18. A New Experiment on Bengali Character Recognition

    NASA Astrophysics Data System (ADS)

    Barman, Sumana; Bhattacharyya, Debnath; Jeon, Seung-Whan; Kim, Tai-Hoon; Kim, Haeng-Kon

    This paper presents a method that uses a view-based approach in a Bangla Optical Character Recognition (OCR) system, providing a reduced data set to the ANN classification engine rather than the traditional OCR methods. It describes how Bangla characters are processed, trained and then recognized with the use of a backpropagation artificial neural network. This is the first published account of using a segmentation-free optical character recognition system for Bangla using a view-based approach. The methodology presented here assumes that the OCR pre-processor has presented the input images to the classification engine described here. The size and the font face used to render the characters are also significant in both training and classification. The images are first converted into greyscale and then to binary images; these images are then scaled to fit a pre-determined area with a fixed but significant number of pixels. The feature vectors are then formed by extracting the characteristic points, which in this case is simply a series of 0s and 1s of fixed length. Finally, an artificial neural network is chosen for the training and classification process.
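
    A hedged sketch of the described pre-processing, assuming Pillow and NumPy are available: greyscale conversion, binarization, scaling to a fixed area, and flattening into a fixed-length 0/1 vector. The 16x16 target size, threshold, and file name are assumptions.

```python
# Greyscale -> binary -> fixed-size scaling -> flattened 0/1 vector, ready to
# feed a backpropagation classifier (a sketch, not the authors' exact pipeline).
import numpy as np
from PIL import Image

def character_vector(path: str, size=(16, 16), threshold: int = 128) -> np.ndarray:
    img = Image.open(path).convert("L")                       # greyscale
    img = img.resize(size)                                    # pre-determined area
    binary = (np.asarray(img) < threshold).astype(np.uint8)   # dark pixels -> 1
    return binary.flatten()                                   # fixed-length 0/1 series

# vec = character_vector("bangla_char.png")   # hypothetical input image
```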

  19. Automatic voice recognition using traditional and artificial neural network approaches

    NASA Technical Reports Server (NTRS)

    Botros, Nazeih M.

    1989-01-01

    The main objective of this research is to develop an algorithm for isolated-word recognition. This research is focused on digital signal analysis rather than linguistic analysis of speech. Feature extraction is carried out by applying a Linear Predictive Coding (LPC) algorithm with an order of 10. Continuous-word and speaker independent recognition will be considered in future study after accomplishing this isolated word research. To examine the similarity between the reference and the training sets, two approaches are explored. The first is implementing traditional pattern recognition techniques where a dynamic time warping algorithm is applied to align the two sets and calculate the probability of matching by measuring the Euclidean distance between the two sets. The second is implementing a backpropagation artificial neural net model with three layers as the pattern classifier. The adaptation rule implemented in this network is the generalized least mean square (LMS) rule. The first approach has been accomplished. A vocabulary of 50 words was selected and tested. The accuracy of the algorithm was found to be around 85 percent. The second approach is in progress at the present time.
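
    The first approach rests on dynamic time warping with a Euclidean local distance; a minimal NumPy sketch is given below, assuming each utterance is a sequence of 10th-order LPC feature vectors. It is a plain textbook DTW, not the authors' exact implementation.

```python
# Dynamic time warping between two LPC feature trajectories; returns the
# accumulated Euclidean alignment cost used for matching.
import numpy as np

def dtw_distance(ref: np.ndarray, test: np.ndarray) -> float:
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])     # local Euclidean distance
            cost[i, j] = d + min(cost[i - 1, j],              # insertion
                                 cost[i, j - 1],              # deletion
                                 cost[i - 1, j - 1])          # match
    return cost[n, m]

# toy usage: two random LPC trajectories (frames x 10 coefficients)
a, b = np.random.rand(40, 10), np.random.rand(55, 10)
print(dtw_distance(a, b))
```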

  20. Pattern Recognition Using Artificial Neural Network: A Review

    NASA Astrophysics Data System (ADS)

    Kim, Tai-Hoon

    Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, artificial neural network techniques have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system using ANN and identify research topics and applications which are at the forefront of this exciting and challenging field.

  1. Heart murmur detection based on wavelet transformation and a synergy between artificial neural network and modified neighbor annealing methods.

    PubMed

    Eslamizadeh, Gholamhossein; Barati, Ramin

    2017-05-01

    Early recognition of heart disease plays a vital role in saving lives. Heart murmurs are one of the common heart problems. In this study, an Artificial Neural Network (ANN) is trained with Modified Neighbor Annealing (MNA) to classify heart cycles into normal and murmur classes. Heart cycles are separated from heart sounds using a wavelet transform. The network inputs are features extracted from individual heart cycles, and the network has two classification outputs. Classification accuracy of the proposed model is compared with five multilayer perceptrons trained with Levenberg-Marquardt, Extreme-learning-machine, back-propagation, simulated-annealing, and neighbor-annealing algorithms. It is also compared with a Self-Organizing Map (SOM) ANN. The proposed model is trained and tested using real heart sounds available in the Pascal database to show the applicability of the proposed scheme. Also, a device to record real heart sounds has been developed and used for comparison purposes. Based on the results of this study, MNA can be used to produce considerable results as a heart cycle classifier. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Drill wear monitoring in cortical bone drilling.

    PubMed

    Staroveski, Tomislav; Brezak, Danko; Udiljak, Toma

    2015-06-01

    Medical drills are subject to intensive wear due to mechanical factors which occur during the bone drilling process, and potential thermal and chemical factors related to the sterilisation process. Intensive wear increases friction between the drill and the surrounding bone tissue, resulting in higher drilling temperatures and cutting forces. Therefore, the goal of this experimental research was to develop a drill wear classification model based on a multi-sensor approach and an artificial neural network algorithm. A required set of tool wear features was extracted from the following three types of signals: cutting forces, servomotor drive currents and acoustic emission. Their capacity to classify precisely one of three predefined drill wear levels has been established using a pattern recognition type of the Radial Basis Function Neural Network algorithm. Experiments were performed on a custom-made test bed system using fresh bovine bones and standard medical drills. Results have shown a high classification success rate, together with the model robustness and insensitivity to variations of bone mechanical properties. Features extracted from acoustic emission and servomotor drive signals achieved the highest precision in drill wear level classification (92.8%), thus indicating their potential in the design of a new type of medical drilling machine with process monitoring capabilities. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  3. Intelligent MRTD testing for thermal imaging system using ANN

    NASA Astrophysics Data System (ADS)

    Sun, Junyue; Ma, Dongmei

    2006-01-01

    The Minimum Resolvable Temperature Difference (MRTD) is the most widely accepted figure of merit for describing the performance of a thermal imaging system, and many models have been proposed to predict it. MRTD testing is a psychophysical task in which observer biases are unavoidable. It requires laboratory conditions such as controlled air conditions and a constant temperature, needs expensive measuring equipment, and takes a considerable period of time, especially when measuring imagers of the same type. An automated and intelligent measurement method is therefore desirable. This paper adopts the concept of automated MRTD testing previously explored with the boundary contour system and fuzzy ARTMAP, but uses different methods: it describes an automated MRTD testing procedure based on a back-propagation network. Firstly, a frame grabber is used to capture the 4-bar target image data. Then, according to the image grey scale, the image is segmented to locate the 4-bar pattern, and a feature vector representing the image characteristics and human detection ability is extracted. These feature sets, along with the known target visibility, are used to train the artificial neural network (ANN); in effect, this is a nonlinear classification (over the input dimensions) of the image series using the ANN. The task is to judge whether an image is resolvable or uncertain, after which the trained ANN emulates observer performance in determining the MRTD. This method can reduce inter-observer uncertainties and the influence of long-term, time-dependent factors through standardization. The paper introduces the feature extraction algorithm, demonstrates the feasibility of the whole process and gives the accuracy of the MRTD measurement.

  4. A novel approach for dimension reduction of microarray.

    PubMed

    Aziz, Rabia; Verma, C K; Srivastava, Namita

    2017-12-01

    This paper proposes a new hybrid search technique for feature (gene) selection (FS) using Independent Component Analysis (ICA) and the Artificial Bee Colony (ABC) algorithm, called ICA+ABC, to select informative genes based on a Naïve Bayes (NB) classifier. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC is a hybrid search algorithm that combines the benefits of an extraction approach, to reduce the size of the data, and a wrapper approach, to optimize the reduced feature vectors. The technique is evaluated on six standard gene expression classification datasets. Extensive experiments were conducted to compare the performance of ICA+ABC with the results obtained from the recently published Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To further assess how ICA+ABC performs as a feature selection method with the NB classifier, it was also compared with combinations of ICA and popular filter techniques and with other similar bio-inspired algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that significantly improve the classification accuracy of the NB classifier compared to previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
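
    The sketch below covers only the extraction and classification stages under stated assumptions (the ABC wrapper search is omitted): scikit-learn's FastICA reduces a placeholder expression matrix and a Gaussian Naive Bayes classifier is cross-validated on the reduced features; the component count and data shapes are illustrative.

```python
# ICA feature extraction followed by Naive Bayes classification; the ABC
# wrapper optimization described in the paper is not implemented here.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(62, 2000))                  # placeholder: samples x genes
y = rng.integers(0, 2, size=62)                  # placeholder class labels

ica = FastICA(n_components=20, random_state=0)   # component count is an assumption
X_ica = ica.fit_transform(X)
print(cross_val_score(GaussianNB(), X_ica, y, cv=5).mean())
```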

  5. Artificial neural networks for acoustic target recognition

    NASA Astrophysics Data System (ADS)

    Robertson, James A.; Mossing, John C.; Weber, Bruce A.

    1995-04-01

    Acoustic sensors can be used to detect, track and identify non-line-of-sight targets passively. Attempts to alter acoustic emissions often result in an undesirable performance degradation. This research project investigates the use of neural networks for differentiating between features extracted from the acoustic signatures of sources. Acoustic data were filtered and digitized using a commercially available analog-digital convertor. The digital data was transformed to the frequency domain for additional processing using the FFT. Narrowband peak detection algorithms were incorporated to select peaks above a user defined SNR. These peaks were then used to generate a set of robust features which relate specifically to target components in varying background conditions. The features were then used as input into a backpropagation neural network. A K-means unsupervised clustering algorithm was used to determine the natural clustering of the observations. Comparisons between a feature set consisting of the normalized amplitudes of the first 250 frequency bins of the power spectrum and a set of 11 harmonically related features were made. Initial results indicate that even though some different target types had a tendency to group in the same clusters, the neural network was able to differentiate the targets. Successful identification of acoustic sources under varying operational conditions with high confidence levels was achieved.
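
    A hedged sketch of the narrowband feature step, assuming NumPy and SciPy: the acoustic frame is transformed with an FFT and spectral peaks above an SNR-like threshold over the median noise floor are kept as candidate features. The threshold definition and toy signal are assumptions, not the authors' exact peak detector.

```python
# FFT the windowed acoustic frame, then keep spectral peaks exceeding an
# SNR-like threshold above the median noise floor.
import numpy as np
from scipy.signal import find_peaks

def narrowband_peaks(frame: np.ndarray, fs: float, snr_db: float = 10.0):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    noise_floor = np.median(spectrum)
    idx, _ = find_peaks(spectrum, height=noise_floor * 10 ** (snr_db / 20.0))
    return freqs[idx], spectrum[idx]              # candidate harmonic features

# toy usage: 100 Hz tone plus noise sampled at 8 kHz
t = np.arange(4096) / 8000.0
x = np.sin(2 * np.pi * 100 * t) + 0.2 * np.random.randn(t.size)
print(narrowband_peaks(x, 8000.0)[0][:5])
```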

  6. Health Risk Assessment of Lead Ingestion Exposure by Particle Sizes in Crumb Rubber on Artificial Turf Considering Bioavailability

    PubMed Central

    Kim, Sunduk; Yang, Ji-Yeon; Kim, Ho-Hyun; Yeo, In-Young; Shin, Dong-Chun

    2012-01-01

    Objectives The purpose of this study was to assess the risk of lead ingestion exposure by particle size of crumb rubber used as artificial turf filling material, with consideration of bioavailability. Methods This study estimated the ingestion exposure by particle size (more than 250 µm or less than 250 µm), focusing on recyclable ethylene propylene diene monomer crumb rubber used as artificial turf filling. The crumb rubber was analysed using a body ingestion exposure estimate method that reflects the total content test method, the acid extraction method and the digestion extraction method. Bioavailability, as a calibrating factor, was incorporated into the ingestion exposure estimate method and applied in the exposure and risk assessments. The two methods using acid extraction and digestion extraction concentrations were compared and evaluated. Results Based on the digestion extraction results for crumb rubber, the average lead ingestion exposure was calculated to be 1.56×10⁻⁴ mg/kg-day for lower-grade elementary school students and 4.87×10⁻⁵ mg/kg-day for middle and high school students for particle sizes of 250 µm or less, while the exposure based on the acid extraction results was higher than that based on the digestion extraction results. For both digestion extraction and acid extraction, the hazard quotient was estimated to be about two times higher for particle sizes below 250 µm than for those above 250 µm. In one case, the hazard quotient for an elementary school student exceeded 0.1. Conclusions The results of this study confirm that lead ingestion exposure and the risk level increase as the particle size of the crumb rubber gets smaller. PMID:22355803

  7. Knowledge Representation Of CT Scans Of The Head

    NASA Astrophysics Data System (ADS)

    Ackerman, Laurens V.; Burke, M. W.; Rada, Roy

    1984-06-01

    We have been investigating diagnostic knowledge models which assist in the automatic classification of medical images by combining information extracted from each image with knowledge specific to that class of images. In a more general sense we are trying to integrate verbal and pictorial descriptions of disease via representations of knowledge, study automatic hypothesis generation as related to clinical medicine, evolve new mathematical image measures while integrating them into the total diagnostic process, and investigate ways to augment the knowledge of the physician. Specifically, we have constructed an artificial intelligence knowledge model using the technique of a production system blending pictorial and verbal knowledge about the respective CT scan and patient history. It is an attempt to tie together different sources of knowledge representation, picture feature extraction and hypothesis generation. Our knowledge reasoning and representation system (KRRS) works with data at the conscious reasoning level of the practicing physician while at the visual perceptional level we are building another production system, the picture parameter extractor (PPE). This paper describes KRRS and its relationship to PPE.

  8. Rapid Phenotyping of Root Systems of Brachypodium Plants Using X-ray Computed Tomography: a Comparative Study of Soil Types and Segmentation Tools

    NASA Astrophysics Data System (ADS)

    Varga, T.; McKinney, A. L.; Bingham, E.; Handakumbura, P. P.; Jansson, C.

    2017-12-01

    Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as in processes with important implications to farming and thus human food supply. X-ray computed tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. Selected Brachypodium distachyon phenotypes were grown in both natural and artificial soil mixes. The specimens were imaged by XCT, and the root architectures were extracted from the data using three different software-based methods; RooTrak, ImageJ-based WEKA segmentation, and the segmentation feature in VG Studio MAX. The 3D root image was successfully segmented at 30 µm resolution by all three methods. In this presentation, ease of segmentation and the accuracy of the extracted quantitative information (root volume and surface area) will be compared between soil types and segmentation methods. The best route to easy and accurate segmentation and root analysis will be highlighted.

  9. A novel feature extraction approach for microarray data based on multi-algorithm fusion

    PubMed Central

    Jiang, Zhu; Xu, Rong

    2015-01-01

    Feature extraction is one of the most important and effective methods for reducing dimensionality in data mining, particularly with the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, without considering inter-relationships between features in general, while set-based feature extraction evaluates features based on their role in a feature set by taking into account dependency between features. Like learning methods, feature extraction faces a generalization problem, namely robustness; however, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested against gene expression datasets including Colon cancer data, CNS data, DLBCL data, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions. PMID:25780277

  10. A novel feature extraction approach for microarray data based on multi-algorithm fusion.

    PubMed

    Jiang, Zhu; Xu, Rong

    2015-01-01

    Feature extraction is one of the most important and effective methods for reducing dimensionality in data mining, particularly with the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are evaluated on an individual basis, without considering inter-relationships between features in general, while set-based feature extraction evaluates features based on their role in a feature set by taking into account dependency between features. Like learning methods, feature extraction faces a generalization problem, namely robustness; however, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested against gene expression datasets including Colon cancer data, CNS data, DLBCL data, and Leukemia data. The testing results show that the performance of this algorithm is better than existing solutions.

  11. In vitro remineralization effects of grape seed extract on artificial root caries.

    PubMed

    Xie, Qian; Bedran-Russo, Ana Karina; Wu, Christine D

    2008-11-01

    Grape seed extract (GSE) contains proanthocyanidins (PA), which have been reported to strengthen collagen-based tissues by increasing collagen cross-links. We used an in vitro pH-cycling model to evaluate the effect of GSE on the remineralization of artificial root caries. Sound human teeth fragments obtained from the cervical portion of the root were stored in a demineralization solution for 96 h at 37 degrees C to induce artificial root caries lesions. The fragments were then divided into three treatment groups: 6.5% GSE, 1,000 ppm fluoride (NaF), and a control (no treatment). The demineralized samples were pH-cycled through treatment solutions, acidic buffer and neutral buffer for 8 days at 6 cycles per day. The samples were subsequently evaluated using a microhardness tester, polarized light microscopy (PLM) and confocal laser scanning microscopy (CLSM). Data were analyzed using ANOVA and Fisher's tests (p<0.05). GSE and fluoride significantly increased the microhardness of the lesions (p<0.05) when compared to the control group. PLM data revealed a significantly thicker mineral precipitation band on the surface layer of the GSE-treated lesions when compared to the other groups (p>0.05), which was confirmed by CLSM. We concluded that grape seed extract positively affects the demineralization and/or remineralization processes of artificial root caries lesions, most likely through a different mechanism than that of fluoride. Grape seed extract may be a promising natural agent for non-invasive root caries therapy.

  12. Practical and theoretical characterization of Inga laurina Kunitz inhibitor on the control of Homalinotus coriaceus.

    PubMed

    Macedo, Maria Lígia Rodrigues; Freire, Maria das Graças Machado; Franco, Octávio Luiz; Migliolo, Ludovico; de Oliveira, Caio Fernando Ramalho

    2011-02-01

    Digestive endoprotease activities of the coconut palm weevil, Homalinotus coriaceus (Coleoptera: Curculionidae), were characterized based on the ability of gut extracts to hydrolyze specific synthetic substrates, optimal pH, and hydrolysis sensitivity to protease inhibitors. Trypsin-like proteinases were the major enzymes for H. coriaceus, with minor activity by chymotrypsin proteinases. More importantly, gut proteinases of H. coriaceus were inhibited by the trypsin inhibitor from Inga laurina seeds. In addition, a serine proteinase inhibitor from I. laurina seeds demonstrated significant reduction of growth of H. coriaceus larvae after feeding on inhibitor-incorporated artificial diets. Dietary utilization experiments show that 0.05% I. laurina trypsin inhibitor, incorporated into an artificial diet, decreases the consumption rate and fecal production of H. coriaceus larvae. We have constructed a three-dimensional model of the trypsin inhibitor complexed with trypsin. The model was built based on its comparative homology with soybean trypsin inhibitor. The trypsin inhibitor of I. laurina shows structural features characteristic of Kunitz-type trypsin inhibitors. In summary, these findings contribute to the development of biotechnological tools such as transgenic plants with enhanced resistance to insect pests. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Third-Order Spectral Techniques for the Diagnosis of Motor Bearing Condition Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Yang, D.-M.; Stronach, A. F.; MacConnell, P.; Penman, J.

    2002-03-01

    This paper addresses the development of a novel condition monitoring procedure for rolling element bearings which involves a combination of signal processing, signal analysis and artificial intelligence methods. Seven approaches based on power spectrum, bispectral and bicoherence vibration analyses are investigated as signal pre-processing techniques for application in the diagnosis of a number of induction motor rolling element bearing conditions. The bearing conditions considered are a normal bearing and bearings with cage and inner and outer race faults. The vibration analysis methods investigated are based on the power spectrum, the bispectrum, the bicoherence, the bispectrum diagonal slice, the bicoherence diagonal slice, the summed bispectrum and the summed bicoherence. Selected features are extracted from the vibration signatures so obtained and these are used as inputs to an artificial neural network trained to identify the bearing conditions. Quadratic phase coupling (QPC), examined using the magnitude of bispectrum and bicoherence and biphase, is shown to be absent from the bearing system and it is therefore concluded that the structure of the bearing vibration signatures results from inter-modulation effects. In order to test the proposed procedure, experimental data from a bearing test rig are used to develop an example diagnostic system. Results show that the bearing conditions examined can be diagnosed with a high success rate, particularly when using the summed bispectrum signatures.
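
    As an illustration of one of the pre-processing signatures, the sketch below gives a direct, segment-averaged estimate of the bispectrum diagonal slice B(f, f) = E[X(f) X(f) X*(2f)] in NumPy; windowing, normalisation and segment length are simplified assumptions rather than the authors' exact estimator.

```python
# Segment-averaged estimate of the bispectrum diagonal slice of a vibration
# signal; details such as windowing and normalisation are simplified.
import numpy as np

def bispectrum_diagonal(x: np.ndarray, seg_len: int = 256) -> np.ndarray:
    n_seg = len(x) // seg_len
    acc = np.zeros(seg_len // 4, dtype=complex)
    for s in range(n_seg):
        seg = x[s * seg_len:(s + 1) * seg_len]
        X = np.fft.fft(seg * np.hanning(seg_len))
        k = np.arange(seg_len // 4)              # keep 2k within the spectrum
        acc += X[k] * X[k] * np.conj(X[2 * k])   # B(f, f) contribution per segment
    return np.abs(acc) / n_seg

# toy usage on a vibration-like signal with a harmonic pair
t = np.arange(8192) / 1000.0
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 100 * t) + 0.1 * np.random.randn(t.size)
print(bispectrum_diagonal(x)[:10])
```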

  14. Pressurized liquid extraction-gas chromatography-mass spectrometry for confirming the photo-induced generation of dioxin-like derivatives and other cosmetic preservative photoproducts on artificial skin.

    PubMed

    Alvarez-Rivera, Gerardo; Llompart, Maria; Garcia-Jares, Carmen; Lores, Marta

    2016-04-01

    The stability and photochemical transformations of cosmetic preservatives in topical applications exposed to UV light is a serious but poorly understood problem. In this study, a high-throughput, selective extraction method based on pressurized liquid extraction (PLE) coupled to gas chromatography-mass spectrometry (GC-MS) was validated and applied to investigate the photochemical transformation of the antioxidant butylated hydroxytoluene (BHT), as well as the antimicrobials triclosan (TCS) and phenyl benzoate (PhBz), in an artificial skin model. Two sets of photodegradation experiments were performed: (i) UV irradiation (8 W, 254 nm) of artificial skin directly spiked with the target preservatives, and (ii) UV irradiation of artificial skin after the application of a cosmetic cream fortified with the target compounds. After irradiation, PLE was used to isolate the target preservatives and their transformation products. The follow-up of the photodegradation kinetics of the parent preservatives, the identification of the arising by-products, and the monitoring of their kinetic profiles were performed by GC-MS. The photochemical transformation of triclosan into 2,8-dichloro-dibenzo-p-dioxin (2,8-DCDD) and other dioxin-like photoproducts has been confirmed in this work. Furthermore, seven BHT photoproducts, and three benzophenones as PhBz by-products, have also been identified. These findings provide the first evidence of the phototransformation of cosmetic ingredients into unwanted photoproducts on an artificial skin model. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. A comparative study of the antacid effect of raw spinach juice and spinach extract in an artificial stomach model.

    PubMed

    Panda, Vandana Sanjeev; Shinde, Priyanka Mangesh

    2016-12-01

    Background: Spinacia oleracea, known as spinach, is a green leafy vegetable consumed by people across the globe. It is reported to possess potent medicinal properties by virtue of its numerous antioxidant phytoconstituents, together termed the natural antioxidant mixture (NAO). The present study compares the antacid effect of raw spinach juice with an antioxidant-rich methanolic extract of spinach (NAOE) in an artificial stomach model. Methods: The pH of NAOE at various concentrations (50, 100 and 200 mg/mL) and its neutralizing effect on artificial gastric acid were determined and compared with those of raw spinach juice, water, the active control sodium bicarbonate (SB) and a marketed antacid preparation ENO. A modified model of Vatier's artificial stomach was used to determine the duration of consistent neutralization of artificial gastric acid for the test compounds. The neutralizing capacity of the test compounds was determined in vitro using the classical titration method of Fordtran. Results: NAOE (50, 100 and 200 mg/mL), spinach juice, SB and ENO showed a significantly better acid-neutralizing effect, consistent duration of neutralization and higher antacid capacity when compared with water. The highest antacid activity was demonstrated by ENO and SB, followed by spinach juice and NAOE200. Spinach juice exhibited an effect comparable to NAOE (200 mg/mL). Conclusions: Thus, it may be concluded that spinach displays significant antacid activity, whether in the raw juice form or as a methanolic extract.

  16. Variability and robustness of scatterers in HRR/ISAR ground target data and its influence on the ATR performance

    NASA Astrophysics Data System (ADS)

    Schumacher, R.; Schimpf, H.; Schiller, J.

    2011-06-01

    The most challenging problem of Automatic Target Recognition (ATR) is the extraction of robust and independent target features which describe the target unambiguously. These features have to be robust and invariant in different senses: in time, between aspect views (azimuth and elevation angle), between target motions (translation and rotation) and between different target variants. Especially for ground moving targets in military applications an irregular target motion is typical, so that a strong variation of the backscattered radar signal with azimuth and elevation angle makes the extraction of stable and robust features most difficult. For ATR based on High Range Resolution (HRR) profiles and/or Inverse Synthetic Aperture Radar (ISAR) images it is crucial that the reference dataset consists of stable and robust features, which will depend, among other things, on the target aspect and depression angle. Here it is important to find an adequate data grid for efficient data coverage in the reference dataset for ATR. In this paper the variability of the backscattered radar signals of target scattering centers is analyzed for different HRR profiles and ISAR images from measured turntable datasets of ground targets under controlled conditions. In particular, the dependency of the features on the elevation angle is analyzed with regard to the ATR of large strip SAR data covering a wide range of depression angles, using available (I)SAR datasets as reference. In this work the robustness of these scattering centers is analyzed by extracting their amplitude, phase and position. To this end, turntable measurements under controlled conditions were performed on an artificial military reference object called STANDCAM. Measures referring to variability, similarity, robustness and separability of the scattering centers are defined. The dependency of the scattering behaviour with respect to azimuth and elevation variations is analyzed. Additionally, generic types of features (geometrical, statistical), which can be derived especially from (I)SAR images, are applied to the ATR task. Subsequently, the dependence of individual feature values as well as of the feature statistics on aspect (i.e., azimuth and elevation) is presented. The Kolmogorov-Smirnov distance is used to show how the feature statistics are influenced by varying elevation angles. Finally, confusion matrices are computed between the STANDCAM target datasets at all eleven elevation angles. This helps to assess the robustness of ATR performance under the influence of aspect angle deviations between training set and test set.
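
    A small sketch of the distributional comparison mentioned above, assuming SciPy: the two-sample Kolmogorov-Smirnov statistic measures how much a feature's distribution shifts between two elevation angles. The feature samples below are random placeholders.

```python
# Two-sample Kolmogorov-Smirnov distance between a feature's distributions
# observed at two different elevation angles (placeholder samples).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
feature_at_5deg = rng.normal(loc=0.0, scale=1.0, size=500)    # placeholder samples
feature_at_15deg = rng.normal(loc=0.3, scale=1.1, size=500)

stat, p_value = ks_2samp(feature_at_5deg, feature_at_15deg)
print(f"KS distance = {stat:.3f}, p = {p_value:.3g}")
```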

  17. Automated Depression Analysis Using Convolutional Neural Networks from Speech.

    PubMed

    He, Lang; Cao, Cui

    2018-05-28

    To help clinicians efficiently diagnose the severity of a person's depression, the affective computing community and the artificial intelligence field have shown a growing interest in designing automated systems. Speech features carry useful information for the diagnosis of depression; however, manual design and domain knowledge are still required for feature selection, which makes the process labor-intensive and subjective. In recent years, deep-learned features based on neural networks have shown superior performance to hand-crafted features in various areas. In this paper, to overcome the difficulties mentioned above, we propose a combination of hand-crafted and deep-learned features which can effectively measure the severity of depression from speech. In the proposed method, Deep Convolutional Neural Networks (DCNN) are first built to learn deep-learned features from spectrograms and raw speech waveforms. Then we manually extract the state-of-the-art texture descriptors named median robust extended local binary patterns (MRELBP) from spectrograms. To capture the complementary information within the hand-crafted features and deep-learned features, we propose joint fine-tuning layers to combine the raw and spectrogram DCNNs to boost the depression recognition performance. Moreover, to address the problems with small samples, a data augmentation method is proposed. Experiments conducted on the AVEC2013 and AVEC2014 depression databases show that our approach is robust and effective for the diagnosis of depression when compared to state-of-the-art audio-based methods. Copyright © 2018. Published by Elsevier Inc.

  18. A method for velocity signal reconstruction of AFDISAR/PDV based on crazy-climber algorithm

    NASA Astrophysics Data System (ADS)

    Peng, Ying-cheng; Guo, Xian; Xing, Yuan-ding; Chen, Rong; Li, Yan-jie; Bai, Ting

    2017-10-01

    The resolution of the continuous wavelet transform (CWT) varies with frequency. Exploiting this property, the time-frequency content of the coherent signal obtained by the All Fiber Displacement Interferometer System for Any Reflector (AFDISAR) is extracted. The crazy-climber algorithm is adopted to extract the wavelet ridge, from which the velocity history curve of the measured object is obtained. Numerical simulation shows that the reconstructed signal is consistent with the original signal, which verifies the accuracy of the algorithm. The vibration of a loudspeaker and of the free end of a Hopkinson incident bar under impact loading are measured by AFDISAR, the measured coherent signals are processed, and the corresponding velocity signals are reconstructed. Compared with the theoretical calculation, the error in the particle vibration arrival time difference at the free end of the Hopkinson incident bar is 2 μs. The results indicate that the algorithm is highly accurate and adapts well to signals with different time-frequency features. The algorithm overcomes the limitation of manually adjusting the time window according to the signal variation, as required by the STFT, and is suitable for extracting signals measured by AFDISAR.
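
    The sketch below illustrates ridge extraction from a CWT magnitude map using a simple per-column argmax, assuming PyWavelets; it is not the crazy-climber algorithm itself, and the sample rate, scales and toy chirp are assumptions. The ridge frequency versus time approximates the instantaneous beat frequency from which the velocity history would follow.

```python
# CWT of a toy chirp standing in for the interferometer beat signal, followed
# by a naive per-sample argmax ridge (a simplification of ridge extraction).
import numpy as np
import pywt

fs = 1.0e6                                        # assumed sample rate
t = np.arange(4096) / fs
signal = np.cos(2 * np.pi * (5e3 + 2e6 * t) * t)  # toy linear chirp

scales = np.arange(8, 256)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
ridge = np.argmax(np.abs(coeffs), axis=0)         # strongest scale at each instant
inst_freq = freqs[ridge]                          # ridge frequency vs. time
print(inst_freq[:5])
```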

  19. Advanced-technology space station study: Summary of systems and pacing technologies

    NASA Technical Reports Server (NTRS)

    Butterfield, A. J.; Garn, P. A.; King, C. B.; Queijo, M. J.

    1990-01-01

    The principal system features defined for the Advanced Technology Space Station are summarized and the 21 pacing technologies identified during the course of the study are described. The descriptions of system configurations were extracted from four previous study reports. The technological areas focus on those systems particular to all large spacecraft which generate artificial gravity by rotation. The summary includes a listing of the functions, crew requirements and electrical power demand that led to the studied configuration. The pacing technologies include the benefits of advanced materials, in-orbit assembly requirements, stationkeeping, evaluations of electrical power generation alternates, and life support systems. The descriptions of systems show the potential for synergies and identifies the beneficial interactions that can result from technological advances.

  20. Information-driven trade and price-volume relationship in artificial stock markets

    NASA Astrophysics Data System (ADS)

    Liu, Xinghua; Liu, Xin; Liang, Xiaobei

    2015-07-01

    The positive relation between stock price changes and trading volume (price-volume relationship) as a stylized fact has attracted significant interest among finance researchers and investment practitioners. However, until now, consensus has not been reached regarding the causes of the relationship based on real market data because extracting valuable variables (such as information-driven trade volume) from real data is difficult. This lack of general consensus motivates us to develop a simple agent-based computational artificial stock market where extracting the necessary variables is easy. Based on this model and its artificial data, our tests have found that the aggressive trading style of informed agents can produce a price-volume relationship. Therefore, the information spreading process is not a necessary condition for producing price-volume relationship.

  1. Ultrasonically extracted β-d-glucan from artificially cultivated mushroom, characteristic properties and antioxidant activity.

    PubMed

    Alzorqi, Ibrahim; Sudheer, Surya; Lu, Ting-Jang; Manickam, Sivakumar

    2017-03-01

    Ganoderma mushroom, recently cultivated in Malaysia to produce chemically distinct nutritional fibers, has attracted the attention of the local market. The extraction method, molecular weight and degree of branching of (1-3; 1-6)-β-d-glucan polysaccharides are of prime importance in determining their antioxidant bioactivity. Therefore, three extraction methods, i.e. hot water extraction (HWE), Soxhlet extraction (SE) and ultrasound-assisted extraction (US), were employed to study the total content of (1-3; 1-6)-β-d-glucans, degree of branching, structural characteristics, monosaccharide composition, as well as the total yield of polysaccharides that could be obtained from the artificially cultivated Ganoderma. The physical characterization by HPAEC-PAD, HPGPC and FTIR, as well as the antioxidant in vitro assays of DPPH scavenging activity and ferric reducing power (FRAP), indicated that (1-3; 1-6)-β-d-glucans of the Malaysian mushroom have better antioxidant activity, higher molecular weight and an optimal degree of branching when extracted by US in comparison with the conventional methods. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. A post-processing algorithm for time domain pitch trackers

    NASA Astrophysics Data System (ADS)

    Specker, P.

    1983-01-01

    This paper describes a powerful post-processing algorithm for time-domain pitch trackers. On two successive passes, the post-processing algorithm eliminates errors produced during a first pass by a time-domain pitch tracker. During the second pass, incorrect pitch values are detected as outliers by computing the distribution of values over a sliding 80 msec window. During the third pass (based on artificial intelligence techniques), remaining pitch pulses are used as anchor points to reconstruct the pitch train from the original waveform. The algorithm produced a decrease in the error rate from 21% obtained with the original time domain pitch tracker to 2% for isolated words and sentences produced in an office environment by 3 male and 3 female talkers. In a noisy computer room errors decreased from 52% to 2.9% for the same stimuli produced by 2 male talkers. The algorithm is efficient, accurate, and resistant to noise. The fundamental frequency micro-structure is tracked sufficiently well to be used in extracting phonetic features in a feature-based recognition system.
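
    A hedged sketch of the second-pass idea: pitch values that fall far from the local distribution inside a sliding 80 ms window are flagged as outliers for later reconstruction. The frame rate, robust-spread statistic and threshold factor are assumptions, not the original algorithm's exact rules.

```python
# Flag pitch values that deviate strongly from the local distribution inside
# a sliding 80 ms window; flagged frames would be reconstructed in a later pass.
import numpy as np

def flag_outliers(f0: np.ndarray, frame_rate_hz: float = 100.0,
                  window_ms: float = 80.0, k: float = 2.5) -> np.ndarray:
    half = max(1, int(window_ms / 1000.0 * frame_rate_hz) // 2)
    bad = np.zeros(len(f0), dtype=bool)
    for i in range(len(f0)):
        win = f0[max(0, i - half): i + half + 1]
        med = np.median(win)
        mad = np.median(np.abs(win - med)) + 1e-9   # robust spread estimate
        bad[i] = abs(f0[i] - med) > k * mad
    return bad                                      # True = candidate pitch error

# toy usage: smooth contour with two gross errors
f0 = np.full(50, 120.0); f0[10], f0[30] = 240.0, 60.0
print(np.where(flag_outliers(f0))[0])
```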

  3. Neural network classification of myoelectric signal for prosthesis control.

    PubMed

    Kelly, M F; Parker, P A; Scott, R N

    1991-12-01

    An alternate approach to deriving control for multidegree of freedom prosthetic arms is considered. By analyzing a single-channel myoelectric signal (MES), we can extract information that can be used to identify different contraction patterns in the upper arm. These contraction patterns are generated by subjects without previous training and are naturally associated with specific functions. Using a set of normalized MES spectral features, we can identify contraction patterns for four arm functions, specifically extension and flexion of the elbow and pronation and supination of the forearm. Performing identification independent of signal power is advantageous because this can then be used as a means for deriving proportional rate control for a prosthesis. An artificial neural network implementation is applied in the classification task. By using three single-layer perceptron networks, the MES is classified, with the spectral representations as input features. Trials performed on five subjects with normal limbs resulted in an average classification performance level of 85% for the four functions. Copyright © 1991. Published by Elsevier Ltd.

  4. Neurons with two sites of synaptic integration learn invariant representations.

    PubMed

    Körding, K P; König, P

    2001-12-01

    Neurons in mammalian cerebral cortex combine specific responses with respect to some stimulus features with invariant responses to other stimulus features. For example, in primary visual cortex, complex cells code for orientation of a contour but ignore its position to a certain degree. In higher areas, such as the inferotemporal cortex, translation-invariant, rotation-invariant, and even view point-invariant responses can be observed. Such properties are of obvious interest to artificial systems performing tasks like pattern recognition. It remains to be resolved how such response properties develop in biological systems. Here we present an unsupervised learning rule that addresses this problem. It is based on a neuron model with two sites of synaptic integration, allowing qualitatively different effects of input to basal and apical dendritic trees, respectively. Without supervision, the system learns to extract invariance properties using temporal or spatial continuity of stimuli. Furthermore, top-down information can be smoothly integrated in the same framework. Thus, this model lends a physiological implementation to approaches of unsupervised learning of invariant-response properties.

  5. Strong convective storm nowcasting using a hybrid approach of convolutional neural network and hidden Markov model

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Jiang, Ling; Han, Lei

    2018-04-01

    Convective storm nowcasting refers to the prediction of convective weather initiation, development, and decay over a very short term (typically 0-2 h). Despite marked progress over the past years, severe convective storm nowcasting still remains a challenge. With the boom of machine learning, it has been widely applied in various fields, especially the convolutional neural network (CNN). In this paper, we build a severe convective weather nowcasting system based on a CNN and a hidden Markov model (HMM) using reanalysis meteorological data. The goal here is to predict whether a convective storm will occur within 30 min. We compress the VDRAS reanalysis data into low-dimensional observation vectors for the HMM using a CNN, and then obtain the development trend of strong convective weather in the form of a time series. The results show that our method can extract robust features without any manual feature selection and can capture the development trend of strong convective storms.
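
    The sketch below covers only the temporal-modelling stage under stated assumptions, using the third-party hmmlearn package: CNN-compressed frames are replaced by placeholder low-dimensional vectors and a Gaussian HMM is fitted to infer a two-state (convective / clear) sequence. It is not the authors' trained system.

```python
# Gaussian HMM over placeholder low-dimensional observation vectors standing
# in for CNN-compressed reanalysis frames.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
obs = rng.normal(size=(500, 8))                   # placeholder CNN feature sequence

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(obs)                                      # unsupervised state estimation
states = hmm.predict(obs)                         # inferred convective / clear states
print(states[:20])
```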

  6. EEG-Based Computer Aided Diagnosis of Autism Spectrum Disorder Using Wavelet, Entropy, and ANN

    PubMed Central

    AlSharabi, Khalil; Ibrahim, Sutrisno; Alsuwailem, Abdullah

    2017-01-01

    Autism spectrum disorder (ASD) is a type of neurodevelopmental disorder with core impairments in social relationships, communication, imagination, or flexibility of thought, and a restricted repertoire of activities and interests. In this work, a new computer aided diagnosis (CAD) of autism based on electroencephalography (EEG) signal analysis is investigated. The proposed method is based on the discrete wavelet transform (DWT), entropy (En), and an artificial neural network (ANN). DWT is used to decompose EEG signals into approximation and detail coefficients to obtain the EEG subbands. The feature vector is constructed by computing Shannon entropy values from each EEG subband. The ANN classifies the corresponding EEG signal as normal or autistic based on the extracted features. The experimental results show the effectiveness of the proposed method for assisting autism diagnosis. A receiver operating characteristic (ROC) curve metric is used to quantify the performance of the proposed method. The proposed method obtained promising results when tested on a real dataset provided by King Abdulaziz Hospital, Jeddah, Saudi Arabia. PMID:28484720
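
    A minimal sketch of the feature construction, assuming PyWavelets: a multilevel DWT decomposes an EEG channel into subbands and the Shannon entropy of each subband's normalised energy distribution forms the feature vector fed to the ANN. The wavelet ('db4') and decomposition level are assumptions.

```python
# Multilevel DWT of an EEG channel; Shannon entropy per subband forms the
# feature vector (wavelet and level are illustrative choices).
import numpy as np
import pywt

def shannon_entropy(coeffs: np.ndarray) -> float:
    p = coeffs ** 2
    p = p / (p.sum() + 1e-12)                     # normalised energy distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def eeg_feature_vector(channel: np.ndarray, wavelet: str = "db4", level: int = 4):
    subbands = pywt.wavedec(channel, wavelet, level=level)   # [cA4, cD4, ..., cD1]
    return np.array([shannon_entropy(c) for c in subbands])

# toy usage on a synthetic EEG-like trace
x = np.random.randn(1024)
print(eeg_feature_vector(x))
```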

  7. Magnetic Flux Leakage Sensing and Artificial Neural Network Pattern Recognition-Based Automated Damage Detection and Quantification for Wire Rope Non-Destructive Evaluation.

    PubMed

    Kim, Ju-Won; Park, Seunghee

    2018-01-02

    In this study, a magnetic flux leakage (MFL) method, known to be a suitable non-destructive evaluation (NDE) method for continuum ferromagnetic structures, was used to detect local damage when inspecting steel wire ropes. To demonstrate the proposed damage detection method through experiments, a multi-channel MFL sensor head was fabricated using a Hall sensor array and magnetic yokes to adapt to the wire rope. To prepare the damaged wire-rope specimens, several different amounts of artificial damages were inflicted on wire ropes. The MFL sensor head was used to scan the damaged specimens to measure the magnetic flux signals. After obtaining the signals, a series of signal processing steps, including the enveloping process based on the Hilbert transform (HT), was performed to better recognize the MFL signals by reducing the unexpected noise. The enveloped signals were then analyzed for objective damage detection by comparing them with a threshold that was established based on the generalized extreme value (GEV) distribution. The detected MFL signals that exceed the threshold were analyzed quantitatively by extracting the magnetic features from the MFL signals. To improve the quantitative analysis, damage indexes based on the relationship between the enveloped MFL signal and the threshold value were also utilized, along with a general damage index for the MFL method. The detected MFL signals for each damage type were quantified by using the proposed damage indexes and the general damage indexes for the MFL method. Finally, an artificial neural network (ANN) based multi-stage pattern recognition method using extracted multi-scale damage indexes was implemented to automatically estimate the severity of the damage. To analyze the reliability of the MFL-based automated wire rope NDE method, the accuracy and reliability were evaluated by comparing the repeatedly estimated damage size and the actual damage size.
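
    A hedged sketch of the enveloping and thresholding steps, assuming SciPy: the Hilbert-transform envelope of an MFL channel is computed, and a detection threshold is taken from a generalized extreme value (GEV) fit to block maxima of damage-free baseline data. The block size and percentile are assumptions, not the authors' calibration.

```python
# Hilbert-transform envelope of an MFL channel plus a GEV-based detection
# threshold fitted to block maxima of damage-free baseline data.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import genextreme

def envelope(x: np.ndarray) -> np.ndarray:
    return np.abs(hilbert(x))                     # MFL signal envelope

def gev_threshold(baseline: np.ndarray, block: int = 200, q: float = 0.99) -> float:
    env = envelope(baseline)
    maxima = env[: len(env) // block * block].reshape(-1, block).max(axis=1)
    c, loc, scale = genextreme.fit(maxima)
    return float(genextreme.ppf(q, c, loc, scale))

# toy usage: noisy baseline scan vs. a scan with one local defect
baseline = 0.1 * np.random.randn(10000)
scan = baseline.copy(); scan[5000:5050] += 1.0
thr = gev_threshold(baseline)
print("defect samples above threshold:", np.sum(envelope(scan) > thr))
```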

  8. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the use of stereo information for remote sensing classification, this paper proposes a stereo remote sensing feature selection method based on the artificial bee colony algorithm. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and optical characteristics, respectively. Firstly, the three-dimensional structural characteristics can be analyzed by 3D-Zernike descriptors (3DZD). However, different parameters of the 3DZD describe different complexities of the three-dimensional structure, and they need to be optimally selected for the various objects on the ground. Secondly, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features can contain a large amount of redundant information, which may not improve the classification accuracy and may even cause adverse effects. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve it. Experimental results show that the proposed method can effectively improve both the computational efficiency and the classification accuracy.

  9. Automated interpretation of ventilation-perfusion lung scintigrams for the diagnosis of pulmonary embolism using artificial neural networks.

    PubMed

    Holst, H; Aström, K; Järund, A; Palmer, J; Heyden, A; Kahl, F; Tägil, K; Evander, E; Sparr, G; Edenbrandt, L

    2000-04-01

    The purpose of this study was to develop a completely automated method for the interpretation of ventilation-perfusion (V-P) lung scintigrams used in the diagnosis of pulmonary embolism. An artificial neural network was trained for the diagnosis of pulmonary embolism using 18 automatically obtained features from each set of V-P scintigrams. The techniques used to process the images included their alignment to templates, the construction of quotient images based on the ventilation and perfusion images, and the calculation of measures describing V-P mismatches in the quotient images. The templates represented lungs of normal size and shape without any pathological changes. Images that could not be properly aligned to the templates were detected and excluded automatically. After exclusion of those V-P scintigrams not properly aligned to the templates, 478 V-P scintigrams remained in a training group of consecutive patients with suspected pulmonary embolism, and a further 87 V-P scintigrams formed a separate test group comprising patients who had undergone pulmonary angiography. The performance of the neural network, measured as the area under the receiver operating characteristic curve, was 0.87 (95% confidence limits 0.82-0.92) in the training group and 0.79 (0.69-0.88) in the test group. It is concluded that a completely automated method can be used for the interpretation of V-P scintigrams. The performance of this method is similar to others previously presented, whereby features were extracted manually.

  10. The relationship study between image features and detection probability based on psychology experiments

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Wang, Ji-yuan; Gao, Hong-sheng; Wang, Ji-jun; Su, Rong-hua; Mao, Wei

    2011-04-01

    Detection probability is an important index for representing and estimating target viability, and it provides a basis for target recognition and decision-making. However, obtaining detection probability in practice requires a great deal of time and manpower, and because interpreters differ in practical knowledge and experience, the data obtained often vary considerably. By studying the relationship between image features and perception quantity through psychological experiments, a probability model has been established as follows. Firstly, four image features that directly affect detection were extracted and quantified, and four feature similarity degrees between target and background were defined. Secondly, the relationship between each single image-feature similarity degree and perception quantity was established based on psychological principles, and psychological experiments on target interpretation were designed, involving about five hundred interpreters and two hundred images. In order to reduce the correlation between image features, a large number of artificially synthesized images were produced, including images differing only in brightness, only in chromaticity, only in texture, and only in shape. By analyzing and fitting a large amount of experimental data, the model quantities were determined. Finally, by applying statistical decision theory to the experimental results, the relationship between perception quantity and target detection probability was found. Verified by a great deal of target interpretation in practice, the model can provide the target detection probability quickly and objectively.

  11. Natural and Artificial Playing Fields: Characteristics and Safety Features.

    ERIC Educational Resources Information Center

    Schmidt, Roger C., Ed.; Hoerner, Earl F., Ed.; Milner, Edward M., Ed.; Morehouse, C. A., Ed.

    These papers are on the subjects of playing field standards, surface traction, testing and correlation to actual field experience, and state-of-the-art natural and artificial surfaces. The papers, presented at the Symposium on the Characteristics and Safety of Playing Surfaces (Artificial and Natural) for Field Sports in 1998, cover the…

  12. Robust spike classification based on frequency domain neural waveform features.

    PubMed

    Yang, Chenhui; Yuan, Yuan; Si, Jennie

    2013-12-01

    We introduce a new spike classification algorithm based on frequency domain features of the spike snippets. The goal for the algorithm is to provide high classification accuracy, low false misclassification, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. In this paper, we propose a spike classification algorithm based on frequency domain features (CFDF). It makes use of frequency domain contents of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. After that, spike classification can be easily performed using clustering algorithms such as the k-Means. In conjunction with our previously developed multiscale correlation of wavelet coefficient (MCWC) spike detection algorithm, we show that the MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms with artificial and real neural data. The detection and classification of neural action potentials or neural spikes is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing neural spikes, a robust classification algorithm is applied for the analysis of the snippets to (1) extract similar waveforms into one class for them to be considered coming from one unit, and to (2) remove noise snippets if they do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another. Therefore, a robust, high performance classification system like the CFDF is necessary. In addition, the proposed algorithm does not require any assumptions on statistical properties of the noise and proves to be robust under noise contamination.
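
    An illustrative sketch, under stated assumptions, of the core idea of frequency domain spike classification: each snippet is represented by the magnitude of its FFT and the snippets are then grouped by clustering. The snippet length, sampling rate and synthetic spikes are placeholders; the published CFDF/SOM pipeline is not reproduced here.

        # Illustrative sketch: FFT-magnitude features per spike snippet, clustered with k-means.
        import numpy as np
        from sklearn.cluster import KMeans

        fs = 30000                      # sampling rate in Hz (assumed)
        t = np.arange(64) / fs          # ~2 ms snippets of 64 samples
        rng = np.random.default_rng(1)

        def synth_spike(width):
            # Gaussian-shaped placeholder spike plus noise.
            return np.exp(-((t - t.mean()) ** 2) / (2 * width ** 2)) + 0.05 * rng.normal(size=t.size)

        snippets = np.array([synth_spike(w) for w in rng.choice([1e-4, 3e-4], size=200)])

        # Frequency-domain features: magnitude spectrum of each snippet.
        features = np.abs(np.fft.rfft(snippets, axis=1))

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
        print(np.bincount(labels))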

  13. Natural water purification and water management by artificial groundwater recharge

    PubMed Central

    Balke, Klaus-Dieter; Zhu, Yan

    2008-01-01

    Worldwide, several regions suffer from water scarcity and contamination. The infiltration and subsurface storage of rain and river water can reduce water stress. Artificial groundwater recharge, possibly combined with bank filtration, plant purification and/or the use of subsurface dams and artificial aquifers, is especially advantageous in areas where layers of gravel and sand exist below the earth’s surface. Artificial infiltration of surface water into the uppermost aquifer has qualitative and quantitative advantages. The contamination of infiltrated river water will be reduced by natural attenuation. Clay minerals, iron hydroxide and humic matter as well as microorganisms located in the subsurface have high decontamination capacities. By this, a final water treatment, if necessary, becomes much easier and cheaper. The quantitative effect concerns the seasonally changing river discharge that influences the possibility of water extraction for drinking water purposes. Such changes can be equalised by seasonally adapted infiltration/extraction of water in/out of the aquifer according to the river discharge and the water need. This method enables a continuous water supply over the whole year. Generally, artificially recharged groundwater is better protected against pollution than surface water, and the delimitation of water protection zones makes it even safer. PMID:18357624

  14. Natural water purification and water management by artificial groundwater recharge.

    PubMed

    Balke, Klaus-Dieter; Zhu, Yan

    2008-03-01

    Worldwide, several regions suffer from water scarcity and contamination. The infiltration and subsurface storage of rain and river water can reduce water stress. Artificial groundwater recharge, possibly combined with bank filtration, plant purification and/or the use of subsurface dams and artificial aquifers, is especially advantageous in areas where layers of gravel and sand exist below the earth's surface. Artificial infiltration of surface water into the uppermost aquifer has qualitative and quantitative advantages. The contamination of infiltrated river water will be reduced by natural attenuation. Clay minerals, iron hydroxide and humic matter as well as microorganisms located in the subsurface have high decontamination capacities. By this, a final water treatment, if necessary, becomes much easier and cheaper. The quantitative effect concerns the seasonally changing river discharge that influences the possibility of water extraction for drinking water purposes. Such changes can be equalised by seasonally adapted infiltration/extraction of water in/out of the aquifer according to the river discharge and the water need. This method enables a continuous water supply over the whole year. Generally, artificially recharged groundwater is better protected against pollution than surface water, and the delimitation of water protection zones makes it even safer.

  15. Beyond HRV: attractor reconstruction using the entire cardiovascular waveform data for novel feature extraction.

    PubMed

    Aston, Philip J; Christie, Mark I; Huang, Ying H; Nandi, Manasi

    2018-03-01

    Advances in monitoring technology allow blood pressure waveforms to be collected at sampling frequencies of 250-1000 Hz for long time periods. However, much of the raw data are under-analysed. Heart rate variability (HRV) methods, in which beat-to-beat interval lengths are extracted and analysed, have been extensively studied. However, this approach discards the majority of the raw data. Our aim is to detect changes in the shape of the waveform in long streams of blood pressure data. Our approach involves extracting key features from large complex data sets by generating a reconstructed attractor in a three-dimensional phase space using delay coordinates from a window of the entire raw waveform data. The naturally occurring baseline variation is removed by projecting the attractor onto a plane from which new quantitative measures are obtained. The time window is moved through the data to give a collection of signals which relate to various aspects of the waveform shape. This approach enables visualisation and quantification of changes in the waveform shape and has been applied to blood pressure data collected from conscious unrestrained mice and to human blood pressure data. The interpretation of the attractor measures is aided by the analysis of simple artificial waveforms. We have developed and analysed a new method for analysing blood pressure data that uses all of the waveform data and hence can detect changes in the waveform shape that HRV methods cannot, which is confirmed with an example, and hence our method goes 'beyond HRV'.
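
    A minimal sketch of the reconstruction step described above, assuming an arbitrary delay and a synthetic drifting waveform: three delay coordinates form the attractor, and the component along the (1, 1, 1) direction, which carries a common baseline shift, is projected out.

        # Delay-coordinate (Takens) embedding in 3D, then projection onto the plane
        # orthogonal to (1,1,1) to remove baseline variation. Signal and delay are placeholders.
        import numpy as np

        fs = 500                                    # sampling frequency (assumed)
        t = np.arange(0, 10, 1 / fs)
        x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 10 * t) + 0.1 * t  # drifting waveform

        tau = int(0.05 * fs)                        # delay in samples (assumed)
        N = len(x) - 2 * tau
        attractor = np.column_stack([x[:N], x[tau:N + tau], x[2 * tau:N + 2 * tau]])

        # A baseline shift moves the attractor along (1,1,1), so projecting that
        # component out leaves measures that depend only on the waveform shape.
        n = np.ones(3) / np.sqrt(3)
        projected = attractor - np.outer(attractor @ n, n)
        print(projected.shape)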

  16. Beyond HRV: attractor reconstruction using the entire cardiovascular waveform data for novel feature extraction

    PubMed Central

    Aston, Philip J; Christie, Mark I; Huang, Ying H; Nandi, Manasi

    2018-01-01

    Abstract Advances in monitoring technology allow blood pressure waveforms to be collected at sampling frequencies of 250–1000 Hz for long time periods. However, much of the raw data are under-analysed. Heart rate variability (HRV) methods, in which beat-to-beat interval lengths are extracted and analysed, have been extensively studied. However, this approach discards the majority of the raw data. Objective: Our aim is to detect changes in the shape of the waveform in long streams of blood pressure data. Approach: Our approach involves extracting key features from large complex data sets by generating a reconstructed attractor in a three-dimensional phase space using delay coordinates from a window of the entire raw waveform data. The naturally occurring baseline variation is removed by projecting the attractor onto a plane from which new quantitative measures are obtained. The time window is moved through the data to give a collection of signals which relate to various aspects of the waveform shape. Main results: This approach enables visualisation and quantification of changes in the waveform shape and has been applied to blood pressure data collected from conscious unrestrained mice and to human blood pressure data. The interpretation of the attractor measures is aided by the analysis of simple artificial waveforms. Significance: We have developed and analysed a new method for analysing blood pressure data that uses all of the waveform data and hence can detect changes in the waveform shape that HRV methods cannot, which is confirmed with an example, and hence our method goes ‘beyond HRV’. PMID:29350622

  17. Haptic exploration of fingertip-sized geometric features using a multimodal tactile sensor

    NASA Astrophysics Data System (ADS)

    Ponce Wong, Ruben D.; Hellman, Randall B.; Santos, Veronica J.

    2014-06-01

    Haptic perception remains a grand challenge for artificial hands. Dexterous manipulators could be enhanced by "haptic intelligence" that enables identification of objects and their features via touch alone. Haptic perception of local shape would be useful when vision is obstructed or when proprioceptive feedback is inadequate, as observed in this study. In this work, a robot hand outfitted with a deformable, bladder-type, multimodal tactile sensor was used to replay four human-inspired haptic "exploratory procedures" on fingertip-sized geometric features. The geometric features varied by type (bump, pit), curvature (planar, conical, spherical), and footprint dimension (1.25 - 20 mm). Tactile signals generated by active fingertip motions were used to extract key parameters for use as inputs to supervised learning models. A support vector classifier estimated order of curvature while support vector regression models estimated footprint dimension once curvature had been estimated. A distal-proximal stroke (along the long axis of the finger) enabled estimation of order of curvature with an accuracy of 97%. Best-performing, curvature-specific, support vector regression models yielded R2 values of at least 0.95. While a radial-ulnar stroke (along the short axis of the finger) was most helpful for estimating feature type and size for planar features, a rolling motion was most helpful for conical and spherical features. The ability to haptically perceive local shape could be used to advance robot autonomy and provide haptic feedback to human teleoperators of devices ranging from bomb defusal robots to neuroprostheses.
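
    A hedged sketch of the two-stage learning scheme outlined above: a support vector classifier estimates the order of curvature, then a curvature-specific support vector regressor estimates the footprint dimension. The feature vectors, labels and kernel settings are assumptions for illustration only.

        # Two-stage scheme: SVC for curvature class, then per-class SVR for feature size.
        import numpy as np
        from sklearn.svm import SVC, SVR

        rng = np.random.default_rng(2)
        X = rng.normal(size=(300, 6))                 # tactile feature vectors (placeholder)
        curvature = rng.integers(0, 3, size=300)      # 0 = planar, 1 = conical, 2 = spherical
        size_mm = rng.uniform(1.25, 20.0, size=300)   # footprint dimension in mm

        clf = SVC(kernel="rbf").fit(X, curvature)

        # One regressor per curvature class, applied after curvature is estimated.
        regressors = {c: SVR(kernel="rbf").fit(X[curvature == c], size_mm[curvature == c])
                      for c in np.unique(curvature)}

        x_new = rng.normal(size=(1, 6))
        c_hat = clf.predict(x_new)[0]
        print(c_hat, regressors[c_hat].predict(x_new)[0])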

  18. Effect of humidity during artificial extraction on the subsequent vigor of pine pollen

    Treesearch

    Russell A. Ryker

    1963-01-01

    Controlled pollination of pines generally has been disappointing because cones contain too few seeds. We need to develop better techniques for collecting, extracting, and storing pollen, as well as better bagging procedures. A logical first step is to learn more about collecting and extracting pollen. In a recent study I found that extracting pollen of jack pine (Pinus...

  19. Research on the feature extraction and pattern recognition of the distributed optical fiber sensing signal

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan

    2014-09-01

    In this paper, feature extraction and pattern recognition of the distributed optical fiber sensing signal have been studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction and wavelet packet Shannon entropy feature extraction to obtain characteristic vectors of sensing signals (such as speech, wind, thunder and rain signals), and then perform pattern recognition via an RBF neural network. The performances of these three feature extraction methods are compared according to the results. The MFCC characteristic vector is chosen to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six layers by the Daubechies wavelet packet transform, from which 64 frequency constituents are extracted as the characteristic vector. In the pattern recognition stage, the value of the diffusion coefficient is introduced to increase the recognition accuracy, while the test samples are kept the same. Recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the wavelet packet energy feature extraction method performs worst.
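
    An illustrative sketch of the wavelet packet features mentioned above, assuming the 'db4' wavelet and the six decomposition levels stated in the abstract; the signal is a synthetic placeholder and the MFCC and RBF-network stages are not shown.

        # Wavelet packet energy and Shannon entropy features with PyWavelets.
        import numpy as np
        import pywt

        rng = np.random.default_rng(3)
        signal = rng.normal(size=4096)                 # placeholder for a fibre sensing frame

        wp = pywt.WaveletPacket(data=signal, wavelet="db4", mode="symmetric", maxlevel=6)
        nodes = wp.get_level(6, order="freq")          # 64 terminal sub-bands

        energies = np.array([np.sum(node.data ** 2) for node in nodes])

        def shannon_entropy(coeffs):
            # Entropy of the normalised coefficient energy distribution in one sub-band.
            p = coeffs ** 2 / np.sum(coeffs ** 2)
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        entropies = np.array([shannon_entropy(node.data) for node in nodes])
        print(energies.shape, entropies.shape)         # each is a 64-dimensional feature vector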

  20. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  1. Neural network explanation using inversion.

    PubMed

    Saad, Emad W; Wunsch, Donald C

    2007-01-01

    An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method that extracts hyperplane rules from neural networks with continuous or binary attributes. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
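
    In the spirit of the inversion idea described above (though not the HYPINV algorithm itself), the following hedged sketch freezes the weights of a small trained network and runs gradient descent on the input until the network produces a desired output. The network, target and hyperparameters are placeholders.

        # Network inversion by gradient descent on the input, with frozen weights.
        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        net = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1), nn.Sigmoid())
        for p in net.parameters():
            p.requires_grad_(False)                   # freeze the "trained" weights

        target = torch.tensor([[0.9]])                # desired network output
        x = torch.zeros(1, 4, requires_grad=True)     # input to be inverted
        opt = torch.optim.Adam([x], lr=0.05)

        for _ in range(500):
            opt.zero_grad()
            loss = (net(x) - target).pow(2).mean()    # distance to the desired output
            loss.backward()
            opt.step()

        print(x.detach().numpy(), net(x).item())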

  2. Spectroscopic Diagnosis of Arsenic Contamination in Agricultural Soils

    PubMed Central

    Shi, Tiezhu; Liu, Huizeng; Chen, Yiyun; Fei, Teng; Wang, Junjie; Wu, Guofeng

    2017-01-01

    This study investigated the abilities of pre-processing, feature selection and machine-learning methods for the spectroscopic diagnosis of soil arsenic contamination. The spectral data were pre-processed by using Savitzky-Golay smoothing, first and second derivatives, multiplicative scatter correction, standard normal variate, and mean centering. Principal component analysis (PCA) and the RELIEF algorithm were used to extract spectral features. Machine-learning methods, including random forests (RF), artificial neural networks (ANN), and radial basis function- and linear function-based support vector machines (RBF- and LF-SVM), were employed for establishing diagnosis models. The model accuracies were evaluated and compared by using overall accuracies (OAs). The statistical significance of the difference between models was evaluated by using McNemar’s test (Z value). The results showed that the OAs varied with the different combinations of pre-processing, feature selection, and classification methods. Feature selection methods could improve the modeling efficiencies and diagnosis accuracies, and RELIEF often outperformed PCA. The optimal models established by RF (OA = 86%), ANN (OA = 89%), RBF- (OA = 89%) and LF-SVM (OA = 87%) had no statistical difference in diagnosis accuracies (Z < 1.96, p < 0.05). These results indicated that it was feasible to diagnose soil arsenic contamination using reflectance spectroscopy. The appropriate combination of multivariate methods was important to improve diagnosis accuracies. PMID:28471412
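
    A hedged sketch of one pre-processing / feature-extraction / classification combination of the kind compared in the study: Savitzky-Golay smoothing, PCA feature extraction and a random forest classifier. The spectra and labels are synthetic; window length, component count and tree count are assumptions.

        # One pipeline variant: Savitzky-Golay smoothing -> PCA -> random forest.
        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.decomposition import PCA
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        spectra = rng.normal(size=(120, 400))          # reflectance spectra (placeholder)
        labels = rng.integers(0, 2, size=120)          # 1 = arsenic-contaminated, 0 = clean

        smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
        scores = PCA(n_components=10).fit_transform(smoothed)

        acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                              scores, labels, cv=5).mean()
        print(f"Overall accuracy (synthetic data): {acc:.2f}")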

  3. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It's difficult to build precise mathematical models for complex engineering systems because of the complexity of the structure and dynamics characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way without building the analytical mathematical model of a diagnostic object, so it's a practical approach to solve diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on Hidden Markov Model (HMM). This classifier consists of dynamic time warping (DTW) algorithm, self-organizing feature mapping (SOFM) network and Hidden Markov Model. First, after dynamic observation vector in measuring space is processed by DTW, the error vector including the fault feature of being tested system is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by fault patterns classifying with the Hidden Markov Model classifier. The importing of dynamic time warping solves the problem of feature extracting from dynamic process vectors of complex system such as aeroengine, and makes it come true to diagnose complex system by utilizing dynamic process information. Simulating experiments show that the diagnosis model is easy to extend, and the fault pattern classifier is efficient and is convenient to the detecting and diagnosing of new faults.
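
    A minimal dynamic time warping (DTW) sketch illustrating the alignment step that precedes the SOFM and HMM stages described above; the two sequences are synthetic placeholders.

        # Classic O(n*m) DTW with absolute-difference local cost.
        import numpy as np

        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        reference = np.sin(np.linspace(0, 2 * np.pi, 50))       # template observation vector
        measured = np.sin(np.linspace(0, 2 * np.pi, 65)) + 0.1  # time-warped, offset test vector
        print(dtw_distance(reference, measured))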

  4. Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2017-05-01

    Thanks to some recent research works, the dynamic Bayesian wavelet transform is proposed in this short communication as a new methodology for the extraction of repetitive transients, to reveal fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate posterior parameters of the wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, continuous wavelet transform, discrete wavelet transform, wavelet packets, multiwavelets, empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc. Second, artificial observations can be constructed based on one of many metrics, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc., which are able to quantify repetitive transients. Finally, given artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations by using dynamic Bayesian inference. More importantly, the proposed new methodology can be extended to establish the optimal parameters required by many other signal processing methods for extraction of repetitive transients.

  5. Artificial intelligence in sports on the example of weight training.

    PubMed

    Novatchkov, Hristo; Baca, Arnold

    2013-01-01

    The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports on the example of weight training. The research focused in particular on the implementation of pattern recognition methods for the evaluation of performed exercises on training machines. The data acquisition was carried out using way and cable force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was consequently possible to deduce other significant characteristics like time periods or movement velocities. These parameters were applied for the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of the exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for the investigation of the quality of the execution, the assistance of athletes but also coaches, the training optimization and for prevention purposes. For the current study, the data was based on measurements from 15 rather inexperienced participants, performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, on which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video recorded executions. The so far obtained modeling results showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in assessing performances on weight training equipment automatically and providing sportsmen with prompt advice. Key points: Artificial intelligence is a promising field for sport-related analysis. Implementations integrating pattern recognition techniques enable the automatic evaluation of data measurements. Artificial neural networks applied for the analysis of weight training data show good performance and high classification rates.

  6. Artificial Intelligence in Sports on the Example of Weight Training

    PubMed Central

    Novatchkov, Hristo; Baca, Arnold

    2013-01-01

    The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports on the example of weight training. The research focused in particular on the implementation of pattern recognition methods for the evaluation of performed exercises on training machines. The data acquisition was carried out using way and cable force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was consequently possible to deduce other significant characteristics like time periods or movement velocities. These parameters were applied for the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of the exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for the investigation of the quality of the execution, the assistance of athletes but also coaches, the training optimization and for prevention purposes. For the current study, the data was based on measurements from 15 rather inexperienced participants, performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, on which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video recorded executions. The so far obtained modeling results showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in assessing performances on weight training equipment automatically and providing sportsmen with prompt advice. Key points Artificial intelligence is a promising field for sport-related analysis. Implementations integrating pattern recognition techniques enable the automatic evaluation of data measurements. Artificial neural networks applied for the analysis of weight training data show good performance and high classification rates. PMID:24149722

  7. The Temporal Dynamics of Regularity Extraction in Non-Human Primates

    ERIC Educational Resources Information Center

    Minier, Laure; Fagot, Joël; Rey, Arnaud

    2016-01-01

    Extracting the regularities of our environment is one of our core cognitive abilities. To study the fine-grained dynamics of the extraction of embedded regularities, a method combining the advantages of the artificial language paradigm (Saffran, Aslin, & Newport, [Saffran, J. R., 1996]) and the serial response time task (Nissen & Bullemer,…

  8. Image Quality Assessment of High-Resolution Satellite Images with Mtf-Based Fuzzy Comprehensive Evaluation Method

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Luo, Z.; Zhang, Y.; Guo, F.; He, L.

    2018-04-01

    A Modulation Transfer Function (MTF)-based fuzzy comprehensive evaluation method is proposed in this paper for evaluating high-resolution satellite image quality. To establish the factor set, two MTF features and seven radiometric features were extracted from the knife-edge region of each image patch: Nyquist frequency, MTF0.5, entropy, peak signal-to-noise ratio (PSNR), average difference, edge intensity, average gradient, contrast and ground sample distance (GSD). After analyzing the statistical distributions of these features, a fuzzy evaluation threshold table and fuzzy evaluation membership functions were established. Experiments on comprehensive quality assessment of different natural and artificial objects were carried out with GF2 image patches. The results showed that the calibration field image obtained the highest quality score; the water image was closest in quality to the calibration field, and the building image was slightly poorer than the water image but much better than the farmland image. To test the influence of different features on quality evaluation, experiments with different weights were performed on GF2 and SPOT7 images. The results showed that different weights correspond to different evaluation outcomes: when the weights emphasize edge features and GSD, the image quality of GF2 is better than that of SPOT7, whereas when MTF and PSNR are the main factors, the image quality of SPOT7 is better than that of GF2.
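
    An illustrative sketch of the fuzzy comprehensive evaluation step: each extracted feature is mapped to membership degrees over quality grades and a weight vector combines them into a single evaluation vector. The membership values, grades and weights below are invented for illustration and are not the paper's thresholds.

        # Fuzzy comprehensive evaluation: weighted combination of per-feature memberships.
        import numpy as np

        grades = ["excellent", "good", "fair", "poor"]

        # Membership matrix R: one row per feature (e.g. Nyquist MTF, MTF0.5, entropy, PSNR),
        # one column per quality grade. Values are placeholders.
        R = np.array([
            [0.6, 0.3, 0.1, 0.0],   # Nyquist MTF
            [0.5, 0.4, 0.1, 0.0],   # MTF0.5
            [0.2, 0.5, 0.2, 0.1],   # entropy
            [0.3, 0.4, 0.2, 0.1],   # PSNR
        ])

        w = np.array([0.35, 0.25, 0.2, 0.2])           # feature weights (sum to 1)

        B = w @ R                                      # comprehensive evaluation vector
        print(dict(zip(grades, np.round(B, 3))), "->", grades[int(np.argmax(B))])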

  9. Explore Efficient Local Features from RGB-D Data for One-Shot Learning Gesture Recognition.

    PubMed

    Wan, Jun; Guo, Guodong; Li, Stan Z

    2016-08-01

    Availability of handy RGB-D sensors has brought about a surge of gesture recognition research and applications. Among various approaches, one shot learning approach is advantageous because it requires minimum amount of data. Here, we provide a thorough review about one-shot learning gesture recognition from RGB-D data and propose a novel spatiotemporal feature extracted from RGB-D data, namely mixed features around sparse keypoints (MFSK). In the review, we analyze the challenges that we are facing, and point out some future research directions which may enlighten researchers in this field. The proposed MFSK feature is robust and invariant to scale, rotation and partial occlusions. To alleviate the insufficiency of one shot training samples, we augment the training samples by artificially synthesizing versions of various temporal scales, which is beneficial for coping with gestures performed at varying speed. We evaluate the proposed method on the Chalearn gesture dataset (CGD). The results show that our approach outperforms all currently published approaches on the challenging data of CGD, such as translated, scaled and occluded subsets. When applied to the RGB-D datasets that are not one-shot (e.g., the Cornell Activity Dataset-60 and MSR Daily Activity 3D dataset), the proposed feature also produces very promising results under leave-one-out cross validation or one-shot learning.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmon, S; Jeraj, R; Galavis, P

    Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs 3D (Wilcoxon, α < 0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation between feature values extracted in 2D and in 3D was poor (R < 0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with |R| > 0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ = 6%) than in 3D (σ < 1%) extraction. Conclusion: Sensitivity and correlation of various texture features were shown to significantly differ between 2D and 3D extraction. Additionally, inter-feature correlations were more sensitive to reconstruction variation using single-plane extraction. This work highlights a need for standardized feature extraction/selection techniques in radiomics.
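
    A toy sketch contrasting single-plane (2D, per axial slice) and multi-plane (3D, whole volume) extraction of simple first-order features; the volume is synthetic, and real radiomics pipelines use dedicated texture packages rather than these hand-rolled statistics.

        # First-order features computed slice-wise (2D) versus over the whole volume (3D).
        import numpy as np

        rng = np.random.default_rng(5)
        volume = rng.normal(size=(16, 32, 32))         # (slices, rows, cols) tumour region (placeholder)

        def first_order(values):
            values = values.ravel()
            return {"mean": values.mean(), "std": values.std(),
                    "skew": ((values - values.mean()) ** 3).mean() / values.std() ** 3}

        features_3d = first_order(volume)                               # one value per feature
        features_2d = [first_order(sl) for sl in volume]                # one value per axial slice
        features_2d_avg = {k: np.mean([f[k] for f in features_2d]) for k in features_3d}

        print(features_3d)
        print(features_2d_avg)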

  11. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isa, Nor Ashidi Mat

    The development of medical diagnostic systems has been one of the main research fields over the years. The goal of a medical diagnostic system is to put in place a nosological system that could ease the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essential and requires broad knowledge in order to improve on conventional diagnostic systems. Several approaches to developing medical diagnostic systems have been designed and tested since the early 1960s. Attempts to improve their performance have been made which utilize the fields of artificial intelligence, statistical analysis, mathematical modelling and engineering theory. With the availability of the microcomputer and software development as well as the promising aforementioned fields, medical diagnostic prototypes could be developed. In general, a medical diagnostic system consists of several stages, namely the 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classification stages. The data acquisition stage plays an important role in converting the inputs measured from real-world physical conditions into the digital numeric values that can be manipulated by the computer system. Common medical inputs include medical microscopic images, radiographic images, magnetic resonance images (MRI) as well as medical signals such as the electrocardiogram (ECG) and electroencephalogram (EEG). Normally, scientists or doctors have to deal with a myriad of data, much of it redundant, to be processed. In order to reduce the complexity of the diagnosis process, only the significant features of the raw data, such as the peak value of the ECG signal or the size of a lesion in a mammogram image, will be extracted and considered in the subsequent stages. Mathematical models and statistical analyses will be performed to select the most significant features to be classified. Statistical analyses such as principal component analysis and discriminant analysis, as well as mathematical clustering techniques, have been widely used in developing medical diagnostic systems. The selected features will be classified using mathematical models that embed engineering theory such as artificial intelligence, support vector machines, neural networks and fuzzy-neuro systems. These classifiers provide the diagnostic results without human intervention. Among many published research efforts, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first system (NeuralPap) is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cancerous cervical cells. Meanwhile, the Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on the images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system was developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.

  12. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    NASA Astrophysics Data System (ADS)

    Isa, Nor Ashidi Mat

    2015-05-01

    The development of medical diagnostic systems has been one of the main research fields over the years. The goal of a medical diagnostic system is to put in place a nosological system that could ease the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essential and requires broad knowledge in order to improve on conventional diagnostic systems. Several approaches to developing medical diagnostic systems have been designed and tested since the early 1960s. Attempts to improve their performance have been made which utilize the fields of artificial intelligence, statistical analysis, mathematical modelling and engineering theory. With the availability of the microcomputer and software development as well as the promising aforementioned fields, medical diagnostic prototypes could be developed. In general, a medical diagnostic system consists of several stages, namely the 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classification stages. The data acquisition stage plays an important role in converting the inputs measured from real-world physical conditions into the digital numeric values that can be manipulated by the computer system. Common medical inputs include medical microscopic images, radiographic images, magnetic resonance images (MRI) as well as medical signals such as the electrocardiogram (ECG) and electroencephalogram (EEG). Normally, scientists or doctors have to deal with a myriad of data, much of it redundant, to be processed. In order to reduce the complexity of the diagnosis process, only the significant features of the raw data, such as the peak value of the ECG signal or the size of a lesion in a mammogram image, will be extracted and considered in the subsequent stages. Mathematical models and statistical analyses will be performed to select the most significant features to be classified. Statistical analyses such as principal component analysis and discriminant analysis, as well as mathematical clustering techniques, have been widely used in developing medical diagnostic systems. The selected features will be classified using mathematical models that embed engineering theory such as artificial intelligence, support vector machines, neural networks and fuzzy-neuro systems. These classifiers provide the diagnostic results without human intervention. Among many published research efforts, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first system (NeuralPap) is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cancerous cervical cells. Meanwhile, the Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on the images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system was developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.

  13. Target recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    One of the important parts of object target recognition is feature extraction, which can be divided into manual feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its fully connected structure carries a high risk of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained with a layer-by-layer convolutional neural network (CNN), which extracts features from lower layers to higher layers. The resulting features are more discriminative, which benefits object target recognition.
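
    A minimal sketch, assuming placeholder input size, channel counts and class count, of the kind of layer-by-layer convolutional feature extractor described above, written with PyTorch.

        # Tiny CNN: two conv/pool stages learn hierarchical features, then a linear classifier.
        import torch
        import torch.nn as nn

        class SmallCNN(nn.Module):
            def __init__(self, num_classes=10):
                super().__init__()
                self.features = nn.Sequential(            # lower layers capture local edges/textures
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 8 * 8, num_classes)

            def forward(self, x):
                x = self.features(x)                       # hierarchical, automatically learned features
                return self.classifier(x.flatten(1))

        logits = SmallCNN()(torch.randn(4, 1, 32, 32))     # batch of four 32x32 grey-scale images
        print(logits.shape)                                # -> torch.Size([4, 10])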

  14. The Growing Threat of Light Pollution to Ground-Based Observatories

    NASA Astrophysics Data System (ADS)

    Green, Richard F.; Luginbuhl, Christian; Wainscoat, Richard J.; Duriscoe, Dan

    2018-01-01

    With few exceptions, growing sky glow from artificial sources negatively impacts the sky background recorded at major observatories around the world. We report techniques for measuring night sky brightness and extracting the contribution of artificial sky glow at observatories and other protected sites. The increase in artificial ambient light and its changing spectrum with LED replacements is likely to be significant. A compendium of worldwide regulatory approaches to astronomical site protection gives insight on multiple effective strategies.

  15. Classification of intelligence quotient via brainwave sub-band power ratio features and artificial neural network.

    PubMed

    Jahidin, A H; Megat Ali, M S A; Taib, M N; Tahir, N Md; Yassin, I M; Lias, S

    2014-04-01

    This paper elaborates on the novel intelligence assessment method using the brainwave sub-band power ratio features. The study focuses only on the left hemisphere brainwave in its relaxed state. Distinct intelligence quotient groups have been established earlier from the score of the Raven Progressive Matrices. Sub-band power ratios are calculated from energy spectral density of theta, alpha and beta frequency bands. Synthetic data have been generated to increase dataset from 50 to 120. The features are used as input to the artificial neural network. Subsequently, the brain behaviour model has been developed using an artificial neural network that is trained with optimized learning rate, momentum constant and hidden nodes. Findings indicate that the distinct intelligence quotient groups can be classified from the brainwave sub-band power ratios with 100% training and 88.89% testing accuracies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
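
    A hedged sketch of sub-band power features of the kind used above: the power spectral density of a single EEG channel is estimated with Welch's method, integrated over the theta, alpha and beta bands, and expressed as ratios. The sampling rate, band limits and ratio definitions are assumptions, not the study's exact protocol.

        # Sub-band power ratios from a single EEG channel via Welch's PSD estimate.
        import numpy as np
        from scipy.signal import welch
        from scipy.integrate import trapezoid

        fs = 256                                            # sampling rate in Hz (assumed)
        rng = np.random.default_rng(6)
        eeg = rng.normal(size=fs * 60)                      # one minute of resting EEG (placeholder)

        f, psd = welch(eeg, fs=fs, nperseg=fs * 2)

        def band_power(lo, hi):
            mask = (f >= lo) & (f < hi)
            return trapezoid(psd[mask], f[mask])

        theta, alpha, beta = band_power(4, 8), band_power(8, 13), band_power(13, 30)
        ratios = {"theta/alpha": theta / alpha, "theta/beta": theta / beta, "alpha/beta": alpha / beta}
        print(ratios)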

  16. The study of integration about measurable image and 4D production

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun

    2008-12-01

    In this paper, we create geospatial data for three-dimensional (3D) modeling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographical background, a three-dimensional landscape model is built from the DEM and DOM derived by digital photogrammetry, which uses aerial image data to produce the "4D" products (DOM: Digital Orthophoto Map, DEM: Digital Elevation Model, DLG: Digital Line Graphic and DRG: Digital Raster Graphic). For the buildings and other artificial features that users are interested in, three-dimensional reconstruction of the real features is achieved by digital close-range photogrammetry through the following steps: data collection with non-metric cameras, camera calibration, feature extraction, image matching, and other steps. Finally, we combine the three-dimensional background with the locally measured real images of these large geographic data and realise the integration of measurable real imagery and the 4D products. The article discusses the whole workflow and technology, achieving three-dimensional reconstruction and the integration of the large-scale three-dimensional landscape with the metric building models.

  17. From automata to animate beings: the scope and limits of attributing socialness to artificial agents.

    PubMed

    Hortensius, Ruud; Cross, Emily S

    2018-05-11

    Understanding the mechanisms and consequences of attributing socialness to artificial agents has important implications for how we can use technology to lead more productive and fulfilling lives. Here, we integrate recent findings on the factors that shape behavioral and brain mechanisms that support social interactions between humans and artificial agents. We review how visual features of an agent, as well as knowledge factors within the human observer, shape attributions across dimensions of socialness. We explore how anthropomorphism and dehumanization further influence how we perceive and interact with artificial agents. Based on these findings, we argue that the cognitive reconstruction within the human observer is likely to be far more crucial in shaping our interactions with artificial agents than previously thought, while the artificial agent's visual features are possibly of lesser importance. We combine these findings to provide an integrative theoretical account based on the "like me" hypothesis, and discuss the key role played by the Theory-of-Mind network, especially the temporal parietal junction, in the shift from mechanistic to social attributions. We conclude by highlighting outstanding questions on the impact of long-term interactions with artificial agents on the behavioral and brain mechanisms of attributing socialness to these agents. © 2018 New York Academy of Sciences.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    AllamehZadeh, Mostafa, E-mail: dibaparima@yahoo.com

    A Quadratic Neural Network (QNN) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficients determined by Artificial Neural Networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions using filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and attenuation/site effects and obtain a compact representation of the seismic records. Second, we use the QNN system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then applied to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori known explosion sites (learning) for automatically recognizing subsequent explosions from the same site. The results have shown that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The tested events include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0-6.5) recorded by the Iranian National Seismic Network (INSN). Correct decisions of 100% were obtained between site explosions and some of the non-site events. The above approach to event discrimination is very flexible, as we can combine several 3C stations.

  19. Antimicrobial and Antiradical Activity of Extracts Obtained from Leaves of Five Species of the Genus Bergenia: Identification of Antimicrobial Compounds.

    PubMed

    Żbikowska, Beata; Franiczek, Roman; Sowa, Alina; Połukord, Grażyna; Krzyżanowska, Barbara; Sroka, Zbigniew

    2017-09-01

    An important focus of modern medicine is the search for new substances and strategies to combat infectious diseases, which present an increasing threat due to the growth of bacterial resistance to antibiotics. Another problem concerns free radicals, which in excess can cause several serious diseases. An alternative to chemical synthesis of antimicrobial and antiradical compounds is to find active substances in plant raw materials. We prepared extracts from leaves of five species of the genus Bergenia: B. purpurascens, B. cordifolia, B. ligulata, B. crassifolia, and B. ciliata. Antimicrobial and antiradical features of extracts and raw materials were assessed, and the quantities of phenolic compounds were determined. We also evaluated, using high-performance liquid chromatography, the amounts of arbutin and hydroquinone, compounds related to antimicrobial activity of these raw materials. The strongest antiradical properties were shown by leaves of B. crassifolia and B. cordifolia, the lowest by leaves of B. ciliata. The antiradical activity of extracts showed a strong positive correlation with the amount of phenols. All raw materials have significant antimicrobial properties. Among them, the ethyl acetate extracts were the most active. Antimicrobial activity very weakly correlated with the amount of arbutin, but correlated very strongly with the contents of both hydroquinone and phenolic compounds. Additional experiments using artificially prepared mixtures of phenolic compounds and hydroquinone allowed us to conclude that the most active antimicrobial substance is hydroquinone.

  20. Vertical Feature Mask Feature Classification Flag Extraction

    Atmospheric Science Data Center

    2013-03-28

    This routine demonstrates extraction of the ... in a CALIPSO Lidar Level 2 Vertical Feature Mask feature classification flag value. It is written in Interactive Data Language (IDL) ...
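
    A short Python sketch (the original routine is in IDL) of extracting a bit field from a packed 16-bit feature classification flag. The bit layout used here, with the feature type in the three least-significant bits, is an assumption for illustration; the authoritative layout is defined in the CALIPSO data product documentation.

        # Pull a bit field out of packed 16-bit classification flags (assumed layout).
        import numpy as np

        flags = np.array([26124, 27340, 1], dtype=np.uint16)   # placeholder flag values

        feature_type = flags & 0b111                            # lowest three bits (assumed): feature type
        print(feature_type)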

  1. Artificial nose, NIR and UV-visible spectroscopy for the characterisation of the PDO Chianti Classico olive oil.

    PubMed

    Forina, M; Oliveri, P; Bagnasco, L; Simonetti, R; Casolino, M C; Nizzi Grifi, F; Casale, M

    2015-11-01

    An authentication study of the Italian PDO (Protected Designation of Origin) olive oil Chianti Classico, based on artificial nose, near-infrared and UV-visible spectroscopy, with a set of samples representative of the whole Chianti Classico production area and a considerable number of samples from other Italian PDO regions was performed. The signals provided by the three analytical techniques were used both individually and jointly, after fusion of the respective variables, in order to build a model for the Chianti Classico PDO olive oil. Different signal pre-treatments were performed in order to investigate their importance and their effects in enhancing and extracting information from experimental data, correcting backgrounds or removing baseline variations. Stepwise-Linear Discriminant Analysis (STEP-LDA) was used as a feature selection technique and, afterward, Linear Discriminant Analysis (LDA) and the class-modelling technique Quadratic Discriminant Analysis-UNEQual dispersed classes (QDA-UNEQ) were applied to sub-sets of selected variables, in order to obtain efficient models capable of characterising the extra virgin olive oils produced in the Chianti Classico PDO area. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. [Study of automatic marine oil spills detection using imaging spectroscopy].

    PubMed

    Liu, De-Lian; Han, Liang; Zhang, Jian-Qi

    2013-11-01

    To reduce manual auxiliary work in the oil spill detection process, an automatic oil spill detection method based on an adaptive matched filter is presented. Firstly, the characteristics of the reflectance spectral signature of the C-H bond in oil spills are analyzed, and an oil spill spectral signature extraction model is designed using this C-H bond spectral feature; it is then used to obtain the reference spectral signature for the subsequent oil spill detection step. Secondly, the reflectance spectral signatures of sea water, clouds, and oil spill are compared, and the bands in which these signatures differ most are selected. Using these bands, the sea water pixels are segmented and the background parameters are then calculated. Finally, the classical adaptive matched filter from target detection is improved and introduced for oil spill detection. The proposed method is applied to a real airborne visible/infrared imaging spectrometer (AVIRIS) hyperspectral image captured during the Deepwater Horizon oil spill in the Gulf of Mexico. The results show that the proposed method is efficient, requires no manual auxiliary work, and can be used for automatic detection of marine oil spills.
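
    A hedged sketch of an adaptive matched filter detector of the general kind named above: the background mean and covariance are estimated from segmented sea-water pixels, and each pixel is scored against a reference oil-spill spectrum. All spectra below are synthetic placeholders, not AVIRIS data.

        # Adaptive matched filter (AMF) score per pixel against a reference target spectrum.
        import numpy as np

        rng = np.random.default_rng(7)
        bands = 50
        background = rng.normal(size=(5000, bands))            # spectra of segmented sea-water pixels
        target_sig = np.linspace(0.2, 1.0, bands)              # reference oil-spill spectral signature
        pixels = np.vstack([rng.normal(size=(10, bands)),      # water-like test pixels
                            target_sig + 0.1 * rng.normal(size=(10, bands))])  # oil-like test pixels

        mu = background.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(background, rowvar=False))

        s = target_sig - mu
        x = pixels - mu
        scores = (x @ cov_inv @ s) ** 2 / (s @ cov_inv @ s)    # AMF detection statistic per pixel
        print(scores.round(2))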

  3. Detection of epileptic seizure in EEG signals using linear least squares preprocessing.

    PubMed

    Roshan Zamir, Z

    2016-09-01

    An epileptic seizure is a transient event of abnormal excessive neuronal discharge in the brain. This unwanted event can be obstructed by detection of electrical changes in the brain that happen before the seizure takes place. The automatic detection of seizures is necessary since the visual screening of EEG recordings is a time consuming task and requires experts to improve the diagnosis. Much of the prior research in detection of seizures has been developed based on artificial neural networks, genetic programming, and wavelet transforms. Although the highest achieved accuracy for classification is 100%, there are drawbacks, such as the existence of unbalanced datasets and the lack of investigation of performance consistency. To address these, four linear least squares-based preprocessing models are proposed to extract key features of an EEG signal in order to detect seizures. The first two models are newly developed. The original signal (EEG) is approximated by a sinusoidal curve. Its amplitude is formed by a polynomial function and compared with the predeveloped spline function. Different statistical measures, namely classification accuracy, true positive and negative rates, false positive and negative rates and precision, are utilised to assess the performance of the proposed models. These metrics are derived from confusion matrices obtained from classifiers. Different classifiers are used over the original dataset and the set of extracted features. The proposed models significantly reduce the dimension of the classification problem and the computational time while the classification accuracy is improved in most cases. The first and third models are promising feature extraction methods with a classification accuracy of 100%. Logistic, LazyIB1, LazyIB5, and J48 are the best classifiers. Their true positive and negative rates are 1 while false positive and negative rates are 0 and the corresponding precision values are 1. Numerical results suggest that these models are robust and efficient for detecting epileptic seizures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
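
    An illustrative linear least squares sketch in the spirit of the pre-processing described above: an EEG segment is approximated by a sinusoid whose amplitude is a polynomial in time, and the fitted coefficients plus the residual norm serve as low-dimensional features. The frequency, polynomial degree and sampling rate are assumptions; the paper's exact models are not reproduced.

        # Linear least squares fit of a polynomial-amplitude sinusoid to an EEG segment.
        import numpy as np

        fs = 173.61                                   # sampling rate of a common EEG benchmark (assumed)
        t = np.arange(0, 1, 1 / fs)
        rng = np.random.default_rng(8)
        eeg = (1 + 0.5 * t) * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=t.size)

        omega, degree = 2 * np.pi * 10, 2             # assumed dominant frequency and amplitude degree
        A = np.column_stack([t ** k * f(omega * t) for k in range(degree + 1)
                             for f in (np.sin, np.cos)])

        coeffs, residual, *_ = np.linalg.lstsq(A, eeg, rcond=None)
        features = np.concatenate([coeffs, residual])  # fitted coefficients plus residual norm
        print(features)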

  4. Extracting features from protein sequences to improve deep extreme learning machine for protein fold recognition.

    PubMed

    Ibrahim, Wisam; Abadeh, Mohammad Saniee

    2017-05-21

    Protein fold recognition is an important problem in bioinformatics for predicting the three-dimensional structure of a protein. One of the most challenging tasks in the protein fold recognition problem is the extraction of efficient features from the amino-acid sequences to obtain better classifiers. In this paper, we have proposed six descriptors to extract features from protein sequences. These descriptors are applied in the first stage of a three-stage framework, PCA-DELM-LDA, to extract feature vectors from the amino-acid sequences. Principal Component Analysis (PCA) has been implemented to reduce the number of extracted features. The extracted feature vectors have been used with the original features to improve the performance of the Deep Extreme Learning Machine (DELM) in the second stage. Four new features have been extracted from the second stage and used in the third stage by Linear Discriminant Analysis (LDA) to classify the instances into 27 folds. The proposed framework is implemented on the independent and combined feature sets in SCOP datasets. The experimental results show that the feature vectors extracted in the first stage could improve the performance of DELM in extracting new useful features in the second stage. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. [Review and prospect of analysis on UHMWPE wear debris in artificial hip joints].

    PubMed

    Wu, Jingping; Yuan, Chengqing; Yan, Xinping

    2010-02-01

    This paper briefly reviews the latest progress in the analysis of technologies for artificial hip joints, and in research directed to the features of UHMWPE debris obtained under all kinds of experimental conditions, to the wear process and wear mechanism, and to the factors that influence the wear mechanism. Furthermore, the significance of the debris atlas is illustrated. Finally, future research directions are considered and envisaged. It is suggested that emphasis be laid on the relationship between UHMWPE debris features and the wear mechanism, and on the synergistic effects of the biochemical environment and the loading environment, so as to establish predictive wear models of artificial hip joints.

  6. Artificial intelligence for analyzing orthopedic trauma radiographs

    PubMed Central

    Olczak, Jakub; Fahlberg, Niklas; Maki, Atsuto; Razavian, Ali Sharif; Jilert, Anthony; Stark, André; Sköldenberg, Olof

    2017-01-01

    Background and purpose — Recent advances in artificial intelligence (deep learning) have shown remarkable performance in classifying non-medical images, and the technology is believed to be the next technological revolution. So far it has never been applied in an orthopedic setting, and in this study we sought to determine the feasibility of using deep learning for skeletal radiographs. Methods — We extracted 256,000 wrist, hand, and ankle radiographs from Danderyd’s Hospital and identified 4 classes: fracture, laterality, body part, and exam view. We then selected 5 openly available deep learning networks that were adapted for these images. The most accurate network was benchmarked against a gold standard for fractures. We furthermore compared the network’s performance with 2 senior orthopedic surgeons who reviewed images at the same resolution as the network. Results — All networks exhibited an accuracy of at least 90% when identifying laterality, body part, and exam view. The final accuracy for fractures was estimated at 83% for the best performing network. The network performed similarly to senior orthopedic surgeons when presented with images at the same resolution as the network. The 2 reviewer Cohen’s kappa under these conditions was 0.76. Interpretation — This study supports the use for orthopedic radiographs of artificial intelligence, which can perform at a human level. While current implementation lacks important features that surgeons require, e.g. risk of dislocation, classifications, measurements, and combining multiple exam views, these problems have technical solutions that are waiting to be implemented for orthopedics. PMID:28681679

  7. Artificial intelligence for analyzing orthopedic trauma radiographs.

    PubMed

    Olczak, Jakub; Fahlberg, Niklas; Maki, Atsuto; Razavian, Ali Sharif; Jilert, Anthony; Stark, André; Sköldenberg, Olof; Gordon, Max

    2017-12-01

    Background and purpose - Recent advances in artificial intelligence (deep learning) have shown remarkable performance in classifying non-medical images, and the technology is believed to be the next technological revolution. So far it has never been applied in an orthopedic setting, and in this study we sought to determine the feasibility of using deep learning for skeletal radiographs. Methods - We extracted 256,000 wrist, hand, and ankle radiographs from Danderyd's Hospital and identified 4 classes: fracture, laterality, body part, and exam view. We then selected 5 openly available deep learning networks that were adapted for these images. The most accurate network was benchmarked against a gold standard for fractures. We furthermore compared the network's performance with 2 senior orthopedic surgeons who reviewed images at the same resolution as the network. Results - All networks exhibited an accuracy of at least 90% when identifying laterality, body part, and exam view. The final accuracy for fractures was estimated at 83% for the best performing network. The network performed similarly to senior orthopedic surgeons when presented with images at the same resolution as the network. The 2 reviewer Cohen's kappa under these conditions was 0.76. Interpretation - This study supports the use for orthopedic radiographs of artificial intelligence, which can perform at a human level. While current implementation lacks important features that surgeons require, e.g. risk of dislocation, classifications, measurements, and combining multiple exam views, these problems have technical solutions that are waiting to be implemented for orthopedics.

  8. Temperature-controlled ionic liquid-based ultrasound-assisted microextraction for preconcentration of trace quantity of cadmium and nickel by using organic ligand in artificial saliva extract of smokeless tobacco products

    NASA Astrophysics Data System (ADS)

    Arain, Sadaf Sadia; Kazi, Tasneem Gul; Arain, Asma Jabeen; Afridi, Hassan Imran; Baig, Jameel Ahmed; Brahman, Kapil Dev; Naeemullah; Arain, Salma Aslam

    2015-03-01

    A new approach was developed for the preconcentration of cadmium (Cd) and nickel (Ni) in artificial saliva extract of dry snuff (brown and black) products using temperature-controlled ionic liquid-based ultrasound-assisted dispersive liquid-liquid microextraction (TIL-UDLLμE) followed by electrothermal atomic absorption spectrometry (ETAAS). Cd and Ni were complexed with ammonium pyrrolidinedithiocarbamate (APDC) and extracted into drops of the ionic liquid 1-butyl-3-methylimidazolium hexafluorophosphate [C4MIM][PF6]. A multivariate strategy was applied to estimate the optimum values of the experimental variables influencing the % recovery of the analytes by the TIL-UDLLμE method. Under optimum experimental conditions, the limits of detection (3s) were 0.05 and 0.14 μg L-1, while the relative standard deviations (% RSD) were 3.97 and 3.55 for Cd and Ni, respectively. After extraction, the enhancement factors (EF) were 87 and 79 for Cd and Ni, respectively. The RSD for six replicates of 10 μg L-1 Cd and Ni were 3.97% and 3.55%, respectively. To validate the proposed method, a certified reference material (CRM) of Virginia tobacco leaves was analyzed, and the determined values of Cd and Ni were in good agreement with the certified values. The concentrations of Cd and Ni in artificial saliva extracts correspond to 39-52% and 21-32%, respectively, of the total content of both elements in dry brown and black snuff products.

  9. Registration of Laser Scanning Point Clouds and Aerial Images Using either Artificial or Natural Tie Features

    NASA Astrophysics Data System (ADS)

    Rönnholm, P.; Haggrén, H.

    2012-07-01

    Integration of laser scanning data and photographs is an excellent combination with regard to both redundancy and complementarity. Applications of such integration vary from sensor and data calibration to advanced classification and scene understanding. In this research, only airborne laser scanning and aerial images are considered. Currently, the initial registration is solved using direct orientation sensors, GPS and inertial measurements. However, the accuracy is not usually sufficient for reliable integration of the data sets, and thus the initial registration needs to be improved. Registration of data from different sources requires searching for and measuring accurate tie features. Usually, points, lines or planes are preferred as tie features. Therefore, the majority of recent methods rely heavily on artificial objects, such as buildings, targets or road paintings. However, in many areas no such objects are available. In forestry areas, for example, it would be advantageous to be able to improve the registration between laser data and images without making additional ground measurements. Therefore, there is a need to solve the registration using only natural features, such as vegetation and ground surfaces. Using vegetation as tie features is challenging, because the shape and even the location of vegetation can change, for example because of wind. The aim of this article was to compare registration accuracies derived by using either artificial or natural tie features. The test area included urban objects as well as trees and other vegetation. In this area, two registrations were performed: firstly, using mainly built objects and, secondly, using only vegetation and the ground surface. The registrations were solved by applying the interactive orientation method. As a result, using artificial tie features led to a successful registration in all directions of the coordinate system axes. In the case of natural tie features, however, detecting the correct heights was difficult, which also caused some tilt errors. The planimetric registration was nevertheless accurate.

  10. Application of artificial intelligence to risk analysis for forested ecosystems

    Treesearch

    Daniel L. Schmoldt

    2001-01-01

    Forest ecosystems are subject to a variety of natural and anthropogenic disturbances that extract a penalty from human population values. Such value losses (undesirable effects) combined with their likelihoods of occurrence constitute risk. Assessment or prediction of risk for various events is an important aid to forest management. Artificial intelligence (AI)...

  11. Sleep and wake phase of heart beat dynamics by artificial insymmetrised patterns

    NASA Astrophysics Data System (ADS)

    Dudkowska, A.; Makowiec, D.

    2004-05-01

    In order to determine differences between healthy patients and patients with congestive heart failure, we apply the artificial insymmetrised pattern (AIP) method. The AIP method, by exploiting the human eye's ability to extract regularities and read symmetries in a dot pattern, serves as a tool for qualitative discrimination of heart rate states.

  12. Iris recognition based on key image feature extraction.

    PubMed

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.

  13. Experience improves feature extraction in Drosophila.

    PubMed

    Peng, Yueqing; Xi, Wang; Zhang, Wei; Zhang, Ke; Guo, Aike

    2007-05-09

    Previous exposure to a pattern in the visual scene can enhance subsequent recognition of that pattern in many species from honeybees to humans. However, whether previous experience with a visual feature of an object, such as color or shape, can also facilitate later recognition of that particular feature from multiple visual features is largely unknown. Visual feature extraction is the ability to select the key component from multiple visual features. Using a visual flight simulator, we designed a novel protocol for visual feature extraction to investigate the effects of previous experience on visual reinforcement learning in Drosophila. We found that, after conditioning with a visual feature of objects among combinatorial shape-color features, wild-type flies exhibited poor ability to extract the correct visual feature. However, the ability for visual feature extraction was greatly enhanced in flies trained previously with that visual feature alone. Moreover, we demonstrated that flies might possess the ability to extract the abstract category of "shape" but not a particular shape. Finally, this experience-dependent feature extraction is absent in flies with defective MBs, one of the central brain structures in Drosophila. Our results indicate that previous experience can enhance visual feature extraction in Drosophila and that MBs are required for this experience-dependent visual cognition.

  14. Artificial intelligence and signal processing for infrastructure assessment

    NASA Astrophysics Data System (ADS)

    Assaleh, Khaled; Shanableh, Tamer; Yehia, Sherif

    2015-04-01

    The Ground Penetrating Radar (GPR) is recognized as an effective nondestructive evaluation technique to improve the inspection process. However, data interpretation and the complexity of the results impose some limitations on the practicality of this technique, mainly because a trained, experienced person is needed to interpret the images obtained by the GPR system. In this paper, an algorithm to classify and assess the condition of infrastructure utilizing image processing and pattern recognition techniques is discussed. Features extracted from a dataset of images of defective and healthy slabs are used to train a computer-vision-based system, while another dataset is used to evaluate the proposed algorithm. Initial results show that the proposed algorithm is able to detect the existence of defects with about a 77% success rate.

  15. Neural classification of the selected family of butterflies

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Boniecki, P.; Piekarska-Boniecka, H.; Koszela, K.; Mueller, W.; Górna, K.; Okoń, P.

    2017-07-01

    There has been growing interest among researchers in drawing conclusions from data coded in graphic form. Neural identification of pictorial data, with special emphasis on both quantitative and qualitative analysis, is increasingly utilized to gain and deepen knowledge of empirical data. Extraction and subsequent classification of selected picture features, such as color or surface structure, enables the creation of computer tools to identify objects presented as, for example, digital pictures. This work presents an original computer system, "Processing the image v.1.0", designed to digitize pictures on the basis of a color criterion. The system has been applied to generate a reference learning file for training an Artificial Neural Network (ANN) to identify selected kinds of butterflies from the Papilionidae family.

  16. Weathering characteristics of wood plastic composites reinforced with extracted or delignified wood flour

    Treesearch

    Yao Chen; Nicole M. Stark; Mandla A. Tshabalala; Jianmin Gao; Yongming Fan

    2016-01-01

    This study investigated weathering performance of an HDPE wood plastic composite reinforced with extracted or delignified wood flour (WF). The wood flour was pre-extracted with three different solvents, toluene/ethanol (TE), acetone/water (AW), and hot water (HW), or sodium chlorite/acetic acid. The spectral properties of the composites before and after artificial...

  17. Text feature extraction based on deep learning: a review.

    PubMed

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important matter for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features, and hand-designing an effective feature is a lengthy process; for new applications, by contrast, deep learning can acquire effective feature representations from training data. As a new feature extraction method, deep learning has made notable achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which depend mainly on the prior knowledge of designers and make it difficult to take advantage of big data. Deep learning can automatically learn feature representations from big data, involving millions of parameters. This paper first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and their applications, and finally forecasts the application of deep learning in feature extraction.

  18. Systematic review and meta-analysis of studies of the timing of tracheostomy in adult patients undergoing artificial ventilation

    PubMed Central

    Griffiths, John; Barber, Vicki S; Morgan, Lesley; Young, J Duncan

    2005-01-01

    Objective To compare outcomes in critically ill patients undergoing artificial ventilation who received a tracheostomy early or late in their treatment. Data sources The Cochrane Central Register of Clinical Trials, Medline, Embase, CINAHL, the National Research Register, the NHS Trusts Clinical Trials Register, the Medical Research Council UK database, the NHS Research and Development Health Technology Assessment Programme, the British Heart Foundation database, citation review of relevant primary and review articles, and expert informants. Study selection Randomised and quasi-randomised controlled studies that compared early tracheostomy with either late tracheostomy or prolonged endotracheal intubation. From 15 950 articles screened, 12 were identified as “randomised or quasi-randomised” controlled trials, and five were included for data extraction. Data extraction Five studies with 406 participants were analysed. Descriptive and outcome data were extracted. The main outcome measure was mortality in hospital. The incidence of hospital acquired pneumonia, length of stay in a critical care unit, and duration of artificial ventilation were also recorded. Random effects meta-analyses were performed. Results Early tracheostomy did not significantly alter mortality (relative risk 0.79, 95% confidence interval 0.45 to 1.39). The risk of pneumonia was also unaltered by the timing of tracheostomy (0.90, 0.66 to 1.21). Early tracheostomy significantly reduced duration of artificial ventilation (weighted mean difference –8.5 days, 95% confidence interval –15.3 to –1.7) and length of stay in intensive care (–15.3 days, –24.6 to –6.1). Conclusions In critically ill adult patients who require prolonged mechanical ventilation, performing a tracheostomy at an earlier stage than is currently practised may shorten the duration of artificial ventilation and length of stay in intensive care. PMID:15901643

  19. Facial recognition techniques applied to the automated registration of patients in the emergency treatment of head injuries.

    PubMed

    Gooroochurn, M; Kerr, D; Bouazza-Marouf, K; Ovinis, M

    2011-02-01

    This paper describes the development of a registration framework for image-guided solutions to the automation of certain routine neurosurgical procedures. The registration process aligns the pose of the patient in the preoperative space to that of the intraoperative space. Computerized tomography images are used in the preoperative (planning) stage, whilst white light (TV camera) images are used to capture the intraoperative pose. Craniofacial landmarks, rather than artificial markers, are used as the registration basis for the alignment. To create further synergy between the user and the image-guided system, automated methods for extraction of these landmarks have been developed. The results obtained from the application of a polynomial neural network classifier based on Gabor features for the detection and localization of the selected craniofacial landmarks, namely the ear tragus and eye corners in the white light modality are presented. The robustness of the classifier to variations in intensity and noise is analysed. The results show that such a classifier gives good performance for the extraction of craniofacial landmarks.

  20. An intelligent signal processing and pattern recognition technique for defect identification using an active sensor network

    NASA Astrophysics Data System (ADS)

    Su, Zhongqing; Ye, Lin

    2004-08-01

    The practical utilization of elastic waves, e.g. Rayleigh-Lamb waves, in high-performance structural health monitoring techniques is somewhat impeded by the complicated wave dispersion phenomena, the existence of multiple wave modes, the high susceptibility to diverse interferences, the bulky sampled data and the difficulty of signal interpretation. An intelligent signal processing and pattern recognition (ISPPR) approach using the wavelet transform and artificial neural network algorithms was developed and implemented in a signal processing package (SPP). The ISPPR technique comprehensively performs signal filtration, data compression, characteristic extraction, information mapping and pattern recognition, and is capable of extracting essential yet concise features from acquired raw wave signals and further assisting in structural health evaluation. For validation, the SPP was applied to the prediction of crack growth in an alloy structural beam and to the construction of a damage parameter database for defect identification in CF/EP composite structures. It was clearly apparent that elastic wave propagation-based damage assessment can be dramatically streamlined by introduction of the ISPPR technique.

  1. An artificial nociceptor based on a diffusive memristor.

    PubMed

    Yoon, Jung Ho; Wang, Zhongrui; Kim, Kyung Min; Wu, Huaqiang; Ravichandran, Vignesh; Xia, Qiangfei; Hwang, Cheol Seong; Yang, J Joshua

    2018-01-29

    A nociceptor is a critical and special receptor of a sensory neuron that is able to detect noxious stimulus and provide a rapid warning to the central nervous system to start the motor response in the human body and humanoid robotics. It differs from other common sensory receptors with its key features and functions, including the "no adaptation" and "sensitization" phenomena. In this study, we propose and experimentally demonstrate an artificial nociceptor based on a diffusive memristor with critical dynamics for the first time. Using this artificial nociceptor, we further built an artificial sensory alarm system to experimentally demonstrate the feasibility and simplicity of integrating such novel artificial nociceptor devices in artificial intelligence systems, such as humanoid robots.

  2. A tailored biocatalyst achieved by the rational anchoring of imidazole groups on a natural polymer: furnishing a potential artificial nuclease by sustainable materials engineering.

    PubMed

    Ferreira, José G L; Grein-Iankovski, Aline; Oliveira, Marco A S; Simas-Tosin, Fernanda F; Riegel-Vidotti, Izabel C; Orth, Elisa S

    2015-04-11

    Foreseeing the development of artificial enzymes by sustainable materials engineering, we rationally anchored reactive imidazole groups on gum arabic, a natural biocompatible polymer. The tailored biocatalyst GAIMZ demonstrated catalytic activity (>10^5-fold) in dephosphorylation reactions with recyclable features and was effective in cleaving plasmid DNA, comprising a potential artificial nuclease.

  3. Laser-induced artificial fulgurites

    NASA Astrophysics Data System (ADS)

    Bidin, Noriah; Marsin Sanagi, Mohd; Farah, Mohammed; Naqiuddin Razali, M.; Khamis, Jamil

    2018-07-01

    Fulgurite is a natural glass created by lightning. Naturally it can be found at beaches or in deserts. Artificial fulgurite is created by immersing high-voltage electrodes in a tub of sand. Commonly, fulgurite is of interest among geoscientists, but its applications are still unknown. In the present paper, the concept of natural fulgurite generation is simulated to induce artificial fulgurite. Instead of lightning, a high-power laser beam is used as the source of transient heating. Synthetic sand from agrowaste is used as the target material. Artificial fulgurite is generated after transient heating by the laser beam. This finding can be exploited to extract silica from rice husk ash using laser technology.

  4. Feature extraction for document text using Latent Dirichlet Allocation

    NASA Astrophysics Data System (ADS)

    Prihatini, P. M.; Suryawan, I. K.; Mandia, IN

    2018-01-01

    Feature extraction is one of the stages in an information retrieval system, used to extract the unique feature values of a text document. Feature extraction can be performed by several methods, one of which is Latent Dirichlet Allocation. However, research on text feature extraction using the Latent Dirichlet Allocation method is rarely found for Indonesian text. Therefore, in this research, text feature extraction is implemented for Indonesian text. The research method consists of data acquisition, text pre-processing, initialization, topic sampling and evaluation. The evaluation is done by comparing Precision, Recall and F-Measure values between Latent Dirichlet Allocation and Term Frequency Inverse Document Frequency with KMeans, which is commonly used for feature extraction. The evaluation results show that the Precision, Recall and F-Measure values of the Latent Dirichlet Allocation method are higher than those of the Term Frequency Inverse Document Frequency with KMeans method. This shows that the Latent Dirichlet Allocation method is able to extract features and cluster Indonesian text better than the Term Frequency Inverse Document Frequency with KMeans method.
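
    A minimal sketch of the contrast evaluated above, under assumed settings (toy documents, two topics, two clusters): LDA turns each document into a vector of topic proportions, while the baseline clusters TF-IDF vectors with KMeans. This is not the authors' pipeline, which also includes Indonesian-specific pre-processing and a Precision/Recall/F-Measure evaluation.

      from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.cluster import KMeans

      docs = ["harga pangan naik", "tim sepak bola menang", "pasar saham turun",
              "pertandingan bola berakhir imbang"]          # toy documents

      # LDA features: each document becomes a vector of topic proportions.
      counts = CountVectorizer().fit_transform(docs)
      lda_features = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

      # Baseline: TF-IDF vectors clustered with KMeans.
      tfidf = TfidfVectorizer().fit_transform(docs)
      baseline_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

      print(lda_features.round(2))
      print(baseline_labels)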

  5. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    PubMed

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  6. Integrated feature extraction and selection for neuroimage classification

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Shen, Dinggang

    2009-02-01

    Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
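
    The abstract does not give the algorithmic details, so the sketch below only illustrates the general flavour of coupling SVM-based feature selection with bootstrapped AUC evaluation on synthetic data; the constrained subspace learning step and the exact iteration scheme of the paper are not reproduced.

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 100))
      y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(int)

      def bootstrap_auc(X, y, keep, n_rounds=20):
          """Mean out-of-bag AUC over bootstrap resamples using only the selected features."""
          aucs = []
          for _ in range(n_rounds):
              idx = rng.integers(0, len(y), len(y))
              oob = np.setdiff1d(np.arange(len(y)), idx)
              clf = LinearSVC(C=1.0, dual=False).fit(X[idx][:, keep], y[idx])
              aucs.append(roc_auc_score(y[oob], clf.decision_function(X[oob][:, keep])))
          return float(np.mean(aucs))

      # Rank features by |w| from a linear SVM, then compare candidate subset sizes.
      w = np.abs(LinearSVC(C=1.0, dual=False).fit(X, y).coef_).ravel()
      order = np.argsort(w)[::-1]
      for k in (5, 20, 100):
          print(k, "features -> AUC", round(bootstrap_auc(X, y, order[:k]), 3))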

  7. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    NASA Astrophysics Data System (ADS)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  8. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and of feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each containing a small subset of WAMI images. The feature extraction for the WAMI images in each split is distributed to slave nodes in the Hadoop system, and the feature extraction for each image is performed individually on the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments on feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
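
    A minimal Hadoop-streaming-style mapper illustrating the scheme (not the paper's code): each input line is assumed to hold the path of one WAMI image tile, the mapper extracts features for that tile, and the emitted key/value pairs are aggregated on HDFS by the framework or a trivial reducer. ORB descriptors stand in for whatever features the real pipeline computes.

      # mapper.py
      import sys
      import cv2
      import numpy as np

      for line in sys.stdin:
          path = line.strip()
          if not path:
              continue
          img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          if img is None:
              continue
          keypoints, descriptors = cv2.ORB_create(nfeatures=500).detectAndCompute(img, None)
          # Summarize the descriptors so the emitted value stays small.
          summary = descriptors.mean(axis=0) if descriptors is not None else np.zeros(32)
          print(path + "\t" + ",".join(f"{v:.2f}" for v in summary))

    Such a script would typically be launched with the standard Hadoop streaming jar, pointing -mapper at it and -input/-output at HDFS paths.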

  9. Effect of Galla chinensis on the remineralization of two bovine root lesions morphous in vitro.

    PubMed

    Guo, Bin; Que, Ke-Hua; Jing Yang; Wang, Bo; Liang, Qian-Qian; Xie, Hong-Hui

    2012-09-01

    The present study aims to evaluate the effect of Galla chinensis compounds on the remineralization of two artificial root lesion morphologies in vitro. Sixty bovine dentine blocks were divided into two groups and treated with two levels of demineralization solution to form erosive and subsurface artificial carious lesions in vitro. Each group was then divided into three subgroups, which were treated with a remineralization solution (positive control), deionized water (negative control), or a 4000 mg·L-1 aqueous solution of Galla chinensis extract. The dentine blocks were then subjected to a pH-cycling regime for 7 days. During the first 4 days, the daily cycle included a 21-h treatment application and a 3-h demineralization application; the dentine blocks received the treatment for the entire day during the remaining 3 days. Two specimens from each of the treatment groups were selected and observed under a polarized light microscope. Data collected using a laser scanning confocal microscope were computerized and analyzed. Galla chinensis extract clearly enhanced the remineralization of both the erosive and subsurface lesion patterns in the specimens (P<0.05). The level of remineralization of the erosive lesion by Galla chinensis extract was lower than that of the subsurface lesion (P<0.05). In addition, the remineralization of the subsurface lesion by Galla chinensis extract was higher than that by the remineralization solution (P<0.05). No significant difference was observed between the remineralization of erosive lesions by Galla chinensis extract and by the remineralization solution (P>0.05). Thus, Galla chinensis extract has the potential to improve the remineralization of artificial root lesions under dynamic pH-cycling conditions, indicating its potential use as a natural remineralization medicine.

  10. F-16 Low Altitude Navigation and Targeting Infrared System for Night (LANTIRN) and the Night Close Air Support (CAS) Mission

    DTIC Science & Technology

    1989-06-02

    physical reference can be a natural feature, river bend, distinctly shaped wooded area, or lake. Artificial references such as colored smoke or...fluorescent ground panels can also be placed instead of, or as an aid to recognition of, the natural feature... But, artificial references that are effective...Government Printing Office, 1959. Central Intelligence Agency. National Intelligence Survey, East Germany, Section 23, Weather and Climate. Washington

  11. Applications of artificial intelligence systems in the analysis of epidemiological data.

    PubMed

    Flouris, Andreas D; Duffy, Jack

    2006-01-01

    A brief review of the germane literature suggests that the use of artificial intelligence (AI) statistical algorithms in epidemiology has been limited. We discuss the advantages and disadvantages of using AI systems in large-scale sets of epidemiological data to extract inherent, formerly unidentified, and potentially valuable patterns that human-driven deductive models may miss.

  12. Bio-accessibility and Risk of Exposure to Metals and SVOCs in Artificial Turf Field Fill Materials and Fibers

    PubMed Central

    Pavilonis, Brian T.; Weisel, Clifford P.; Buckley, Brian; Lioy, Paul J.

    2014-01-01

    To reduce maintenance costs, municipalities and schools are starting to replace natural grass fields with a new generation of synthetic turf. Unlike Astro-Turf, which was first introduced in the 1960s, synthetic field turf provides more cushioning to athletes. Part of this cushioning comes from materials such as crumb rubber infill, which is manufactured from recycled tires and may contain a variety of chemicals. The goal of this study was to evaluate potential exposures from playing on artificial turf fields, and the associated risks from trace metals, semivolatile organic compounds (SVOCs), and polycyclic aromatic hydrocarbons (PAHs), by examining typical artificial turf fibers (n=8), different types of infill (n=8), and samples from actual fields (n=7). Three artificial biofluids were prepared: lung, sweat, and digestive fluids. Artificial biofluids were hypothesized to yield a more representative estimation of dose than the levels obtained from total extraction methods. PAHs were routinely below the limit of detection across all three biofluids, precluding completion of a meaningful risk assessment. No SVOCs were identified at quantifiable levels in any extracts based on a match of their mass spectra to compounds that are regulated in soil. The metals were measurable but at concentrations for which the human health risk was estimated to be low. The study demonstrated that for the products and fields we tested, exposure to infill and artificial turf was generally considered de minimis, with the possible exception of lead for some fields and materials. PMID:23758133

  13. Learning Efficient Spatial-Temporal Gait Features with Deep Learning for Human Identification.

    PubMed

    Liu, Wu; Zhang, Cheng; Ma, Huadong; Li, Shuangqun

    2018-02-06

    The integration of the latest breakthroughs in bioinformatics technology on one side and artificial intelligence on the other enables remarkable advances in fields such as intelligent security, computational biology, and healthcare. Among them, biometrics-based automatic human identification is one of the most fundamental and significant research topics. Human gait, a biometric feature with unique capabilities, has gained significant attention because it can be acquired remotely and is robust and secure for biometrics-based human identification. However, existing methods cannot handle well the indistinctive inter-class differences and large intra-class variations of human gait in real-world situations. In this paper, we have developed efficient spatial-temporal gait features with deep learning for human identification. First, we propose a gait energy image (GEI) based Siamese neural network to automatically extract robust and discriminative spatial gait features. Furthermore, we exploit deep 3-dimensional convolutional networks to learn convolutional 3D (C3D) representations as the temporal gait features. Finally, the GEI and C3D gait features are embedded into the null space by the Null Foley-Sammon Transform (NFST). In the new space, the spatial-temporal features are combined with distance metric learning to drive the similarity metric to be small for pairs of gaits from the same person and large for pairs from different persons. Experiments on the world's largest gait database show that our framework impressively outperforms state-of-the-art methods.
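
    For reference, a gait energy image is simply the pixel-wise mean of size-normalized binary silhouettes over a gait cycle; the sketch below shows that step only, with synthetic silhouettes and an assumed 64x44 template size. The Siamese network, C3D features and NFST embedding from the paper are not reproduced here.

      import numpy as np
      import cv2

      def gait_energy_image(silhouettes, size=(64, 44)):
          """silhouettes: list of binary (H, W) arrays, one per frame."""
          frames = []
          for sil in silhouettes:
              ys, xs = np.nonzero(sil)
              if len(xs) == 0:
                  continue
              crop = sil[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # tight bounding box
              frames.append(cv2.resize(crop.astype(np.float32), size[::-1]))
          return np.mean(frames, axis=0)  # values in [0, 1]; brighter = more often foreground

      # Example with synthetic silhouettes:
      rng = np.random.default_rng(0)
      sils = [(rng.random((128, 80)) > 0.7).astype(np.uint8) for _ in range(30)]
      gei = gait_energy_image(sils)
      print(gei.shape)  # (64, 44)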

  14. Evaluating suitability of Pol-SAR (TerraSAR-X, Radarsat-2) for automated sea ice classification

    NASA Astrophysics Data System (ADS)

    Ressel, Rudolf; Singha, Suman; Lehner, Susanne

    2016-05-01

    Satellite-borne SAR imagery has become an invaluable tool in the field of sea ice monitoring. Previously, single-polarimetric imagery was employed in supervised and unsupervised classification schemes for sea ice investigation, preceded by image processing steps such as segmentation and textural feature computation. Recently, with the advent of polarimetric SAR sensors, the investigation of polarimetric features of sea ice has attracted increased attention. While dual-polarimetric data have already been investigated in a number of works, full-polarimetric data have so far not been a major scientific focus. To explore the possibilities of full-polarimetric data and compare the differences between C- and X-band, we analyze in detail an array of datasets acquired simultaneously in C-band (RADARSAT-2) and X-band (TerraSAR-X) over ice-infested areas. First, we propose an array of polarimetric features (Pauli and lexicographic based). Ancillary data from national ice services, SMOS data and expert judgement were utilized to identify the governing ice regimes. Based on these observations, we then extracted the mentioned features. The subsequent supervised classification approach was based on an Artificial Neural Network (ANN). To gain quantitative insight into the quality of the features themselves (and reduce the possible impact of the Hughes phenomenon), we employed mutual information to assess the relevance and redundancy of the features. The results of this information-theoretic analysis guided a pruning process towards an optimal subset of features. In the last step, we compared the classification results of all sensors and images, stated the respective accuracies and discussed output discrepancies in the cases of simultaneous acquisitions.
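
    A hedged sketch of the relevance/redundancy idea on synthetic data (not the authors' exact procedure): each candidate feature is scored by its mutual information with the class labels, and features that share too much information with an already selected feature are pruned.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 8))                    # 8 candidate polarimetric features
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in ice classes
      X[:, 7] = X[:, 0] + 0.01 * rng.normal(size=500)  # a nearly redundant copy of feature 0

      relevance = mutual_info_classif(X, y, random_state=0)
      keep = []
      for i in np.argsort(relevance)[::-1]:
          # Redundancy check: skip features sharing too much information with one already kept.
          redundant = any(mutual_info_regression(X[:, [i]], X[:, j], random_state=0)[0] > 0.5
                          for j in keep)
          if not redundant:
              keep.append(i)
      print("selected features:", keep)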

  15. A comparison between the effects of artificial land cover and anthropogenic heat on a localized heavy rain event in 2008 in Zoshigaya, Tokyo, Japan

    NASA Astrophysics Data System (ADS)

    Souma, Kazuyoshi; Tanaka, Kenji; Suetsugi, Tadashi; Sunada, Kengo; Tsuboki, Kazuhisa; Shinoda, Taro; Wang, Yuqing; Sakakibara, Atsushi; Hasegawa, Koichi; Moteki, Qoosaku; Nakakita, Eiichi

    2013-10-01

    5 August 2008, a localized heavy rainfall event caused a rapid increase in drainpipe discharge, which killed five people working in a drainpipe near Zoshigaya, Tokyo. This study compared the effects of artificial land cover and anthropogenic heat on this localized heavy rainfall event based on three ensemble experiments using a cloud-resolving model that includes realistic urban features. The first experiment CTRL (control) considered realistic land cover and urban features, including artificial land cover, anthropogenic heat, and urban geometry. In the second experiment NOAH (no anthropogenic heat), anthropogenic heat was ignored. In the third experiment NOLC (no land cover), urban heating from artificial land cover was reduced by keeping the urban geometry but with roofs, walls, and roads of artificial land cover replaced by shallow water. The results indicated that both anthropogenic heat and artificial land cover increased the amount of precipitation and that the effect of artificial land cover was larger than that of anthropogenic heat. However, in the middle stage of the precipitation event, the difference between the two effects became small. Weak surface heating in NOAH and NOLC reduced the near-surface air temperature and weakened the convergence of horizontal wind and updraft over the urban areas, resulting in a reduced rainfall amount compared with that in CTRL.

  16. Improving GLOBALlAND30 Artificial Type Extraction Accuracy in Low-Density Residents

    NASA Astrophysics Data System (ADS)

    Hou, Lili; Zhu, Ling; Peng, Shu; Xie, Zhenlei; Chen, Xu

    2016-06-01

    GlobalLand30 is the first 30 m resolution land cover product in the world. It covers the area between 80°N and 80°S. There are ten classes, including artificial cover, water bodies, woodland, lawn, bare land, cultivated land, wetland, sea area, shrub and snow. TM imagery from Landsat is the main data source of GlobalLand30. Within the artificial surface type, one of the omission errors concerns low-density residential areas. In TM images, a scattered distribution is one typical characteristic of low-density residential areas; another is that they are surrounded by large amounts of cultivated land, which causes low-density residential areas to be confused with cultivated land. In order to solve this problem, a nighttime light remote sensing image is used as reference data and, on the basis of NDBI, we add TM band 6 to compute a surface thermal radiation index, TR-NDBI (Thermal Radiation Normalized Difference Building Index), for the purpose of extracting low-density residential areas. The result shows that using TR-NDBI and the nighttime light remote sensing image is a feasible and effective method for extracting low-density residential areas.
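
    For orientation, the standard NDBI is computed from the Landsat TM shortwave-infrared band 5 and near-infrared band 4, and band 6 is the thermal band; the abstract does not give the exact TR-NDBI formula, so the thermal weighting in the sketch below is purely an illustrative assumption, not the authors' definition.

      import numpy as np

      def ndbi(swir_b5, nir_b4):
          return (swir_b5 - nir_b4) / (swir_b5 + nir_b4 + 1e-9)

      def tr_ndbi_example(swir_b5, nir_b4, thermal_b6):
          # Hypothetical combination: modulate NDBI by normalized thermal radiance.
          t = (thermal_b6 - thermal_b6.min()) / (np.ptp(thermal_b6) + 1e-9)
          return ndbi(swir_b5, nir_b4) * t

      rng = np.random.default_rng(0)
      b4, b5, b6 = (rng.random((100, 100)) for _ in range(3))
      print(tr_ndbi_example(b5, b4, b6).shape)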

  17. A framework for feature extraction from hospital medical data with applications in risk prediction.

    PubMed

    Tran, Truyen; Luo, Wei; Phung, Dinh; Gupta, Sunil; Rana, Santu; Kennedy, Richard Lee; Larkins, Ann; Venkatesh, Svetha

    2014-12-30

    Feature engineering is a time-consuming component of predictive modeling. We propose a versatile platform to automatically extract features for risk prediction, based on a pre-defined and extensible entity schema. The extraction is independent of disease type or risk prediction task. We contrast auto-extracted features with baselines generated from the Elixhauser comorbidities. Hospital medical records were transformed into event sequences, to which filters were applied to extract feature sets capturing diversity in temporal scales and data types. The features were evaluated on a readmission prediction task, in comparison with baseline feature sets generated from the Elixhauser comorbidities. The prediction model was logistic regression with elastic net regularization. Prediction horizons of 1, 2, 3, 6, and 12 months were considered for four diverse diseases: diabetes, COPD, mental disorders and pneumonia, with derivation and validation cohorts defined on non-overlapping data-collection periods. For unplanned readmissions, the auto-extracted feature set using socio-demographic information and medical records outperformed baselines derived from the socio-demographic information and Elixhauser comorbidities over all 20 settings (5 prediction horizons over 4 diseases). In particular, for 30-day prediction, the AUCs are: COPD - baseline: 0.60 (95% CI: 0.57, 0.63), auto-extracted: 0.67 (0.64, 0.70); diabetes - baseline: 0.60 (0.58, 0.63), auto-extracted: 0.67 (0.64, 0.69); mental disorders - baseline: 0.57 (0.54, 0.60), auto-extracted: 0.69 (0.64, 0.70); pneumonia - baseline: 0.61 (0.59, 0.63), auto-extracted: 0.70 (0.67, 0.72). The advantages of automatically extracting standard features from complex medical records in a disease- and task-agnostic manner were demonstrated. Auto-extracted features have good predictive power over multiple time horizons. Such feature sets have the potential to form the foundation of complex automated analytic tasks.
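
    A minimal sketch of the modelling step only (elastic-net-regularized logistic regression scored by AUC), with a synthetic feature matrix standing in for the auto-extracted hospital features; the entity schema, temporal filters, and cohort construction of the platform are not reproduced.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 300))                 # auto-extracted features (synthetic)
      y = (X[:, :10].sum(axis=1) + rng.normal(size=2000) > 0).astype(int)  # readmission flag

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      model = LogisticRegression(penalty="elasticnet", solver="saga",
                                 l1_ratio=0.5, C=0.1, max_iter=5000)
      model.fit(X_tr, y_tr)
      print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))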

  18. Optimization of microwave-assisted extraction of total extract, stevioside and rebaudioside-A from Stevia rebaudiana (Bertoni) leaves, using response surface methodology (RSM) and artificial neural network (ANN) modelling.

    PubMed

    Ameer, Kashif; Bae, Seong-Woo; Jo, Yunhee; Lee, Hyun-Gyu; Ameer, Asif; Kwon, Joong-Ho

    2017-08-15

    Stevia rebaudiana (Bertoni) contains stevioside and rebaudioside-A (Reb-A). We compared response surface methodology (RSM) and artificial neural network (ANN) modelling for their estimation and predictive capabilities in building effective models with maximum responses. A 5-level, 3-factor central composite design was used to optimize microwave-assisted extraction (MAE) to obtain maximum yield of the target responses as a function of extraction time (X1: 1-5 min), ethanol concentration (X2: 0-100%) and microwave power (X3: 40-200 W). Maximum values of the three output parameters, 7.67% total extract yield, 19.58 mg/g stevioside yield, and 15.3 mg/g Reb-A yield, were obtained under the optimum extraction conditions of 4 min (X1), 75% (X2), and 160 W (X3). The ANN model demonstrated higher efficiency than the RSM model. Hence, RSM can demonstrate the interaction effects of the inherent MAE parameters on the target responses, whereas ANN can reliably model the MAE process with better predictive and estimation capabilities. Copyright © 2017. Published by Elsevier Ltd.

  19. Comparative analysis of feature extraction methods in satellite imagery

    NASA Astrophysics Data System (ADS)

    Karim, Shahid; Zhang, Ye; Asif, Muhammad Rizwan; Ali, Saad

    2017-10-01

    Feature extraction techniques are extensively used in satellite imagery and are attracting considerable attention for remote sensing applications. State-of-the-art feature extraction methods are appropriate according to the categories and structures of the objects to be detected. Based on the distinctive computations of each feature extraction method, different types of images are selected to evaluate the performance of the methods, including binary robust invariant scalable keypoints (BRISK), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), features from accelerated segment test (FAST), histogram of oriented gradients (HOG), and local binary patterns (LBP). Total computational time is calculated to evaluate the speed of each feature extraction method. The extracted features are counted in shadow regions and preprocessed shadow regions to compare the behavior of each method. We have studied the combination of SURF with FAST and with BRISK individually and found very promising results, with an increased number of features and less computational time. Finally, feature matching is compared for all methods.
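
    As a small illustration of how such a comparison is typically set up (not the paper's exact protocol), the sketch below counts keypoints and measures the detection time of two of the listed detectors, FAST and BRISK, with OpenCV; the image is a random placeholder.

      import time
      import cv2
      import numpy as np

      img = (np.random.default_rng(0).random((512, 512)) * 255).astype(np.uint8)  # placeholder image

      detectors = {"FAST": cv2.FastFeatureDetector_create(),
                   "BRISK": cv2.BRISK_create()}
      for name, det in detectors.items():
          start = time.perf_counter()
          keypoints = det.detect(img, None)
          elapsed = time.perf_counter() - start
          print(f"{name}: {len(keypoints)} keypoints in {elapsed * 1000:.1f} ms")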

  20. Analysis and occurrence of seven artificial sweeteners in German waste water and surface water and in soil aquifer treatment (SAT).

    PubMed

    Scheurer, Marco; Brauch, Heinz-J; Lange, Frank T

    2009-07-01

    A method for the simultaneous determination of seven commonly used artificial sweeteners in water is presented. The analytes were extracted by solid phase extraction using Bakerbond SDB 1 cartridges at pH 3 and analyzed by liquid chromatography electrospray ionization tandem mass spectrometry in negative ionization mode. Ionization was enhanced by post-column addition of the alkaline modifier Tris(hydroxymethyl)amino methane. Except for aspartame and neohesperidin dihydrochalcone, recoveries were higher than 75% in potable water with comparable results for surface water. Matrix effects due to reduced extraction yields in undiluted waste water were negligible for aspartame and neotame but considerable for the other compounds. The widespread distribution of acesulfame, saccharin, cyclamate, and sucralose in the aquatic environment could be proven. Concentrations in two influents of German sewage treatment plants (STPs) were up to 190 microg/L for cyclamate, about 40 microg/L for acesulfame and saccharin, and less than 1 microg/L for sucralose. Removal in the STPs was limited for acesulfame and sucralose and >94% for saccharin and cyclamate. The persistence of some artificial sweeteners during soil aquifer treatment was demonstrated and confirmed their environmental relevance. The use of sucralose and acesulfame as tracers for anthropogenic contamination is conceivable. In German surface waters, acesulfame was the predominant artificial sweetener with concentrations exceeding 2 microg/L. Other sweeteners were detected up to several hundred nanograms per liter in the order saccharin approximately cyclamate > sucralose.

  1. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
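
    A rough sketch of the watershed-on-gradient idea on a synthetic image, not the implementation in the record above: compute a Canny edge map and a gradient surface, seed markers in the regions enclosed by edges (an assumption), and let the watershed separate them into labelled regions.

      import numpy as np
      from skimage import filters, feature, measure, segmentation
      from scipy import ndimage as ndi

      rng = np.random.default_rng(0)
      image = rng.random((128, 128))
      image[30:40, 30:40] += 2.0      # synthetic "rock"
      image[80:95, 60:75] += 2.0

      edges = feature.canny(image, sigma=2)        # Canny edge map
      gradient = filters.sobel(image)              # gradient magnitude as the watershed surface
      markers, _ = ndi.label(~edges)               # seed regions between edges (assumption)
      labels = segmentation.watershed(gradient, markers)
      print("regions found:", len(measure.regionprops(labels)))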

  2. Hybrid ANN optimized artificial fish swarm algorithm based classifier for classification of suspicious lesions in breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Janaki Sathya, D.; Geetha, K.

    2017-12-01

    Automatic mass or lesion classification systems are developed to aid in distinguishing between malignant and benign lesions present in breast DCE-MR images; such systems need to improve both the sensitivity and the specificity of DCE-MR image interpretation in order to be successful for clinical use. A new classifier (a set of features together with a classification method) based on artificial neural networks trained using the artificial fish swarm optimization (AFSO) algorithm is proposed in this paper. The basic idea behind the proposed classifier is to use the AFSO algorithm to search for the best combination of synaptic weights for the neural network. An optimal set of features based on statistical textural features is presented. The experimental outcomes of the proposed suspicious lesion classifier confirm that the resulting classifier performs better than other such classifiers reported in the literature, and demonstrate that improvements in both sensitivity and specificity are possible through automated image analysis.

  3. Identification and classification of similar looking food grains

    NASA Astrophysics Data System (ADS)

    Anami, B. S.; Biradar, Sunanda D.; Savakar, D. G.; Kulkarni, P. V.

    2013-01-01

    This paper describes a comparative study of Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers, taking as a case study the identification and classification of four pairs of similar looking food grains, namely Finger Millet, Mustard, Soyabean, Pigeon Pea, Aniseed, Cumin-seeds, Split Greengram and Split Blackgram. Algorithms are developed to acquire and process color images of these grain samples. The developed algorithms are used to extract 18 color Hue-Saturation-Value (HSV) features and 42 wavelet-based texture features. A Back Propagation Neural Network (BPNN) based classifier is designed using three feature sets, namely color-HSV, wavelet-texture, and their combination. An SVM model for the color-HSV feature set is designed for the same set of samples. For the ANN-based models, classification accuracies ranging from 93% to 96% for color-HSV, from 78% to 94% for the wavelet-texture model, and from 92% to 97% for the combined model are obtained. A classification accuracy ranging from 80% to 90% is obtained for the color-HSV-based SVM model. The training time required for the SVM-based model is substantially less than that for the ANN for the same set of images.
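
    The sketch below illustrates the two feature families mentioned (color HSV statistics and wavelet-based texture energies) under assumed choices such as mean/std statistics and a db1 wavelet; it does not reproduce the paper's 18 + 42 feature definitions or the BPNN/SVM classifiers.

      import numpy as np
      import cv2
      import pywt

      def hsv_features(bgr_image):
          hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
          return np.concatenate([hsv.reshape(-1, 3).mean(axis=0),
                                 hsv.reshape(-1, 3).std(axis=0)])   # 6 color statistics

      def wavelet_texture_features(gray_image, levels=2):
          feats = []
          coeffs = pywt.wavedec2(gray_image.astype(float), "db1", level=levels)
          for detail in coeffs[1:]:                        # (cH, cV, cD) per level
              feats.extend(float(np.mean(np.abs(c))) for c in detail)
          return np.array(feats)                           # 3 * levels energy features

      img = (np.random.default_rng(0).random((64, 64, 3)) * 255).astype(np.uint8)
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      print(hsv_features(img).shape, wavelet_texture_features(gray).shape)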

  4. [Methods of artificial intelligence: a new trend in pharmacy].

    PubMed

    Dohnal, V; Kuca, K; Jun, D

    2005-07-01

    Artificial neural networks (ANN) and genetic algorithms belong to a group of methods called artificial intelligence. The application of ANN to pharmaceutical data can lead to an understanding of the inner structure of the data and the possibility of building a model (adaptation). In addition, in certain cases it is possible to extract rules from the data. The adapted ANN is then ready for the prediction of properties of compounds which were not used in the adaptation phase. Applications of ANN have great potential in the pharmaceutical industry and in the interpretation of analytical, pharmacokinetic or toxicological data.

  5. How to Fabricate Functional Artificial Luciferases for Bioassays.

    PubMed

    Kim, Sung-Bae; Fujii, Rika

    2016-01-01

    The present protocol introduces the fabrication of artificial luciferases (ALuc(®)) by extracting the consensus amino acids from an alignment of copepod luciferase sequences. The resulting ALucs have unique sequence identities that are phylogenetically distinct from those of any existing copepod luciferase. Some ALucs exhibited heat stability, and strong and greatly prolonged optical intensities. The ALucs are applicable to various bioassays as an optical readout, including live cell imaging, single-chain probes, and bioluminescent tags for antibodies. The present protocol provides guidance on how to fabricate a unique artificial luciferase with designed optical properties and functionalities.

  6. Artificial intelligence in cardiology.

    PubMed

    Bonderman, Diana

    2017-12-01

    Decision-making is complex in modern medicine and should ideally be based on available data, structured knowledge and proper interpretation in the context of an individual patient. Automated algorithms, also termed artificial intelligence that are able to extract meaningful patterns from data collections and build decisions upon identified patterns may be useful assistants in clinical decision-making processes. In this article, artificial intelligence-based studies in clinical cardiology are reviewed. The text also touches on the ethical issues and speculates on the future roles of automated algorithms versus clinicians in cardiology and medicine in general.

  7. Non-invasive classification of gas-liquid two-phase horizontal flow regimes using an ultrasonic Doppler sensor and a neural network

    NASA Astrophysics Data System (ADS)

    Musa Abbagoni, Baba; Yeung, Hoi

    2016-08-01

    The identification of flow pattern is a key issue in multiphase flow, which is encountered in the petrochemical industry. It is difficult to identify gas-liquid flow regimes objectively in gas-liquid two-phase flow. This paper presents the feasibility of a clamp-on instrument for objective flow regime classification of two-phase flow using an ultrasonic Doppler sensor and an artificial neural network, which records and processes the ultrasonic signals reflected from the two-phase flow. Experimental data are obtained on a horizontal test rig with a total pipe length of 21 m and a 5.08 cm internal diameter carrying air-water two-phase flow under slug, elongated bubble, stratified-wavy, and stratified flow regimes. Multilayer perceptron neural networks (MLPNNs) are used to develop the classification model. The classifier requires as input features that are representative of the signals. Ultrasound signal features are extracted by applying both power spectral density (PSD) and discrete wavelet transform (DWT) methods to the flow signals. A '1-of-C' coding scheme was adopted to classify the extracted features into one of four flow regime categories. To improve the performance of the flow regime classifier, a second-level neural network was incorporated by using the output of the first-level network as an input feature. The combination of the two network models achieved a higher accuracy than the single neural network models. Classification accuracies are evaluated for both the PSD and the DWT features. The success rates of the two models are: (1) using the PSD features, the classifier misclassified 3 of the 24 test datasets and scored 87.5% accuracy; (2) with the DWT features, the network misclassified only one data point and was able to classify the flow patterns with up to 95.8% accuracy. This approach demonstrates the success of a clamp-on ultrasound sensor for flow regime classification that could be adopted in industrial practice; it is considerably more promising than other techniques as it uses a non-invasive and non-radioactive sensor.
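
    A hedged sketch of the signal-feature path described above, on synthetic signals rather than rig data: Welch power spectral density and discrete-wavelet sub-band energies are concatenated as features for a multilayer perceptron classifier; the two-level network structure of the paper is omitted.

      import numpy as np
      import pywt
      from scipy.signal import welch
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)

      def features(signal, fs=1000):
          _, psd = welch(signal, fs=fs, nperseg=256)            # PSD features
          coeffs = pywt.wavedec(signal, "db4", level=4)         # DWT sub-band coefficients
          dwt_energy = [float(np.sum(c ** 2)) for c in coeffs]  # energy per sub-band
          return np.concatenate([psd, dwt_energy])

      # Synthetic stand-ins for four flow regimes (different dominant frequencies).
      X, y = [], []
      for label, f0 in enumerate([5, 20, 60, 120]):
          for _ in range(30):
              t = np.arange(2048) / 1000.0
              sig = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.normal(size=t.size)
              X.append(features(sig)); y.append(label)
      X, y = np.array(X), np.array(y)

      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)
      print("training accuracy:", clf.score(X, y))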

  8. Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space 1994

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Third International Symposium on Artificial Intelligence, Robotics, and Automation for Space (i-SAIRAS 94), held October 18-20, 1994, in Pasadena, California, was jointly sponsored by NASA, ESA, and Japan's National Space Development Agency, and was hosted by the Jet Propulsion Laboratory (JPL) of the California Institute of Technology. i-SAIRAS 94 featured presentations covering a variety of technical and programmatic topics, ranging from underlying basic technology to specific applications of artificial intelligence and robotics to space missions. i-SAIRAS 94 featured a special workshop on planning and scheduling and provided scientists, engineers, and managers with the opportunity to exchange theoretical ideas, practical results, and program plans in such areas as space mission control, space vehicle processing, data analysis, autonomous spacecraft, space robots and rovers, satellite servicing, and intelligent instruments.

  9. ECG Identification System Using Neural Network with Global and Local Features

    ERIC Educational Resources Information Center

    Tseng, Kuo-Kun; Lee, Dachao; Chen, Charles

    2016-01-01

    This paper proposes a human identification system via extracted electrocardiogram (ECG) signals. Two hierarchical classification structures based on a global shape feature and a local statistical feature are used to process ECG signals. The global shape feature represents the outline information of ECG signals and the local statistical feature extracts the…

  10. Multi-Excitonic Quantum Dot Molecules

    NASA Astrophysics Data System (ADS)

    Scheibner, M.; Stinaff, E. A.; Doty, M. F.; Ware, M. E.; Bracker, A. S.; Gammon, D.; Ponomarev, I. V.; Reinecke, T. L.; Korenev, V. L.

    2006-03-01

    With the ability to create coupled pairs of quantum dots, the next step towards the realization of semiconductor based quantum information processing devices can be taken. However, so far little knowledge has been gained on these artificial molecules. Our photoluminescence experiments on single InAs/GaAs quantum dot molecules provide the systematics of coupled quantum dots by delineating the spectroscopic features of several key charge configurations in such quantum systems, including X, X^+,X^2+, XX, XX^+ (with X being the neutral exciton). We extract general rules which determine the formation of molecular states of coupled quantum dots. These include the fact that quantum dot molecules provide the possibility to realize various spin configurations and to switch the electron hole exchange interaction on and off by shifting charges inside the molecule. This knowledge will be valuable in developing implementations for quantum information processing.

  11. Emergence of Scale-Free Leadership Structure in Social Recommender Systems

    PubMed Central

    Zhou, Tao; Medo, Matúš; Cimini, Giulio; Zhang, Zi-Ke; Zhang, Yi-Cheng

    2011-01-01

    The study of the organization of social networks is important for the understanding of opinion formation, rumor spreading, and the emergence of trends and fashion. This paper reports empirical analysis of networks extracted from four leading sites with social functionality (Delicious, Flickr, Twitter and YouTube) and shows that they all display a scale-free leadership structure. To reproduce this feature, we propose an adaptive network model driven by social recommending. Artificial agent-based simulations of this model highlight a “good get richer” mechanism where users with broad interests and good judgments are likely to become popular leaders for the others. Simulations also indicate that the studied social recommendation mechanism can gradually improve the user experience by adapting to tastes of its users. Finally we outline implications for real online resource-sharing systems. PMID:21857891

  12. Secure VM for Monitoring Industrial Process Controllers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Dipankar; Ali, Mohammad Hassan; Abercrombie, Robert K

    2011-01-01

    In this paper, we examine the biological immune system as an autonomic system for self-protection, which has evolved over millions of years, probably through an extensive process of redesigning, testing, tuning and optimization. The powerful information processing capabilities of the immune system, such as feature extraction, pattern recognition, learning, memory, and its distributive nature, provide rich metaphors for its artificial counterpart. Our study focuses on building an autonomic defense system, using some immunological metaphors for information gathering, analyzing, decision making and launching threat and attack responses. In order to detect Stuxnet-like malware, we propose to add a secure VM (or dedicated host) to the SCADA network to monitor behavior and all software updates. This on-going research effort is not to mimic nature but to explore and learn valuable lessons useful for self-adaptive cyber defense systems.

  13. Artificial intelligence approaches to astronomical observation scheduling

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Miller, Glenn

    1988-01-01

    Automated scheduling will play an increasing role in future ground- and space-based observatory operations. Due to the complexity of the problem, artificial intelligence technology currently offers the greatest potential for the development of scheduling tools with sufficient power and flexibility to handle realistic scheduling situations. Summarized here are the main features of the observatory scheduling problem, how artificial intelligence (AI) techniques can be applied, and recent progress in AI scheduling for Hubble Space Telescope.

  14. Quantitative Hyperspectral Reflectance Imaging

    PubMed Central

    Klein, Marvin E.; Aalderink, Bernard J.; Padoan, Roberto; de Bruin, Gerrit; Steemers, Ted A.G.

    2008-01-01

    Hyperspectral imaging is a non-destructive optical analysis technique that can for instance be used to obtain information from cultural heritage objects unavailable with conventional colour or multi-spectral photography. This technique can be used to distinguish and recognize materials, to enhance the visibility of faint or obscured features, to detect signs of degradation and study the effect of environmental conditions on the object. We describe the basic concept, working principles, construction and performance of a laboratory instrument specifically developed for the analysis of historical documents. The instrument measures calibrated spectral reflectance images at 70 wavelengths ranging from 365 to 1100 nm (near-ultraviolet, visible and near-infrared). By using a wavelength tunable narrow-bandwidth light-source, the light energy used to illuminate the measured object is minimal, so that any light-induced degradation can be excluded. Basic analysis of the hyperspectral data includes a qualitative comparison of the spectral images and the extraction of quantitative data such as mean spectral reflectance curves and statistical information from user-defined regions-of-interest. More sophisticated mathematical feature extraction and classification techniques can be used to map areas on the document, where different types of ink had been applied or where one ink shows various degrees of degradation. The developed quantitative hyperspectral imager is currently in use by the Nationaal Archief (National Archives of The Netherlands) to study degradation effects of artificial samples and original documents, exposed in their permanent exhibition area or stored in their deposit rooms. PMID:27873831

  15. Biometric analysis of the palm vein distribution by means two different techniques of feature extraction

    NASA Astrophysics Data System (ADS)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Solís-Villarreal, J.; Padilla-Vivanco, A.; Castro-Ramos, J.

    2014-09-01

    Vein patterns can be used for access, identification, and authentication purposes, and are more reliable than classical means of identification. Furthermore, these patterns can be used for venipuncture in health care to locate the veins of patients when they cannot be seen with the naked eye. In this paper, an image acquisition system is implemented in order to acquire digital images of people's hands in the near infrared. The image acquisition system consists of a CCD camera and a light source with peak emission at 880 nm. This radiation can penetrate tissue and is strongly absorbed by the deoxyhemoglobin present in the blood of the veins. Our method of analysis consists of several steps, the first of which is the enhancement of the acquired images using spatial filters. After that, adaptive thresholding and mathematical morphology operations are used in order to obtain the distribution of the vein patterns. The overall process is aimed at recognizing people from images of their palm-dorsal vein distributions obtained under near-infrared light. This work compares two different feature extraction techniques, moments and veincode. The classification task is achieved using Artificial Neural Networks. Two databases are used to analyze the performance of the algorithms: the first is owned by the Hong Kong Polytechnic University and the second is our own database.
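
    A minimal sketch of the enhancement, adaptive-thresholding and morphology steps described above is given below, using OpenCV; the filter, block and kernel sizes are assumptions rather than the authors' values.

      # Sketch: NIR palm-vein pattern extraction (enhancement -> threshold -> morphology).
      import cv2

      def vein_pattern(nir_image_path):
          img = cv2.imread(nir_image_path, cv2.IMREAD_GRAYSCALE)
          img = cv2.medianBlur(img, 5)                      # spatial filtering / denoising
          clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
          img = clahe.apply(img)                            # local contrast enhancement
          # adaptive threshold: blockSize=21, C=4 (assumed values)
          veins = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                        cv2.THRESH_BINARY_INV, 21, 4)
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
          veins = cv2.morphologyEx(veins, cv2.MORPH_OPEN, kernel)   # remove speckle
          veins = cv2.morphologyEx(veins, cv2.MORPH_CLOSE, kernel)  # bridge small gaps
          return veins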

  16. Conjunction of wavelet transform and SOM-mutual information data pre-processing approach for AI-based Multi-Station nitrate modeling of watersheds

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Andalib, Gholamreza; Dąbrowska, Dominika

    2017-05-01

    Accurate nitrate load predictions can improve water quality management decisions for watersheds, which affect the environment and drinking water. In this paper, two scenarios were considered for Multi-Station (MS) nitrate load modeling of the Little River watershed. In the first scenario, Markovian characteristics of the streamflow-nitrate time series were used for the MS modeling. For this purpose, the feature extraction criterion of Mutual Information (MI) was employed for input selection of the artificial intelligence models (Feed Forward Neural Network, FFNN, and least square support vector machine). In the second scenario, to account for the seasonality-based characteristics of the time series, the wavelet transform was used to extract multi-scale features of the streamflow-nitrate time series of the watershed's sub-basins to model MS nitrate loads. The Self-Organizing Map (SOM) clustering technique, which finds homogeneous sub-series clusters, was also linked to MI so that a proper cluster agent could be chosen and imposed on the models for predicting the nitrate loads of the watershed's sub-basins. The proposed MS method not only predicts the outlet nitrate but also covers predictions of the interior sub-basins' nitrate load values. The results indicated that the proposed FFNN model coupled with SOM-MI improved the performance of MS nitrate predictions compared to the Markovian-based models by up to 39%. Overall, accurate selection of dominant inputs that consider the seasonality-based characteristics of the streamflow-nitrate process could enhance the efficiency of nitrate load predictions.
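
    As a hedged sketch of the mutual-information-based input selection step (using scikit-learn rather than the authors' implementation; the candidate-input layout is an assumption):

      # Sketch: rank candidate inputs (e.g. lagged streamflow/nitrate series) by MI with the target.
      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      def select_inputs(candidates, target, names, top_k=5):
          """candidates: (n_samples, n_candidates) array; target: nitrate load series."""
          mi = mutual_info_regression(candidates, target, random_state=0)
          order = np.argsort(mi)[::-1][:top_k]
          return [(names[i], float(mi[i])) for i in order]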

  17. Artificial intelligence expert systems with neural network machine learning may assist decision-making for extractions in orthodontic treatment planning.

    PubMed

    Takada, Kenji

    2016-09-01

    New approach for the diagnosis of extractions with neural network machine learning. Seok-Ki Jung and Tae-Woo Kim. Am J Orthod Dentofacial Orthop 2016;149:127-33. Not reported. Mathematical modeling. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. [Bare Soil Moisture Inversion Model Based on Visible-Shortwave Infrared Reflectance].

    PubMed

    Zheng, Xiao-po; Sun, Yue-jun; Qin, Qi-ming; Ren, Hua-zhong; Gao, Zhong-ling; Wu, Ling; Meng, Qing-ye; Wang, Jin-liang; Wang, Jian-hua

    2015-08-01

    Soil is the loose solum of the land surface that can support plants. It consists of minerals, organic matter, air, moisture, microbes, and so on. Among these complex constituents, soil moisture varies greatly, so fast and accurate inversion of soil moisture from remote sensing is crucial. In order to reduce the influence of soil type on the retrieval of soil moisture, this paper proposes a normalized spectral slope and absorption index named NSSAI to estimate soil moisture. The modeling of the new index involves several key steps. Firstly, soil samples with different moisture levels were artificially prepared, and soil reflectance spectra were measured using a spectroradiometer produced by ASD Company. Secondly, the moisture absorption spectral feature located at shortwave wavelengths and the spectral slope over visible wavelengths were calculated after analyzing the regular spectral changes of different soils under different moisture conditions. The advantages of the two features in reducing soil-type effects were then combined to build the NSSAI. Thirdly, a linear relationship between NSSAI and soil moisture was established. The results showed that NSSAI worked better (correlation coefficient 0.93) than most other traditional methods for soil moisture extraction. It can weaken the influence of soil type at different moisture levels and improve the accuracy of bare soil moisture inversion.

  19. Bioaccessibility and Risk of Exposure to Metals and SVOCs in Artificial Turf Field Fill Materials and Fibers.

    PubMed

    Pavilonis, Brian T; Weisel, Clifford P; Buckley, Brian; Lioy, Paul J

    2014-01-01

    To reduce maintenance costs, municipalities and schools are starting to replace natural grass fields with a new generation of synthetic turf. Unlike Astro-Turf, which was first introduced in the 1960s, synthetic field turf provides more cushioning to athletes. Part of this cushioning comes from materials like crumb rubber infill, which is manufactured from recycled tires and may contain a variety of chemicals. The goal of this study was to evaluate potential exposures to trace metals, semi-volatile organic compounds (SVOCs), and polycyclic aromatic hydrocarbons (PAHs) from playing on artificial turf fields, and the associated risks, by examining typical artificial turf fibers (n = 8), different types of infill (n = 8), and samples from actual fields (n = 7). Three artificial biofluids were prepared: lung, sweat, and digestive fluids. Artificial biofluids were hypothesized to yield a more representative estimation of dose than the levels obtained from total extraction methods. PAHs were routinely below the limit of detection across all three biofluids, precluding completion of a meaningful risk assessment. No SVOCs were identified at quantifiable levels in any extracts based on a match of their mass spectrum to compounds that are regulated in soil. The metals were measurable but at concentrations for which human health risk was estimated to be low. The study demonstrated that for the products and fields we tested, exposure to infill and artificial turf was generally considered de minimis, with the possible exception of lead for some fields and materials. © 2013 Society for Risk Analysis.

  20. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    NASA Astrophysics Data System (ADS)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-04-01

    Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. To solve this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input to DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information from both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated into the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input to a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case for a gearbox, and the results confirm the improved accuracy of the running state identification.

  1. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    There are many emotion features. If all of these features are employed to recognize emotions, redundant features may exist; furthermore, the recognition result may be unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features by using the contribution analysis algorithm of the NN. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.

  2. Optimization of ultrasound-assisted extraction of phenolic compounds from grapefruit (Citrus paradisi Macf.) leaves via D-optimal design and artificial neural network design with categorical and quantitative variables.

    PubMed

    Ciğeroğlu, Zeynep; Aras, Ömür; Pinto, Carlos A; Bayramoglu, Mahmut; Kırbaşlar, Ş İsmail; Lorenzo, José M; Barba, Francisco J; Saraiva, Jorge A; Şahin, Selin

    2018-03-06

    The ultrasound-assisted extraction (UAE) of phenolic compounds from grapefruit leaves was optimized using response surface methodology (RSM) by means of a D-optimal experimental design and an artificial neural network (ANN). For this purpose, five numerical factors were selected: ethanol concentration (0-50%), extraction time (15-60 min), extraction temperature (25-50 °C), solid:liquid ratio (50-100 g L(-1)) and calorimetric energy density of ultrasound (0.25-0.50 kW L(-1)), whereas ultrasound probe horn diameter (13 or 19 mm) was chosen as a categorical factor. The optimized experimental conditions yielded by RSM were: 10.80% ethanol concentration; 58.52 min extraction time; 30.37 °C extraction temperature; 52.33 g L(-1) solid:liquid ratio; and 0.457 kW L(-1) ultrasonic power density, with the thick probe type. Under these conditions the total phenolics content was found to be 19.04 mg gallic acid equivalents g(-1) dried leaf. The same dataset was used to train multilayer feed-forward networks using different approaches via MATLAB, with the ANN exhibiting superior performance to RSM (differences included the categorical factor in one model and higher regression coefficients), while close values were obtained for the extraction variables under study, except for ethanol concentration and extraction time. © 2018 Society of Chemical Industry.

  3. A Method Based on Artificial Intelligence To Fully Automatize The Evaluation of Bovine Blastocyst Images.

    PubMed

    Rocha, José Celso; Passalia, Felipe José; Matos, Felipe Delestro; Takahashi, Maria Beatriz; Ciniciato, Diego de Souza; Maserati, Marc Peter; Alves, Mayra Fernanda; de Almeida, Tamie Guibu; Cardoso, Bruna Lopes; Basso, Andrea Cristina; Nogueira, Marcelo Fábio Gouveia

    2017-08-09

    Morphological analysis is the standard method of assessing embryo quality; however, its inherent subjectivity tends to generate discrepancies among evaluators. Using genetic algorithms and artificial neural networks (ANNs), we developed a new method for embryo analysis that is more robust and reliable than standard methods. Bovine blastocysts produced in vitro were classified as grade 1 (excellent or good), 2 (fair), or 3 (poor) by three experienced embryologists according to the International Embryo Technology Society (IETS) standard. The images (n = 482) were subjected to automatic feature extraction, and the results were used as input for a supervised learning process. One part of the dataset (15%) was used for a blind test posterior to the fitting, for which the system had an accuracy of 76.4%. Interestingly, when the same embryologists evaluated a sub-sample (10%) of the dataset, there was only 54.0% agreement with the standard (mode for grades). However, when using the ANN to assess this sub-sample, there was 87.5% agreement with the modal values obtained by the evaluators. The presented methodology is covered by National Institute of Industrial Property (INPI) and World Intellectual Property Organization (WIPO) patents and is currently undergoing a commercial evaluation of its feasibility.

  4. Image segmentation-based robust feature extraction for color image watermarking

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is built. Our method can adaptively extract feature regions from the blocks segmented by SLIC, selecting the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method can obtain a trade-off between high robustness and good image quality.
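
    The two building blocks named above can be illustrated with a short, hedged sketch (scikit-image and SciPy; this is not the full ISRFE/DC-DM scheme, and the perturbed coefficient is arbitrary):

      # Sketch: SLIC superpixel segmentation plus a block DCT round-trip.
      import numpy as np
      from skimage import data, color
      from skimage.segmentation import slic
      from scipy.fft import dctn, idctn

      img = data.astronaut()
      segments = slic(img, n_segments=100, compactness=10)   # superpixel label per pixel

      gray = color.rgb2gray(img)
      block = gray[:8, :8]                                    # one 8x8 block from a candidate region
      coeffs = dctn(block, norm='ortho')
      coeffs[0, 1] += 0.01                                    # toy perturbation of a low-frequency coefficient
      recovered = idctn(coeffs, norm='ortho')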

  5. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has developed rapidly. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain abundant pathological information and are regarded as the first indications of heart valve pathology. Applying the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
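
    A minimal sketch of the DWT-plus-Shannon-envelope idea follows (PyWavelets); the wavelet, decomposition level, chosen subband and smoothing window are assumptions, not the authors' settings.

      # Sketch: Shannon energy envelope of a DWT detail subband of a heart-sound signal.
      import numpy as np
      import pywt

      def shannon_envelope(x, win=64):
          x = x / (np.max(np.abs(x)) + 1e-12)            # normalise
          e = -x**2 * np.log(x**2 + 1e-12)               # Shannon energy per sample
          kernel = np.ones(win) / win
          return np.convolve(e, kernel, mode='same')     # smoothed envelope

      def murmur_envelope(hs_signal, wavelet='db6', level=4):
          coeffs = pywt.wavedec(hs_signal, wavelet, level=level)
          detail = coeffs[2]                             # one detail subband; band choice is an assumption
          return shannon_envelope(detail)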

  6. Introduction to the Special Issue on Innovative Applications of Artificial Intelligence 2014

    DOE PAGES

    Stracuzzi, David J.; Gunning, David

    2015-09-28

    This issue features expanded versions of articles selected from the 2014 AAAI Conference on Innovative Applications of Artificial Intelligence held in Quebec City, Canada. We present a selection of four articles describing deployed applications plus two more articles that discuss work on emerging applications.

  7. Introduction to the Special Issue on Innovative Applications of Artificial Intelligence 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stracuzzi, David J.; Gunning, David

    This issue features expanded versions of articles selected from the 2014 AAAI Conference on Innovative Applications of Artificial Intelligence held in Quebec City, Canada. We present a selection of four articles describing deployed applications plus two more articles that discuss work on emerging applications.

  8. Predicting the Emplacement of Improvised Explosive Devices: An Innovative Solution

    ERIC Educational Resources Information Center

    Lerner, Warren D.

    2013-01-01

    In this quantitative correlational study, simulated data were employed to examine artificial-intelligence techniques or, more specifically, artificial neural networks, as they relate to the location prediction of improvised explosive devices (IEDs). An ANN model was developed to predict IED placement, based upon terrain features and objects…

  9. Identification and interpretation of patterns in rocket engine data: Artificial intelligence and neural network approaches

    NASA Technical Reports Server (NTRS)

    Ali, Moonis; Whitehead, Bruce; Gupta, Uday K.; Ferber, Harry

    1995-01-01

    This paper describes an expert system which is designed to perform automatic data analysis, identify anomalous events and determine the characteristic features of these events. We have employed both artificial intelligence and neural net approaches in the design of this expert system.

  10. Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction

    NASA Astrophysics Data System (ADS)

    Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab

    2017-11-01

    Palmprint recognition systems depend on feature extraction. A feature extraction method using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of the palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. The fused features are then evaluated using an extreme learning machine classifier, before feature selection based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, the Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other feature extraction-based methods.
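
    One branch of the described fusion can be sketched as follows (scikit-image and PyWavelets): HOG descriptors computed on a DWT approximation subband of a palmprint image; the BSIF branch is omitted and all parameter values are assumptions.

      # Sketch: HOG features on the one-level DWT approximation of a palmprint image.
      import pywt
      from skimage.feature import hog

      def palmprint_hog_dwt(gray_image):
          cA, (cH, cV, cD) = pywt.dwt2(gray_image, 'haar')      # one-level 2-D DWT
          feat = hog(cA, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), feature_vector=True)
          return feat                                            # to be fused with BSIF features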

  11. Identification of cultivated land using remote sensing images based on object-oriented artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Li, Nan; Zhu, Xiufang

    2017-04-01

    Cultivated land resources are key to ensuring food security. Timely and accurate access to cultivated land information is conducive to scientific planning of food production and management policies. GaoFen 1 (GF-1) images have high spatial resolution and abundant texture information and thus can be used to identify fragmented cultivated land. In this paper, an object-oriented artificial bee colony algorithm is proposed for extracting cultivated land from GF-1 images. Firstly, the GF-1 image was segmented with the eCognition software and some samples from the segments were manually labeled as one of two classes (cultivated land and non-cultivated land). Secondly, the artificial bee colony (ABC) algorithm was used to search for classification rules based on the spectral and texture information extracted from the image objects. Finally, the extracted classification rules were used to identify the cultivated land area in the image. The experiment was carried out in the Hongze area, Jiangsu Province, using a wide field-of-view sensor image from the GF-1 satellite. The overall classification precision was 94.95%, and the precision for cultivated land was 92.85%. The results show that the object-oriented ABC algorithm can overcome the deficiency of spectral information in GF-1 images and achieve high precision in cultivated land identification.

  12. The Development of Causal Categorization

    ERIC Educational Resources Information Center

    Hayes, Brett K.; Rehder, Bob

    2012-01-01

    Two experiments examined the impact of causal relations between features on categorization in 5- to 6-year-old children and adults. Participants learned artificial categories containing instances with causally related features and noncausal features. They then selected the most likely category member from a series of novel test pairs.…

  13. Combination of artificial neural network and genetic algorithm method for modeling of methylene blue adsorption onto wood sawdust from water samples.

    PubMed

    Khajeh, Mostafa; Sarafraz-Yazdi, Ali; Natavan, Zahra Bameri

    2016-03-01

    The aim of this research was to develop a low-cost, environmentally friendly adsorbent from an abundant source to remove methylene blue (MB) from water samples. Sawdust solid-phase extraction coupled with high-performance liquid chromatography was used for the extraction and determination of MB. In this study, an artificial neural network model based on experimental data is constructed to describe the performance of the sawdust solid-phase extraction method under various operating conditions. The pH, time, amount of sawdust, and temperature were the input variables, while the percentage extraction of MB was the output. The optimum operating conditions were then determined by the genetic algorithm method. The optimized conditions were obtained as follows: 11.5, 22.0 min, 0.3 g, and 26.0°C for the pH of the solution, extraction time, amount of adsorbent, and temperature, respectively. Under these optimum conditions, the detection limit and relative standard deviation were 0.067 μg L(-1) and <2.4%, respectively. The Langmuir and Freundlich adsorption models were applied to describe the isotherm constants for the removal and determination of MB from water samples. © The Author(s) 2013.

  14. Comparative anthelminthic efficacy and safety of Caesalpinia crista seed and piperazine adipate in chickens with artificially induced Ascaridia galli infection.

    PubMed

    Javed, I; Akhtar, M S; Rahman, Z U; Khaliq, T; Ahmad, M

    1994-01-01

    The antiascarid activity of Caesalpinia crista Linn. seeds, popularly known as Karanjwa, was evaluated in chickens of the Fumi breed suffering from artificially induced Ascaridia galli infection. Eggs per gram (EPG) counts were determined in the droppings of chickens before and after treatment with powdered C. crista at doses of 30, 40 and 50 mg/kg of body weight, along with its extracts in water and methanol in amounts representing 50 mg/kg of crude powder. The crude drug at the dose rates of 40 and 50 mg/kg and its methanol extract induced a significant (P < 0.001) effect on post-treatment days 10 and 15, while the 30 mg/kg dose was efficacious (P < 0.05) on day 15 only. However, the aqueous extract did not show significant results. These results suggest that a 50 mg/kg dose of C. crista seed powder, its equivalent methanolic extract and piperazine (200 mg/kg) are equieffective in treating the ascarid infection of poultry. The crude C. crista powder appears to be potent and safer than its methanol extract on the basis of the side effects observed.

  15. Toxicity of seabird guano to sea urchin embryos and interaction with Cu and Pb.

    PubMed

    Rial, Diego; Santos-Echeandía, Juan; Álvarez-Salgado, Xosé Antón; Jordi, Antoni; Tovar-Sánchez, Antonio; Bellas, Juan

    2016-02-01

    Guano is an important source of marine-derived nutrients to seabird nesting areas. Seabirds usually present high levels of metals and other contaminants because of bioaccumulation processes, and biotic depositions can increase the concentration of pollutants in the receiving environments. The objectives of this study were to investigate the toxicity of seabird guano and the joint toxicity of guano, Cu and Pb using the sea urchin embryo-larval bioassay. In a first experiment, aqueous extracts of guano were prepared at two loading rates (0.462 and 1.952 g L(-1)) and toxicity to sea-urchin embryos was tested. Toxicity was low and not dependent on the load of guano used (EC50 0.42 ± 0.03 g L(-1)). Trace metal concentrations were also low, either in guano or in aqueous extracts of guano, and the toxicity of the extracts was apparently related to dissolved organic matter. In a second experiment, the toxicity of Cu-Pb mixtures in artificial seawater and in extracts of guano (at two loadings: 0.015 and 0.073 g L(-1)) was tested. According to individual fittings, Cu added to extracts of guano showed less toxicity than when dissolved in artificial seawater. The response surfaces obtained for mixtures of Cu and Pb in artificial seawater, and in 0.015 g L(-1) and 0.073 g L(-1) of guano, were better described by the Independent Action model adapted to describe antagonism than by the other proposed models. This implies that the EC50 for Cu and Pb increased with the load of guano, with a greater interaction for Cu than for Pb. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Uniform competency-based local feature extraction for remote sensing images

    NASA Astrophysics Data System (ADS)

    Sedaghat, Amin; Mohammadi, Nazila

    2018-01-01

    Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.

  17. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high dimensionality. Simple linear discriminant analysis was used as the classifier. On a public breast cancer biomedical dataset, the 10-fold cross-validation (10 CV) classification accuracy was more than 96%, superior to that of the original features and of a traditional feature extraction method.

  18. An artificial elementary eye with optic flow detection and compositional properties.

    PubMed

    Pericet-Camara, Ramon; Dobrzynski, Michal K; Juston, Raphaël; Viollet, Stéphane; Leitel, Robert; Mallot, Hanspeter A; Floreano, Dario

    2015-08-06

    We describe a 2 mg artificial elementary eye whose structure and functionality are inspired by compound eye ommatidia. Its optical sensitivity and electronic architecture are sufficient to generate the required signals for the measurement of local optic flow vectors in multiple directions. Multiple elementary eyes can be assembled to create a compound vision system of desired shape and curvature spanning large fields of view. The system configurability is validated with the fabrication of a flexible linear array of artificial elementary eyes capable of extracting optic flow over multiple visual directions. © 2015 The Author(s).

  19. Distant touch hydrodynamic imaging with an artificial lateral line.

    PubMed

    Yang, Yingchen; Chen, Jack; Engel, Jonathan; Pandya, Saunvit; Chen, Nannan; Tucker, Craig; Coombs, Sheryl; Jones, Douglas L; Liu, Chang

    2006-12-12

    Nearly all underwater vehicles and surface ships today use sonar and vision for imaging and navigation. However, sonar and vision systems face various limitations, e.g., sonar blind zones, dark or murky environments, etc. Evolved over millions of years, fish use the lateral line, a distributed linear array of flow sensing organs, for underwater hydrodynamic imaging and information extraction. We demonstrate here a proof-of-concept artificial lateral line system. It enables a distant touch hydrodynamic imaging capability to critically augment sonar and vision systems. We show that the artificial lateral line can successfully perform dipole source localization and hydrodynamic wake detection. The development of the artificial lateral line is aimed at fundamentally enhancing human ability to detect, navigate, and survive in the underwater environment.

  20. Application of wavelet transformation and adaptive neighborhood based modified backpropagation (ANMBP) for classification of brain cancer

    NASA Astrophysics Data System (ADS)

    Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry

    2017-08-01

    This paper presents the classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process consists of three stages: feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction and ANMBP is used for classification. The result of feature extraction is a set of feature vectors. Feature reduction was tested using 100 energy values per feature and 10 energy values per feature. The brain cancer classes are normal, Alzheimer, glioma, and carcinoma. Based on the simulation results, 10 energy values per feature are sufficient to classify brain cancer correctly. The correct classification rate of the proposed system is 95%. This research demonstrates that wavelet transformation can be used for feature extraction and ANMBP can be used for the classification of brain cancer.

  1. Sample-space-based feature extraction and class preserving projection for gene expression data.

    PubMed

    Wang, Wenjun

    2013-01-01

    In order to overcome the problems of high computational complexity and serious matrix singularity in feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) on high-dimensional data, sample-space-based feature extraction is presented, which transforms the computation of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used in the implementation of PCA, LDA, and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and the experimental results on gene expression data demonstrate the effectiveness of the method.
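
    The underlying "sample space" trick can be sketched for PCA as follows (a minimal NumPy illustration, not the paper's implementation): when the dimensionality far exceeds the sample count, eigen-decompose the small Gram matrix and express each projection vector as a weighted sum of samples.

      # Sketch: PCA computed in sample space for data with n_samples << n_features.
      import numpy as np

      def sample_space_pca(X, n_components):
          """X: (n_samples, n_features) array, e.g. gene expression profiles."""
          Xc = X - X.mean(axis=0)
          gram = Xc @ Xc.T                          # n x n instead of d x d
          vals, vecs = np.linalg.eigh(gram)
          order = np.argsort(vals)[::-1][:n_components]
          vals, vecs = vals[order], vecs[:, order]
          # each principal direction is a weighted sum of the (centred) samples
          components = Xc.T @ vecs / np.sqrt(np.maximum(vals, 1e-12))
          return components                         # shape (n_features, n_components)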

  2. Low complexity feature extraction for classification of harmonic signals

    NASA Astrophysics Data System (ADS)

    William, Peter E.

    In this dissertation, feature extraction algorithms have been developed for extraction of characteristic features from harmonic signals. The common theme for all developed algorithms is the simplicity in generating a significant set of features directly from the time domain harmonic signal. The features are a time domain representation of the composite, yet sparse, harmonic signature in the spectral domain. The algorithms are adequate for low-power unattended sensors which perform sensing, feature extraction, and classification in a standalone scenario. The first algorithm generates the characteristic features using only the duration between successive zero-crossing intervals. The second algorithm estimates the harmonics' amplitudes of the harmonic structure employing a simplified least squares method without the need to estimate the true harmonic parameters of the source signal. The third algorithm, resulting from a collaborative effort with Daniel White at the DSP Lab, University of Nebraska-Lincoln, presents an analog front end approach that utilizes a multichannel analog projection and integration to extract the sparse spectral features from the analog time domain signal. Classification is performed using a multilayer feedforward neural network. Evaluation of the proposed feature extraction algorithms for classification through the processing of several acoustic and vibration data sets (including military vehicles and rotating electric machines) with comparison to spectral features shows that, for harmonic signals, time domain features are simpler to extract and provide equivalent or improved reliability over the spectral features in both the detection probabilities and false alarm rate.
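
    The idea behind the first algorithm can be sketched as follows (a hedged NumPy illustration; the histogram binning scheme and range are assumptions, not the dissertation's parameters): characterize a harmonic signal by the durations between successive zero crossings.

      # Sketch: zero-crossing-interval histogram as a time-domain feature vector.
      import numpy as np

      def zero_crossing_features(x, fs, n_bins=16, max_interval=0.05):
          signs = np.signbit(x)
          crossings = np.nonzero(signs[1:] != signs[:-1])[0]     # indices of sign changes
          intervals = np.diff(crossings) / fs                    # durations in seconds
          hist, _ = np.histogram(intervals, bins=n_bins, range=(0, max_interval))
          return hist / max(hist.sum(), 1)                       # normalised interval histogram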

  3. Quantitative 3-D Imaging, Segmentation and Feature Extraction of the Respiratory System in Small Mammals for Computational Biophysics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Lynn L.; Trease, Harold E.; Fowler, John

    2007-03-15

    One of the critical steps toward performing computational biology simulations using mesh-based integration methods is using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional; therefore, the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features with an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. “Quantitative” image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed water-tight surface.

  4. Multi-Stage System for Automatic Target Recognition

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Lu, Thomas T.; Ye, David; Edens, Weston; Johnson, Oliver

    2010-01-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feedforward back-propagation neural network (NN) is then trained to classify each feature vector and to remove false positives. A system parameter optimization process has been developed to adapt to various targets and datasets. The objective was to design an efficient computer vision system that can learn to detect multiple targets in large images with unknown backgrounds. Because the target size is small relative to the image size in this problem, there are many regions of the image that could potentially contain the target. A cursory analysis of every region can be computationally efficient, but may yield too many false positives. On the other hand, a detailed analysis of every region can yield better results, but may be computationally inefficient. The multi-stage ATR system was designed to achieve an optimal balance between accuracy and computational efficiency by incorporating both models. The detection stage first identifies potential ROIs where the target may be present by performing a fast Fourier domain OT-MACH filter-based correlation. Because the threshold for this stage is chosen with the goal of detecting all true positives, a number of false positives are also detected as ROIs. The verification stage then transforms the regions of interest into feature space, and eliminates false positives using an artificial neural network classifier. The multi-stage system allows tuning the detection sensitivity and the identification specificity individually in each stage, making it easier to optimize ATR operation for a specific goal. The test results show that the system was successful in substantially reducing the false positive rate when tested on sonar and video image datasets.

  5. Research on oral test modeling based on multi-feature fusion

    NASA Astrophysics Data System (ADS)

    Shi, Yuliang; Tao, Yiyue; Lei, Jun

    2018-04-01

    In this paper, the spectrogram of the speech signal is taken as the input for feature extraction. The advantages of the PCNN in image segmentation and related processing are exploited to process the speech spectrogram and extract features, and a new method combining speech signal processing and image processing is explored. In addition to the spectrogram features, MFCC-based spectral features are computed and fused with the spectrogram features to further improve the accuracy of spoken-language assessment. Considering that the input features are relatively complex and discriminative, we use a Support Vector Machine (SVM) to construct the classifier, and then compare the extracted test voice features with the standard voice features to achieve spoken-standard detection. Experiments show that extracting features from spectrograms using the PCNN is feasible, and that the fusion of image features and spectral features can improve the detection accuracy.
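
    A hedged sketch of the fusion idea follows, using librosa and scikit-learn rather than the PCNN pipeline described above: MFCCs plus crude spectrogram statistics are concatenated and fed to an SVM; the specific features and parameters are assumptions.

      # Sketch: MFCC + spectrogram-statistic fusion as SVM input features.
      import numpy as np
      import librosa
      from sklearn.svm import SVC

      def speech_features(path):
          y, sr = librosa.load(path, sr=None)
          mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)   # cepstral summary
          spec = np.abs(librosa.stft(y))
          spec_stats = np.array([spec.mean(), spec.std()])                  # crude spectrogram statistics
          return np.concatenate([mfcc, spec_stats])

      # feats = np.array([speech_features(p) for p in wav_paths])
      # clf = SVC(kernel='rbf').fit(feats, labels)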

  6. Some Features of Artificially Thickened Fully Developed Turbulent Boundary Layers with Zero Pressure Gradient

    NASA Technical Reports Server (NTRS)

    Klebanoff, P S; Diehl, Z W

    1952-01-01

    Report gives an account of an investigation conducted to determine the feasibility of artificially thickening a turbulent boundary layer on a flat plate. A description is given of several methods used to thicken artificially the boundary layer. It is shown that it is possible to do substantial thickening and obtain a fully developed turbulent boundary layer, which is free from any distortions introduced by the thickening process, and, as such, is a suitable medium for fundamental research.

  7. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    NASA Astrophysics Data System (ADS)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    Increased application of high resolution spatial data such as high resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth, as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, makes it necessary to increase automation techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated images in environmental management studies such as climate-related changes as well as increasing access to high-resolution satellite images underline the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis along with the improvement of the accuracy of results. About 98% overall accuracy and 0.001 quantization error in the recognition of small linear-trending bedforms demonstrate a promising framework.
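
    A minimal, hedged sketch of SOM-based clustering of per-pixel feature vectors is shown below using the MiniSom package (assumed available); the grid size, training length and synthetic features are arbitrary placeholders, not the study's configuration.

      # Sketch: cluster per-pixel feature vectors with a self-organizing map.
      import numpy as np
      from minisom import MiniSom

      # features: (n_pixels, n_features), e.g. band values plus texture measures
      features = np.random.default_rng(0).random((10000, 5))
      som = MiniSom(8, 8, features.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
      som.train_random(features, 5000)
      # map each pixel to its best-matching unit; units can then be labelled
      # (e.g. "linear bedform" vs background) to produce the semi-automatic map
      bmus = np.array([som.winner(f) for f in features])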

  8. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field, and it has recently been used in biometric and multimedia information retrieval systems. This technology builds on successive research in audio feature extraction and analysis. The Probability Distribution Function (PDF) is a statistical tool that is usually used as one step in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed in which the PDF alone serves as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
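
    A minimal sketch of using a per-frame amplitude histogram (an empirical PDF) as the feature itself is given below; the frame length, bin count and amplitude range are assumptions.

      # Sketch: one empirical PDF per frame as the audio feature vector.
      import numpy as np

      def frame_pdfs(signal, frame_len=400, n_bins=32):
          n_frames = len(signal) // frame_len
          frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
          pdfs = []
          for frame in frames:
              hist, _ = np.histogram(frame, bins=n_bins, range=(-1.0, 1.0), density=True)
              pdfs.append(hist)
          return np.array(pdfs)          # one empirical PDF per frame, plotted per speaker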

  9. A comparative study between nonlinear regression and artificial neural network approaches for modelling wild oat (Avena fatua) field emergence

    USDA-ARS?s Scientific Manuscript database

    Non-linear regression techniques are used widely to fit weed field emergence patterns to soil microclimatic indices using S-type functions. Artificial neural networks present interesting and alternative features for such modeling purposes. In this work, a univariate hydrothermal-time based Weibull m...

  10. Effect of Lactoferrin on Oral Biofilm Formation

    DTIC Science & Technology

    2009-10-01

    dental implant failures, denture stomatitis and oral yeast infections such as candidiasis. It is one of the most widely studied biofilm systems, yet...and Company, Sparks, MD) and incubated at 37C for 24 h. P. gingivalis was grown in trypticase soy broth– yeast extract supplemented with 0.05% cysteine...protein, was purchased from (Sigma). In the attachment assays, artificial saliva (1 g lemco (refined meat extract of very light colour), 2 g yeast extract

  11. Establishing a learning foundation in a dynamically changing world: Insights from artificial language work

    NASA Astrophysics Data System (ADS)

    Gonzales, Kalim

    It is argued that infants build a foundation for learning about the world through their incidental acquisition of the spatial and temporal regularities surrounding them. A challenge is that learning occurs across multiple contexts whose statistics can differ greatly. Two artificial language studies with 12-month-olds demonstrate that infants come prepared to parse statistics across contexts using the temporal and perceptual features that distinguish one context from another. These results suggest that infants can organize their statistical input with a wider range of features than typically considered. Possible attention, decision making, and memory mechanisms are discussed.

  12. Supervised pixel classification using a feature space derived from an artificial visual system

    NASA Technical Reports Server (NTRS)

    Baxter, Lisa C.; Coggins, James M.

    1991-01-01

    Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated, and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not by image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.

  13. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  14. Classification of Medical Datasets Using SVMs with Hybrid Evolutionary Algorithms Based on Endocrine-Based Particle Swarm Optimization and Artificial Bee Colony Algorithms.

    PubMed

    Lin, Kuan-Cheng; Hsieh, Yi-Hsiu

    2015-10-01

    The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a problem of feature subset selection, such as combination optimization problems. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to problems of optimization in a diversity of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the accuracy of the proposed hybrid evolutionary algorithm is superior to that of basic PSO, EPSO and ABC algorithms, with regard to classification accuracy using subsets with a reduced number of features.

  15. SU-E-J-256: Predicting Metastasis-Free Survival of Rectal Cancer Patients Treated with Neoadjuvant Chemo-Radiotherapy by Data-Mining of CT Texture Features of Primary Lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Wang, J; Shen, L

    Purpose: The purpose of this study is to investigate the relationship between computed tomographic (CT) texture features of primary lesions and metastasis-free survival for rectal cancer patients, and to develop a data-mining prediction model using texture features. Methods: A total of 220 rectal cancer patients treated with neoadjuvant chemo-radiotherapy (CRT) were enrolled in this study. All patients underwent CT scans before CRT. The primary lesions on the CT images were delineated by two experienced oncologists. The CT images were filtered by Laplacian of Gaussian (LoG) filters with different filter values (1.0-2.5: from fine to coarse). Both filtered and unfiltered images were analyzed using Gray-level Co-occurrence Matrix (GLCM) texture analysis in different directions (transversal, sagittal, and coronal). In total, 270 texture features with different species, directions and filter values were extracted. Texture features were examined with Student's t-test to select predictive features. Principal Component Analysis (PCA) was performed on the selected features to reduce feature collinearity. An artificial neural network (ANN) and logistic regression were applied to establish metastasis prediction models. Results: Forty-six of the 220 patients developed metastasis with a follow-up time of more than 2 years. Sixty-seven texture features were significantly different in the t-test (p<0.05) between patients with and without metastasis, and 12 of them were extremely significant (p<0.001). The area under the curve (AUC) of the ANN was 0.72, and the concordance index (CI) of the logistic regression was 0.71. The predictability of the ANN was slightly better than that of the logistic regression. Conclusion: CT texture features of primary lesions are related to metastasis-free survival of rectal cancer patients. Both ANN-based and logistic-regression-based models can be developed for prediction.
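
    A hedged sketch of GLCM texture features of the kind described is shown below using SciPy and scikit-image; the LoG pre-filtering, quantization and property list here are simplified assumptions rather than the study's protocol.

      # Sketch: LoG-filtered ROI -> quantized image -> GLCM texture properties.
      import numpy as np
      from scipy.ndimage import gaussian_laplace
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(roi, log_sigma=1.5, levels=32):
          filtered = gaussian_laplace(roi.astype(float), sigma=log_sigma)   # LoG filtering
          edges = np.linspace(filtered.min(), filtered.max(), levels)
          q = np.digitize(filtered, edges) - 1                              # quantize to 0..levels-1
          glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                              angles=[0, np.pi / 2], levels=levels,
                              symmetric=True, normed=True)
          return [float(graycoprops(glcm, prop).mean())
                  for prop in ('contrast', 'homogeneity', 'energy', 'correlation')]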

  16. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    PubMed

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient for conveying an individual's innate emotions in communication. However, the variation in facial expressions makes reliable identification of human emotions difficult. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expressions are analyzed in the context of the cloud model. The feature extraction and representation algorithm is then established using cloud generators. With the forward cloud generator, arbitrarily many facial expression images can be regenerated to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computational model is tested on the Japanese Female Facial Expression database, and three common features are extracted from the seven facial expression images. Finally, the paper is concluded with closing remarks.

  17. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
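
    To avoid assuming PyEEG's exact function signatures, the sketch below implements one representative feature, relative spectral band power per EEG epoch, in plain NumPy. It only illustrates the kind of per-epoch feature such a module computes; the sampling rate, epoch length and band limits are assumptions, and this is not PyEEG's own API.

```python
# Minimal sketch: relative spectral band power for an EEG epoch, the kind of
# feature a module like PyEEG provides. Pure NumPy; not PyEEG's own API.
import numpy as np

def band_powers(signal, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
    """Relative power in delta/theta/alpha/beta bands from a periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    total = psd[(freqs >= bands[0][0]) & (freqs < bands[-1][1])].sum()
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum() / total
                     for lo, hi in bands])

fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(0, 4, 1.0 / fs)             # one 4-second epoch
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
print("delta/theta/alpha/beta relative power:", band_powers(epoch, fs).round(3))
```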

  18. PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction

    PubMed Central

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction. PMID:21512582

  19. Deep feature extraction and combination for synthetic aperture radar target classification

    NASA Astrophysics Data System (ADS)

    Amrani, Moussa; Jiang, Feng

    2017-10-01

    Feature extraction has always been a difficult problem limiting the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR); selecting discriminative features to train a classifier is an essential prerequisite. Inspired by the great success of convolutional neural networks (CNNs), we address SAR target classification by proposing a feature extraction method that exploits deep features extracted by CNNs from SAR images to provide more discriminative features and a more robust representation. First, a pretrained VGG-S network is fine-tuned on the moving and stationary target acquisition and recognition (MSTAR) public release database. Second, after simple preprocessing, the fine-tuned network is used as a fixed feature extractor to extract deep features from the processed SAR images. Third, the extracted deep features are fused using traditional concatenation and a discriminant correlation analysis algorithm. Finally, for target classification, a K-nearest neighbors classifier based on LogDet divergence-based metric learning with triplet constraints is adopted as the baseline classifier. Experiments on MSTAR demonstrate that the proposed method outperforms state-of-the-art methods in classification accuracy.
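
    The "fixed feature extractor" step can be sketched with torchvision. The paper fine-tunes VGG-S, which is not in torchvision, so the sketch substitutes VGG-16 with its last fully connected layer removed, and random tensors stand in for preprocessed SAR chips; the weights enum assumes torchvision 0.13 or later. It is a generic illustration of extracting deep features and passing them to a simple classifier, not the authors' pipeline.

```python
# Minimal sketch: a pretrained CNN as a fixed feature extractor (VGG-16 here,
# standing in for the paper's fine-tuned VGG-S), followed by a simple k-NN.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights   # torchvision >= 0.13
from sklearn.neighbors import KNeighborsClassifier

backbone = vgg16(weights=VGG16_Weights.DEFAULT)        # weights download on first use
backbone.classifier = nn.Sequential(*list(backbone.classifier.children())[:-1])  # drop last FC
backbone.eval()                                        # fixed extractor: no training here

@torch.no_grad()
def extract(batch):
    """Return 4096-D deep features for a batch of 3x224x224 images."""
    return backbone(batch).numpy()

# Placeholder data: random tensors stand in for preprocessed SAR image chips.
x_train = torch.rand(8, 3, 224, 224)
y_train = [0, 0, 1, 1, 2, 2, 3, 3]
x_test = torch.rand(2, 3, 224, 224)

knn = KNeighborsClassifier(n_neighbors=1).fit(extract(x_train), y_train)
print("predicted classes:", knn.predict(extract(x_test)))
```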

  20. Supervised non-negative tensor factorization for automatic hyperspectral feature extraction and target discrimination

    NASA Astrophysics Data System (ADS)

    Anderson, Dylan; Bapst, Aleksander; Coon, Joshua; Pung, Aaron; Kudenov, Michael

    2017-05-01

    Hyperspectral imaging provides a highly discriminative and powerful signature for target detection and discrimination. Recent literature has shown that considering additional target characteristics, such as spatial or temporal profiles, simultaneously with spectral content can greatly increase classifier performance. Considering these additional characteristics in a traditional discriminative algorithm requires a feature extraction step be performed first. An example of such a pipeline is computing a filter bank response to extract spatial features followed by a support vector machine (SVM) to discriminate between targets. This decoupling between feature extraction and target discrimination yields features that are suboptimal for discrimination, reducing performance. This performance reduction is especially pronounced when the number of features or available data is limited. In this paper, we propose the use of Supervised Nonnegative Tensor Factorization (SNTF) to jointly perform feature extraction and target discrimination over hyperspectral data products. SNTF learns a tensor factorization and a classification boundary from labeled training data simultaneously. This ensures that the features learned via tensor factorization are optimal for both summarizing the input data and separating the targets of interest. Practical considerations for applying SNTF to hyperspectral data are presented, and results from this framework are compared to decoupled feature extraction/target discrimination pipelines.

  1. Combination of counterpropagation artificial neural networks and antioxidant activities for comprehensive evaluation of associated-extraction efficiency of various cyclodextrins in the traditional Chinese formula Xue-Zhi-Ning.

    PubMed

    Sun, Lili; Yang, Jianwen; Wang, Meng; Zhang, Huijie; Liu, Yanan; Ren, Xiaoliang; Qi, Aidi

    2015-11-10

    Xue-Zhi-Ning (XZN) is a widely used traditional Chinese medicine formula to treat hyperlipidemia. Recently, cyclodextrins (CDs) have been used extensively to minimize problems related to drug bioavailability, such as low solubility and poor stability. The objective of this study was to determine the associated-extraction efficiency of various CDs in XZN. Three types of CDs were evaluated: native CDs (α-CD, β-CD), hydrophilic CD derivatives (HP-β-CD and Me-β-CD), and ionic CD derivatives (SBE-β-CD and CM-β-CD). An ultra high-performance liquid chromatography (UHPLC) fingerprint was applied to determine the components in the CD extracts and the original aqueous extract (OAE). A counterpropagation artificial neural network (CP-ANN) was used to analyze the components in the different extracts and compare the extraction selectivity of the various CDs. Extraction efficiencies of the various CDs in terms of extracted components follow the ranking: ionic CD derivatives>hydrophilic CD derivatives>native CDs>OAE. In addition, different types of CDs show their own extraction selectivity, and ionic CD derivatives present the strongest associated-extraction efficiency. Antioxidant potentials of the various extracts were evaluated by determining the inhibition of spontaneous, H2O2-induced, CCl4-induced and Fe(2+)/ascorbic acid-induced lipid peroxidation (LPO) and analyzing the scavenging capacity for DPPH and hydroxyl radicals. The order of extraction efficiencies of the various CDs relative to antioxidant activities is as follows: SBE-β-CD>CM-β-CD>HP-β-CD>Me-β-CD>β-CD>α-CD. These results demonstrate that all of the CDs studied increase the extraction efficiency and that ionic CD derivatives (SBE-β-CD and CM-β-CD) present the highest extraction capability in terms of amount extracted and antioxidant activities of the extracts. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. [Identification of spill oil species based on low concentration synchronous fluorescence spectra and RBF neural network].

    PubMed

    Liu, Qian-qian; Wang, Chun-yan; Shi, Xiao-feng; Li, Wen-dong; Luan, Xiao-ning; Hou, Shi-lin; Zhang, Jin-liang; Zheng, Rong-er

    2012-04-01

    In this paper, a new method was developed to differentiate spill oil samples. Synchronous fluorescence spectra in the low, nonlinear concentration range of 10(-2) - 10(-1) g x L(-1) were collected to build the training database. A radial basis function artificial neural network (RBF-ANN) was used to classify the sample sets, with principal component analysis (PCA) as the feature extraction method. The recognition rate for the closely related oil source samples is 92%. All the results demonstrate that the proposed method can identify crude oil samples effectively from just one synchronous spectrum of the spill oil sample. The method is expected to be well suited to real-time spill oil identification and can also be applied readily to oil logging and to the analysis of other multi-PAH or multi-fluorescent mixtures.

  3. Using Machine Learning To Predict Which Light Curves Will Yield Stellar Rotation Periods

    NASA Astrophysics Data System (ADS)

    Agüeros, Marcel; Teachey, Alexander

    2018-01-01

    Using time-domain photometry to reliably measure a solar-type star's rotation period requires that its light curve have a number of favorable characteristics. The probability of recovering a period will be a non-linear function of these light curve features, which are either astrophysical in nature or set by the observations. We employ standard machine learning algorithms (artificial neural networks and random forests) to predict whether a given light curve will produce a robust rotation period measurement from its Lomb-Scargle periodogram. The algorithms are trained and validated using salient statistics extracted from both simulated light curves and their corresponding periodograms, and we apply these classifiers to the most recent Intermediate Palomar Transient Factory (iPTF) data release. With this pipeline, we anticipate measuring rotation periods for a significant fraction of the ∼4×10^8 stars in the iPTF footprint.
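
    The classification step, in which summary statistics from a light curve's Lomb-Scargle periodogram are fed to a standard classifier, can be sketched as follows. The simulated light curves, the three summary statistics and the labelling rule are illustrative assumptions, not the iPTF training set or the authors' feature list.

```python
# Minimal sketch: periodogram summary statistics + random forest, in the spirit of
# predicting whether a light curve will yield a reliable rotation period.
import numpy as np
from scipy.signal import lombscargle
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
freqs = np.linspace(0.05, 5.0, 500)        # angular frequencies probed (rad/day)

def make_curve(periodic):
    t = np.sort(rng.uniform(0, 60, 120))   # irregular sampling over 60 days
    y = rng.normal(0, 1.0, t.size)
    if periodic:
        y += 2.0 * np.sin(2 * np.pi * t / rng.uniform(2, 20))
    return t, y - y.mean()

def features(t, y):
    p = lombscargle(t, y, freqs)
    # Peak power, peak-to-median contrast, and location of the peak.
    return [p.max(), p.max() / np.median(p), freqs[p.argmax()]]

X, y = [], []
for label in (0, 1):
    for _ in range(100):
        X.append(features(*make_curve(bool(label))))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean().round(3))
```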

  4. A modified active appearance model based on an adaptive artificial bee colony.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

    The active appearance model (AAM) is one of the most popular model-based approaches and has been used extensively to extract features by accurately modeling human faces under various physical and environmental conditions. However, fitting such a model to an original image is a challenging task. The state of the art shows that optimization methods can resolve this problem, yet applying optimization effectively remains a common difficulty. Hence, in this paper we propose an AAM-based face recognition technique that resolves the AAM fitting problem by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation makes fitting more efficient than with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, a proprietary 2.5D face dataset, and the UBIRIS v1 image dataset. The results reveal that the proposed face recognition technique performs effectively in terms of recognition accuracy.

  5. Artificial intelligence systems based on texture descriptors for vaccine development.

    PubMed

    Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra

    2011-02-01

    The aim of this work is to analyze and compare several feature extraction methods for peptide classification that are based on the calculation of texture descriptors starting from a matrix representation of the peptide. This texture-based representation of the peptide is then used to train a support vector machine classifier. In our experiments, the best results are obtained using local binary pattern variants and the discrete cosine transform with selected coefficients. These results are better than those previously reported that employed texture descriptors for peptide representation. In addition, we perform experiments that combine these with standard approaches based on the amino acid sequence. The experimental section reports several tests performed on a vaccine dataset for the prediction of peptides that bind human leukocyte antigens and on a human immunodeficiency virus (HIV-1) dataset. Experimental results confirm the usefulness of our novel descriptors. The MATLAB implementation of our approaches is available at http://bias.csr.unibo.it/nanni/TexturePeptide.zip.

  6. Towards better modelling of drug-loading in solid lipid nanoparticles: Molecular dynamics, docking experiments and Gaussian Processes machine learning.

    PubMed

    Hathout, Rania M; Metwally, Abdelkader A

    2016-11-01

    This study is one of a series applying computational processes and tools to mine information, analyse data and ultimately extract correlations and meaningful outcomes. In this context, binding energies can be used to model and predict the mass of drug loaded in solid lipid nanoparticles: literature-gathered drugs were docked with the MOE® software package onto tripalmitin matrices that had been molecularly simulated with GROMACS®. Gaussian processes, a supervised machine learning technique, were then used to correlate the drugs' descriptors (e.g. M.W., xLogP, TPSA and fragment complexity) with their molecular docking binding energies. A lower percentage bias was obtained compared with previous studies, which allows the loaded mass of any drug in the investigated solid lipid nanoparticles to be estimated accurately simply by reducing its chemical structure to its main features (descriptors). Copyright © 2016 Elsevier B.V. All rights reserved.
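
    The regression step, mapping a small set of molecular descriptors to a docking binding energy with a Gaussian process, can be sketched with scikit-learn. The synthetic descriptor matrix, the linear-plus-noise "binding energies" and the RBF kernel are assumptions for illustration, not the study's data or modelling choices.

```python
# Minimal sketch: Gaussian-process regression from drug descriptors to docking
# binding energies. Synthetic data; kernel and descriptor set are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 60
# Columns stand in for descriptors such as M.W., xLogP, TPSA, fragment complexity.
descriptors = rng.normal(size=(n, 4))
binding_energy = (-5.0 - 1.2 * descriptors[:, 0] + 0.8 * descriptors[:, 1]
                  + rng.normal(0, 0.3, n))               # synthetic, in kcal/mol

scaler = StandardScaler().fit(descriptors)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                               normalize_y=True).fit(scaler.transform(descriptors),
                                                     binding_energy)

x_new = scaler.transform(rng.normal(size=(1, 4)))         # descriptors of a new drug
mean, std = gpr.predict(x_new, return_std=True)
print(f"predicted binding energy: {mean[0]:.2f} +/- {std[0]:.2f} kcal/mol")
```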

  7. Measurement and modelization of silica opal reflection properties: Optical determination of the silica index

    NASA Astrophysics Data System (ADS)

    Avoine, Amaury; Hong, Phan Ngoc; Frederich, Hugo; Frigerio, Jean-Marc; Coolen, Laurent; Schwob, Catherine; Nga, Pham Thu; Gallas, Bruno; Maître, Agnès

    2012-10-01

    Self-assembled artificial opals (in particular silica opals) constitute a model system to study the optical properties of three-dimensional photonic crystals. The silica optical index is a key parameter to correctly describe an opal but is difficult to measure at the submicrometer scale and usually treated as a free parameter. Here, we propose a method to extract the silica index from the opal reflection spectra and we validate it by comparison with two independent methods based on infrared measurements. We show that this index gives a correct description of the opal reflection spectra, either by a band structure or by a Bragg approximation. In particular, we are able to provide explanations in quantitative agreement with the measurements for two features: the observation of a second reflection peak in the specular direction, and the quasi-collapse of the p-polarized main reflection peak at a typical angle of 54°.

  8. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    PubMed

    Kang, Min-Joo; Kang, Je-Won

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicle network. The parameters defining the DNN structure are trained with probability-based feature vectors extracted from in-vehicle network packets. For a given packet, the DNN provides the probability of each class, discriminating normal from attack packets, and thus the sensor can identify any malicious attack on the vehicle. Compared with a traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning, such as initializing the parameters through unsupervised pre-training of deep belief networks (DBNs), thereby improving detection accuracy. Experimental results demonstrate that the proposed technique can provide a real-time response to an attack with a significantly improved detection ratio on the controller area network (CAN) bus.

  9. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security

    PubMed Central

    Kang, Min-Joo

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicle network. The parameters defining the DNN structure are trained with probability-based feature vectors extracted from in-vehicle network packets. For a given packet, the DNN provides the probability of each class, discriminating normal from attack packets, and thus the sensor can identify any malicious attack on the vehicle. Compared with a traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning, such as initializing the parameters through unsupervised pre-training of deep belief networks (DBNs), thereby improving detection accuracy. Experimental results demonstrate that the proposed technique can provide a real-time response to an attack with a significantly improved detection ratio on the controller area network (CAN) bus. PMID:27271802

  10. Experimental Study on GFRP Surface Cracks Detection Using Truncated-Correlation Photothermal Coherence Tomography

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Liu, Junyan; Mohummad, Oliullah; Wang, Yang

    2018-04-01

    In this paper, truncated-correlation photothermal coherence tomography (TC-PCT) was used as a nondestructive inspection technique to evaluate glass-fiber reinforced polymer (GFRP) composite surface cracks. A chirped-pulsed signal combining linear frequency modulation and pulse excitation was proposed as the excitation for detecting GFRP composite surface cracks. The basic principle of TC-PCT and the feature extraction algorithm for the thermal-wave signal were described. Comparison experiments between lock-in thermography, thermal-wave radar imaging and chirped-pulsed photothermal radar were carried out for detecting artificial surface cracks in GFRP. Experimental results showed that chirped-pulsed photothermal radar offers a high signal-to-noise ratio for detecting GFRP composite surface cracks. TC-PCT, as a depth-resolved photothermal imaging modality, was employed to enable three-dimensional visualization of GFRP composite surface cracks. The results showed that TC-PCT can effectively evaluate the crack depth in the GFRP composite.

  11. Discontinuity Detection in the Shield Metal Arc Welding Process

    PubMed Central

    Cocota, José Alberto Naves; Garcia, Gabriel Carvalho; da Costa, Adilson Rodrigues; de Lima, Milton Sérgio Fernandes; Rocha, Filipe Augusto Santos; Freitas, Gustavo Medeiros

    2017-01-01

    This work proposes a new methodology for the detection of discontinuities in the weld bead applied in Shielded Metal Arc Welding (SMAW) processes. The detection system is based on two sensors—a microphone and piezoelectric—that acquire acoustic emissions generated during the welding. The feature vectors extracted from the sensor dataset are used to construct classifier models. The approaches based on Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers are able to identify with a high accuracy the three proposed weld bead classes: desirable weld bead, shrinkage cavity and burn through discontinuities. Experimental results illustrate the system’s high accuracy, greater than 90% for each class. A novel Hierarchical Support Vector Machine (HSVM) structure is proposed to make feasible the use of this system in industrial environments. This approach presented 96.6% overall accuracy. Given the simplicity of the equipment involved, this system can be applied in the metal transformation industries. PMID:28489045

  12. Computational Analysis of Behavior.

    PubMed

    Egnor, S E Roian; Branson, Kristin

    2016-07-08

    In this review, we discuss the emerging field of computational behavioral analysis-the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important to both obtaining biologically relevant behavioral data and enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.

  13. Discontinuity Detection in the Shield Metal Arc Welding Process.

    PubMed

    Cocota, José Alberto Naves; Garcia, Gabriel Carvalho; da Costa, Adilson Rodrigues; de Lima, Milton Sérgio Fernandes; Rocha, Filipe Augusto Santos; Freitas, Gustavo Medeiros

    2017-05-10

    This work proposes a new methodology for the detection of discontinuities in the weld bead applied in Shielded Metal Arc Welding (SMAW) processes. The detection system is based on two sensors-a microphone and piezoelectric-that acquire acoustic emissions generated during the welding. The feature vectors extracted from the sensor dataset are used to construct classifier models. The approaches based on Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers are able to identify with a high accuracy the three proposed weld bead classes: desirable weld bead, shrinkage cavity and burn through discontinuities. Experimental results illustrate the system's high accuracy, greater than 90% for each class. A novel Hierarchical Support Vector Machine (HSVM) structure is proposed to make feasible the use of this system in industrial environments. This approach presented 96.6% overall accuracy. Given the simplicity of the equipment involved, this system can be applied in the metal transformation industries.

  14. Neural networks to classify speaker independent isolated words recorded in radio car environments

    NASA Astrophysics Data System (ADS)

    Alippi, C.; Simeoni, M.; Torri, V.

    1993-02-01

    Many applications, in particular those requiring nonlinear signal processing, have proved artificial neural networks (ANNs) to be invaluable tools for model-free estimation. The classification abilities of ANNs are addressed by testing their performance in a speaker-independent word recognition application. A real-world case requiring implementation on compact integrated devices is taken into account: the classification of isolated words in a radio car environment. A multispeaker database of isolated words was recorded in different environments. Data were first processed to determine the boundaries of each word and then to extract speech features, the latter accomplished using cepstral coefficient representations, log area ratios and filter-bank techniques. Multilayer perceptron and adaptive vector quantization neural paradigms were tested to find a reasonable compromise between performance and network simplicity, a fundamental requirement for implementing compact, real-time neural devices.

  15. AGSuite: Software to conduct feature analysis of artificial grammar learning performance.

    PubMed

    Cook, Matthew T; Chubala, Chrissy M; Jamieson, Randall K

    2017-10-01

    To simplify the problem of studying how people learn natural language, researchers use the artificial grammar learning (AGL) task. In this task, participants study letter strings constructed according to the rules of an artificial grammar and subsequently attempt to discriminate grammatical from ungrammatical test strings. Although the data from these experiments are usually analyzed by comparing the mean discrimination performance between experimental conditions, this practice discards information about the individual items and participants that could otherwise help uncover the particular features of strings associated with grammaticality judgments. However, feature analysis is tedious to compute, often complicated, and ill-defined in the literature. Moreover, the data violate the assumption of independence underlying standard linear regression models, leading to Type I error inflation. To solve these problems, we present AGSuite, a free Shiny application for researchers studying AGL. The suite's intuitive Web-based user interface allows researchers to generate strings from a database of published grammars, compute feature measures (e.g., Levenshtein distance) for each letter string, and conduct a feature analysis on the strings using linear mixed effects (LME) analyses. The LME analysis solves the inflation of Type I errors that afflicts more common methods of repeated measures regression analysis. Finally, the software can generate a number of graphical representations of the data to support an accurate interpretation of results. We hope the ease and availability of these tools will encourage researchers to take full advantage of item-level variance in their datasets in the study of AGL. We moreover discuss the broader applicability of the tools for researchers looking to conduct feature analysis in any field.

  16. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  17. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k-nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  18. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  19. A comprehensive overview of the applications of artificial life.

    PubMed

    Kim, Kyung-Joong; Cho, Sung-Bae

    2006-01-01

    We review the applications of artificial life (ALife), the creation of synthetic life on computers to study, simulate, and understand living systems. The definition and features of ALife are shown by application studies. ALife application fields treated include robot control, robot manufacturing, practical robots, computer graphics, natural phenomenon modeling, entertainment, games, music, economics, Internet, information processing, industrial design, simulation software, electronics, security, data mining, and telecommunications. In order to show the status of ALife application research, this review primarily features a survey of about 180 ALife application articles rather than a selected representation of a few articles. Evolutionary computation is the most popular method for designing such applications, but recently swarm intelligence, artificial immune network, and agent-based modeling have also produced results. Applications were initially restricted to the robotics and computer graphics, but presently, many different applications in engineering areas are of interest.

  20. Insecticidal Activity of Chromobacterium subtsugae on the Sweet Potato Whitefly, Bemisia tabaci, Biotype B

    USDA-ARS?s Scientific Manuscript database

    Chromobacterium subtsugae crude extracts contain compounds that are toxic to nymphal and adult Bemisia tabaci. When fed on artificial diet containing 10% of the supernatant of an aqueous cell-free extract of C subtsugae, the number of 2nd and 4th instar nymphs and of emerged adults was significantl...

  1. Intelligence, Surveillance, and Reconnaissance Fusion for Coalition Operations

    DTIC Science & Technology

    2008-07-01

    classification of the targets of interest. The MMI features extracted in this manner have two properties that provide a sound justification for...are generalizations of well-known feature extraction methods such as Principal Components Analysis (PCA) and Independent Component Analysis (ICA...augment (without degrading performance) a large class of generic fusion processes. Keywords: Ontologies, Classifications, Feature extraction, Feature analysis.

  2. Impact of Regulatory Interventions to Reduce Intake of Artificial Trans–Fatty Acids: A Systematic Review

    PubMed Central

    Almíron-Roig, Eva; Monsivais, Pablo; Jebb, Susan A.; Benjamin Neelon, Sara E.; Griffin, Simon J.; Ogilvie, David B.

    2015-01-01

    We examined the impact of regulatory action to reduce levels of artificial trans–fatty acids (TFAs) in food. We searched Medline, Embase, ISI Web of Knowledge, and EconLit (January 1980 to December 2012) for studies related to government regulation of food- or diet-related health behaviors from which we extracted the subsample of legislative initiatives to reduce artificial TFAs in food. We screened 38 162 articles and identified 14 studies that examined artificial TFA controls limiting permitted levels or mandating labeling. These measures achieved good compliance, with evidence of appropriate reformulation. Regulations grounded on maximum limits and mandated labeling can lead to reductions in actual and reported TFAs in food and appear to encourage food producers to reformulate their products. PMID:25602897

  3. Extraction of multi-scale landslide morphological features based on local Gi* using airborne LiDAR-derived DEM

    NASA Astrophysics Data System (ADS)

    Shi, Wenzhong; Deng, Susu; Xu, Wenbing

    2018-02-01

    For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results of the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature image generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with the morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 5 years) landslides and approximately 35% of historical (> 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should be filtered using a filtering strategy based on supplementary information provided by expert knowledge or other data sources.
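
    A simplified version of the Gi* step, scoring each DEM cell by how strongly its curvature values cluster within a local moving window, can be sketched as follows. The synthetic curvature raster, the 5x5 window and the significance threshold are illustrative assumptions, not the authors' parameterization or their LiDAR data.

```python
# Minimal sketch: per-cell Getis-Ord Gi* on a curvature raster, flagging clusters
# of similar morphometric values. Synthetic raster and 5x5 window are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def local_gi_star(raster, window=5):
    """Gi* statistic per cell with binary weights over a square moving window."""
    x = raster.astype(float)
    n = x.size
    mean, std = x.mean(), x.std(ddof=1)
    w = window * window                          # number of neighbours incl. the cell
    local_sum = uniform_filter(x, size=window) * w
    num = local_sum - mean * w
    den = std * np.sqrt((n * w - w ** 2) / (n - 1))
    return num / den                             # approximately standard normal

# Synthetic "tangential curvature": background noise plus one anomalous patch.
rng = np.random.default_rng(0)
curv = rng.normal(0, 0.1, size=(200, 200))
curv[80:110, 120:150] += 0.5                     # simulated scarp-like morphology

gi = local_gi_star(curv, window=5)
significant = np.abs(gi) > 2.58                  # ~99% two-sided significance level
print("cells flagged as clustered morphology:", int(significant.sum()))
```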

  4. [Study on biocompatibility of hydroxyapatite/high density polyethylene (HA/HDPE) nano-composites artificial ossicle].

    PubMed

    Wang, Guohui; Zhu, Shaihong; Tan, Guolin; Zhou, Kechao; Huang, Suping; Zhao, Yanzhong; Li, Zhiyou; Huang, Boyun

    2008-06-01

    This study aimed to evaluate the biocompatibility of a hydroxyapatite/high-density polyethylene (HA/HDPE) nano-composite artificial ossicle. The percentage of S-phase cells was measured by flow cytometry after L929 cells were incubated with an extract of the HA/HDPE nano-composite; titanium materials in clinical use served as the control. In addition, both materials were implanted in animals and histopathological evaluations were conducted. There were no statistically significant differences between the two groups (P > 0.05). The results demonstrated that the HA/HDPE nano-composite artificial ossicle made by our laboratory has good biocompatibility and a promising outlook for clinical application.

  5. A New Data Mining Scheme Using Artificial Neural Networks

    PubMed Central

    Kamruzzaman, S. M.; Jehad Sarkar, A. M.

    2011-01-01

    Classification is one of the data mining problems receiving enormous attention in the database community. Although artificial neural networks (ANNs) have been successfully applied in a wide range of machine learning applications, they are however often regarded as black boxes, i.e., their predictions cannot be explained. To enhance the explanation of ANNs, a novel algorithm to extract symbolic rules from ANNs has been proposed in this paper. ANN methods have not been effectively utilized for data mining tasks because how the classifications were made is not explicitly stated as symbolic rules that are suitable for verification or interpretation by human experts. With the proposed approach, concise symbolic rules with high accuracy, that are easily explainable, can be extracted from the trained ANNs. Extracted rules are comparable with other methods in terms of number of rules, average number of conditions for a rule, and the accuracy. The effectiveness of the proposed approach is clearly demonstrated by the experimental results on a set of benchmark data mining classification problems. PMID:22163866

  6. Role of Electrostatic Interactions on the Transport of Druglike Molecules in Hydrogel-Based Articular Cartilage Mimics: Implications for Drug Delivery.

    PubMed

    Ye, Fengbin; Baldursdottir, Stefania; Hvidt, Søren; Jensen, Henrik; Larsen, Susan W; Yaghmur, Anan; Larsen, Claus; Østergaard, Jesper

    2016-03-07

    In the field of drug delivery to the articular cartilage, it is advantageous to apply artificial tissue models as surrogates of cartilage for investigating drug transport and release properties. In this study, artificial cartilage models consisting of 0.5% (w/v) agarose gel containing 0.5% (w/v) chondroitin sulfate or 0.5% (w/v) hyaluronic acid were developed, and their rheological and morphological properties were characterized. UV imaging was utilized to quantify the transport properties of the following four model compounds in the agarose gel and in the developed artificial cartilage models: H-Ala-β-naphthylamide, H-Lys-Lys-β-naphthylamide, lysozyme, and α-lactalbumin. The obtained results showed that the incorporation of the polyelectrolytes chondroitin sulfate or hyaluronic acid into agarose gel induced a significant reduction in the apparent diffusivities of the cationic model compounds as compared to the pure agarose gel. The decrease in apparent diffusivity of the cationic compounds was not caused by a change in the gel structure since a similar reduction in apparent diffusivity was not observed for the net negatively charged protein α-lactalbumin. The apparent diffusivity of the cationic compounds in the negatively charged hydrogels was highly dependent on the ionic strength, pointing out the importance of electrostatic interactions between the diffusant and the polyelectrolytes. Solution based affinity studies between the model compounds and the two investigated polyelectrolytes further confirmed the electrostatic nature of their interactions. The results obtained from the UV imaging diffusion studies are important for understanding the effect of drug physicochemical properties on the transport in articular cartilage. The extracted information may be useful in the development of hydrogels for in vitro release testing having features resembling the articular cartilage.

  7. A bio-inspired real-time capable artificial lateral line system for freestream flow measurements.

    PubMed

    Abels, C; Qualtieri, A; De Vittorio, M; Megill, W M; Rizzi, F

    2016-06-03

    To enhance today's artificial flow sensing capabilities in aerial and underwater robotics, future robots could be equipped with a large number of miniaturized sensors distributed over the surface to provide high resolution measurement of the surrounding fluid flow. In this work we show a linear array of closely separated bio-inspired micro-electro-mechanical flow sensors whose sensing mechanism is based on a piezoresistive strain-gauge along a stress-driven cantilever beam, mimicking the biological superficial neuromasts found in the lateral line organ of fishes. Aiming to improve state-of-the-art flow sensing capability in autonomously flying and swimming robots, our artificial lateral line system was designed and developed to feature multi-parameter freestream flow measurements which provide information about (1) local flow velocities as measured by the signal amplitudes from the individual cantilevers as well as (2) propagation velocity, (3) linear forward/backward direction along the cantilever beam orientation and (4) periodicity of pulses or pulse trains determined by cross-correlating sensor signals. A real-time capable cross-correlation procedure was developed which makes it possible to extract freestream flow direction and velocity information from flow fluctuations. The computed flow velocities deviate from a commercial system by 0.09 m s(-1) at 0.5 m s(-1) and 0.15 m s(-1) at 1.0 m s(-1) flow velocity for a sampling rate of 240 Hz and a sensor distance of 38 mm. Although experiments were performed in air, the presented flow sensing system can be applied to underwater vehicles as well, once the sensors are embedded in a waterproof micro-electro-mechanical systems package.
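
    The cross-correlation step used to recover flow direction and propagation velocity from two neighbouring cantilever signals can be sketched as follows. The 240 Hz sampling rate and 38 mm sensor spacing are taken from the figures quoted in the abstract purely for illustration; the synthetic pulse and the generic delay estimate are assumptions, not the authors' real-time implementation.

```python
# Minimal sketch: estimate flow direction and propagation velocity from the
# time delay between two flow-sensor signals via cross-correlation.
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 240.0            # sampling rate (Hz), as quoted in the abstract
spacing = 0.038       # sensor separation (m), as quoted in the abstract
rng = np.random.default_rng(0)

t = np.arange(0, 2.0, 1.0 / fs)
pulse = np.exp(-((t - 1.0) ** 2) / (2 * 0.02 ** 2))          # passing flow disturbance
delay_true = 0.05                                            # seconds between sensors
sensor_a = pulse + 0.05 * rng.normal(size=t.size)
sensor_b = np.interp(t - delay_true, t, pulse) + 0.05 * rng.normal(size=t.size)

xcorr = correlate(sensor_b, sensor_a, mode="full")
lags = correlation_lags(sensor_b.size, sensor_a.size, mode="full")
delay = lags[xcorr.argmax()] / fs                            # positive: a -> b direction

direction = "forward (a to b)" if delay > 0 else "backward (b to a)"
print(f"estimated delay: {delay * 1000:.1f} ms, direction: {direction}, "
      f"velocity: {spacing / abs(delay):.2f} m/s")
```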

  8. Component spectra extraction from terahertz measurements of unknown mixtures.

    PubMed

    Li, Xian; Hou, D B; Huang, P J; Cai, J H; Zhang, G X

    2015-10-20

    The aim of this work is to extract component spectra from unknown mixtures in the terahertz region. To that end, a method, hard modeling factor analysis (HMFA), was applied to resolve terahertz spectral matrices collected from the unknown mixtures. This method does not require any expertise of the user and allows the consideration of nonlinear effects such as peak variations or peak shifts. It describes the spectra using a peak-based nonlinear mathematic model and builds the component spectra automatically by recombination of the resolved peaks through correlation analysis. Meanwhile, modifications on the method were made to take the features of terahertz spectra into account and to deal with the artificial baseline problem that troubles the extraction process of some terahertz spectra. In order to validate the proposed method, simulated wideband terahertz spectra of binary and ternary systems and experimental terahertz absorption spectra of amino acids mixtures were tested. In each test, not only the number of pure components could be correctly predicted but also the identified pure spectra had a good similarity with the true spectra. Moreover, the proposed method associated the molecular motions with the component extraction, making the identification process more physically meaningful and interpretable compared to other methods. The results indicate that the HMFA method with the modifications can be a practical tool for identifying component terahertz spectra in completely unknown mixtures. This work reports the solution to this kind of problem in the terahertz region for the first time, to the best of the authors' knowledge, and represents a significant advance toward exploring physical or chemical mechanisms of unknown complex systems by terahertz spectroscopy.

  9. Amino acid distribution in meteorites: diagenesis, extraction methods, and standard metrics in the search for extraterrestrial biosignatures.

    PubMed

    McDonald, Gene D; Storrie-Lombardi, Michael C

    2006-02-01

    The relative abundance of the protein amino acids has been previously investigated as a potential marker for biogenicity in meteoritic samples. However, these investigations were executed without a quantitative metric to evaluate distribution variations, and they did not account for the possibility of interdisciplinary systematic error arising from inter-laboratory differences in extraction and detection techniques. Principal component analysis (PCA), hierarchical cluster analysis (HCA), and stochastic probabilistic artificial neural networks (ANNs) were used to compare the distributions for nine protein amino acids previously reported for the Murchison carbonaceous chondrite, Mars meteorites (ALH84001, Nakhla, and EETA79001), prebiotic synthesis experiments, and terrestrial biota and sediments. These techniques allowed us (1) to identify a shift in terrestrial amino acid distributions secondary to diagenesis; (2) to detect differences in terrestrial distributions that may be systematic differences between extraction and analysis techniques in biological and geological laboratories; and (3) to determine that distributions in meteoritic samples appear more similar to prebiotic chemistry samples than they do to the terrestrial unaltered or diagenetic samples. Both diagenesis and putative interdisciplinary differences in analysis complicate interpretation of meteoritic amino acid distributions. We propose that the analysis of future samples from such diverse sources as meteoritic influx, sample return missions, and in situ exploration of Mars would be less ambiguous with adoption of standardized assay techniques, systematic inclusion of assay standards, and the use of a quantitative, probabilistic metric. We present here one such metric determined by sequential feature extraction and normalization (PCA), information-driven automated exploration of classification possibilities (HCA), and prediction of classification accuracy (ANNs).

  10. Prevention of artificial dental plaque formation in vitro by plant extracts.

    PubMed

    Smullen, J; Finney, M; Storey, D M; Foster, H A

    2012-10-01

    A number of previous studies have shown that plant extracts can inhibit formation of dental plaque. The ability of extracts of Rosmarinus officianalis L., Salvia officianalis L., unfermented cocoa, red grape seed and green tea to inhibit plaque bacteria, glucosyltransferase activity, glucan and plaque formation in an in vitro model using bovine teeth was examined. The antimicrobial activity of the plant extracts against oral bacteria was determined using a standard susceptibility agar dilution technique. Inhibition of growth and acid production from glucose and sucrose by Streptococcus mutans in liquid culture was investigated. Prevention of plaque formation on bovine teeth initiated by Strep. mutans was studied using an artificial mouth. The plant extracts inhibited the growth of oral bacteria and prevented acid production by Strep. mutans. Extracts inhibited glucosyltransferase activity and glucan production and inhibited adhesion to glass. Extracts of R. officianalis L. and S. officianalis L. at 0·25 mg ml(-1) reduced plaque growth by >80%. Green tea extract completely inhibited plaque formation but resulted in a greenish discolouration of the teeth which could not be removed by scrubbing. The plant extracts, particularly those from R. officianalis L. and S. officianalis L., inhibited glucosyltranferase activity, glucan production and plaque formation in vitro. The results suggest that the extracts of R. officianalis L. and S. officianalis L. may be useful as antiplaque agents in foods and dental preparations. Bovine teeth can be used as an alternative to hydroxyapatite for studies of plaque formation, but they need to be carefully sterilized before use. © 2012 The Authors Journal of Applied Microbiology © 2012 The Society for Applied Microbiology.

  11. Deep Learning: A Primer for Radiologists.

    PubMed

    Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An

    2017-01-01

    Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.

  12. Non-proliferative diabetic retinopathy symptoms detection and classification using neural network.

    PubMed

    Al-Jarrah, Mohammad A; Shatnawi, Hadeel

    2017-08-01

    Diabetic retinopathy (DR) causes blindness in working-age people with diabetes in most countries. The increasing number of people with diabetes worldwide suggests that DR will continue to be a major contributor to vision loss. Early detection of retinopathy progression in individuals with diabetes is critical for preventing vision loss. Non-proliferative DR (NPDR) is an early stage of DR and can be classified as mild, moderate or severe. This paper proposes a novel morphology-based algorithm for detecting retinal lesions and classifying each case. First, the proposed algorithm detects the three DR lesion types, namely haemorrhages, microaneurysms and exudates. Second, we defined and extracted a set of features from the detected lesions; the selected features emulate what physicians look for when classifying an NPDR case. Finally, we designed a three-layer artificial neural network (ANN) classifier to classify cases as normal, mild, moderate or severe NPDR. Bayesian regularisation and resilient backpropagation algorithms are used to train the ANN. The accuracies of the proposed classifiers based on Bayesian regularisation and resilient backpropagation are 96.6% and 89.9%, respectively. The results are compared with those of recently published classifiers, and our proposed classifier outperforms the best of them in terms of sensitivity and specificity.

  13. Automatic breast tissue density estimation scheme in digital mammography images

    NASA Astrophysics Data System (ADS)

    Menechelli, Renan C.; Pacheco, Ana Luisa V.; Schiabel, Homero

    2017-03-01

    Cases of breast cancer have increased substantially each year. However, radiologists are subject to subjectivity and interpretation failures that may affect the final diagnosis in this examination, and high-density features in breast tissue are an important factor behind these failures. Thus, among other functions, some CADx (Computer-Aided Diagnosis) schemes classify breasts according to their predominant density. To aid in this procedure, this work describes automated software that classifies breast density and provides statistical information on the percentage variation of breast tissue density, through analysis of subregions (ROIs) of the whole mammography image. Once the breast is segmented, the image is divided into regions from which texture features are extracted, and a multilayer perceptron (MLP) artificial neural network is used to categorize the ROIs. Experienced radiologists previously determined the density classification of the ROIs, which served as the reference for the software evaluation. In tests on a set of 400 images, the average accuracy was 88.7% for ROI classification and 83.25% for whole-breast density classification into the four BI-RADS density classes. Furthermore, when considering only a simplified two-class division (high and low density), the classifier accuracy reached 93.5%, with an AUC of 0.95.

  14. A novel approach for food intake detection using electroglottography

    PubMed Central

    Farooq, Muhammad; Fontana, Juan M; Sazonov, Edward

    2014-01-01

    Many methods for monitoring diet and food intake rely on subjects self-reporting their daily intake. These methods are subjective, potentially inaccurate and need to be replaced by more accurate and objective methods. This paper presents a novel approach that uses an Electroglottograph (EGG) device for an objective and automatic detection of food intake. Thirty subjects participated in a 4-visit experiment involving the consumption of meals with self-selected content. Variations in the electrical impedance across the larynx caused by the passage of food during swallowing were captured by the EGG device. To compare performance of the proposed method with a well-established acoustical method, a throat microphone was used for monitoring swallowing sounds. Both signals were segmented into non-overlapping epochs of 30 s and processed to extract wavelet features. Subject-independent classifiers were trained using Artificial Neural Networks, to identify periods of food intake from the wavelet features. Results from leave-one-out cross-validation showed an average per-epoch classification accuracy of 90.1% for the EGG-based method and 83.1% for the acoustic-based method, demonstrating the feasibility of using an EGG for food intake detection. PMID:24671094
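
    The per-epoch processing, decomposing a 30 s signal segment into wavelet sub-bands and summarizing each band's energy before classification, can be sketched with PyWavelets and scikit-learn. The synthetic EGG-like signal, the db4 wavelet, the log-energy features and the small MLP are illustrative assumptions, not the study's feature set or its subject-independent classifiers.

```python
# Minimal sketch: wavelet sub-band energy features from 30 s epochs, classified
# with a small neural network. Synthetic EGG-like signals; not the study's data.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

fs = 100                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(0)

def wavelet_energies(epoch, wavelet="db4", level=5):
    """Log energy of each wavelet sub-band of one epoch."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.log([np.sum(c ** 2) + 1e-12 for c in coeffs])

def make_epoch(intake):
    t = np.arange(0, 30, 1.0 / fs)
    x = rng.normal(0, 1.0, t.size)
    if intake:                               # add swallow-like low-frequency bursts
        for t0 in rng.uniform(0, 30, size=5):
            x += 2.0 * np.exp(-((t - t0) ** 2) / 0.5) * np.sin(2 * np.pi * 2 * t)
    return x

labels = np.array([0, 1] * 60)
X = np.array([wavelet_energies(make_epoch(lbl)) for lbl in labels])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean().round(3))
```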

  15. Application of self-organizing feature maps to analyze the relationships between ignitable liquids and selected mass spectral ions.

    PubMed

    Frisch-Daiello, Jessica L; Williams, Mary R; Waddell, Erin E; Sigman, Michael E

    2014-03-01

    The unsupervised artificial neural networks method of self-organizing feature maps (SOFMs) is applied to spectral data of ignitable liquids to visualize the grouping of similar ignitable liquids with respect to their American Society for Testing and Materials (ASTM) class designations and to determine the ions associated with each group. The spectral data consists of extracted ion spectra (EIS), defined as the time-averaged mass spectrum across the chromatographic profile for select ions, where the selected ions are a subset of ions from Table 2 of the ASTM standard E1618-11. Utilization of the EIS allows for inter-laboratory comparisons without the concern of retention time shifts. The trained SOFM demonstrates clustering of the ignitable liquid samples according to designated ASTM classes. The EIS of select samples designated as miscellaneous or oxygenated as well as ignitable liquid residues from fire debris samples are projected onto the SOFM. The results indicate the similarities and differences between the variables of the newly projected data compared to those of the data used to train the SOFM. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
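
    A compact illustration of the SOFM idea, mapping high-dimensional spectra onto a 2-D grid so that similar samples fall on nearby nodes, is sketched below in plain NumPy. The toy "extracted ion spectra", grid size and training schedule are assumptions for illustration, not the study's data or the software it used.

```python
# Minimal sketch: a small self-organizing feature map (SOFM) in plain NumPy,
# clustering toy "extracted ion spectra" onto a 2-D grid of nodes.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three ignitable-liquid "classes", each a noisy template over 20 ions.
templates = rng.random((3, 20))
data = np.vstack([templates[k] + 0.05 * rng.normal(size=(30, 20)) for k in range(3)])
data /= data.sum(axis=1, keepdims=True)           # normalize like a mass spectrum

grid = 6                                           # 6x6 map
weights = rng.random((grid, grid, data.shape[1]))
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1)

n_iter = 2000
for it in range(n_iter):
    x = data[rng.integers(len(data))]
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(dist.argmin(), dist.shape)        # best-matching unit
    lr = 0.5 * (1 - it / n_iter)                             # decaying learning rate
    sigma = 2.0 * (1 - it / n_iter) + 0.5                    # decaying neighbourhood
    h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

# After training, samples from the same class should map to neighbouring nodes.
for k in range(3):
    d = np.linalg.norm(weights - data[30 * k], axis=-1)
    print(f"class {k} sample maps to node {np.unravel_index(d.argmin(), d.shape)}")
```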

  16. Determination of the bioaccessible fraction of metals in urban aerosol using simulated lung fluids

    NASA Astrophysics Data System (ADS)

    Coufalík, Pavel; Mikuška, Pavel; Matoušek, Tomáš; Večeřa, Zbyněk

    2016-09-01

    Determination of the bioaccessible fraction of metals in atmospheric aerosol is a significant issue with respect to air pollution in the urban environment. The aim of this work was to compare the metal bioaccessibility determined from the extraction yields of six simulated lung fluids. Aerosol samples of the PM1 fraction were collected in Brno, Czech Republic. The total contents of Cd, Ce, Cr, Cu, Fe, Mn, Ni, Pb, V, and Zn in the samples were determined and their enrichment factors were calculated. The bioaccessible proportions of elements were determined by means of extraction in Gamble's solution, Gamble's solution with dipalmitoyl phosphatidyl choline (DPPC), artificial lysosomal fluid, saline, water, and a newly proposed solution based on DPPC, referred to as "Simulated Alveoli Fluid" (SAF). The chemical composition and surface tension of the simulated lung fluids were the main parameters influencing extraction yields. The Gamble's solutions and the newly designed SAF exhibited the lowest extraction efficiencies and also had the lowest surface tensions. The bioaccessibility of particulate metals should be assessed with synthetic lung fluids of low surface tension, which better simulate the behavior and composition of native lung surfactant. The bioaccessibility of metals in aerosol assessed by extraction in water or artificial lysosomal fluid may be overestimated.

  17. The influence of topical application of grapeseed extract gel on enamel surface hardness after demineralization

    NASA Astrophysics Data System (ADS)

    Saragih, D. A.; Herda, E.; Triaminingsih, S.

    2017-08-01

    The aim of this study was to analyze the influence of topical application of 6.5% and 12.5% grapeseed extract gels, applied for 16 and 32 minutes, on the enamel surface hardness following tooth demineralization by an energy drink. The samples were 21 bovine teeth that underwent demineralization by immersion in the energy drink for 5 minutes in an incubator at 37°C. The demineralized specimens were randomly divided into a control group and 2 treatment groups. The control group was immersed in artificial saliva for 6 hours at 37°C, whereas the treatment groups were treated with topical 6.5% and 12.5% grapeseed extract gels for durations of 16 and 32 minutes and then immersed in artificial saliva for 6 hours at 37°C. The hardness was measured with a Knoop hardness tester. Statistical analysis by repeated-measures ANOVA and one-way ANOVA revealed a significant increase in the enamel hardness value (p<0.05) after the application of the topical grapeseed extract gels at both concentrations. Application of 12.5% topical grapeseed extract gel for 32 minutes restored the hardness to a value not significantly different from the initial hardness obtained before demineralization (p>0.05).

  18. Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers

    Treesearch

    Daniel L. Schmoldt; Jing He; A. Lynn Abbott

    1998-01-01

    Our current approach to automatically label features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...

  19. Mechanism of trail following by the arboreal termite Nasutitermes corniger (Isoptera: Termitidae).

    PubMed

    Gazai, Vinícius; Bailez, Omar; Viana-Bailez, Ana Maria

    2014-01-01

    In this study, we investigated the mechanisms used by the arboreal termite Nasutitermes corniger (Motschulsky, 1855) to follow trails from the nest to sources of food. A plate containing one of seven trail types was used to connect an artificial nest of N. corniger with an artificial foraging arena. The trail types were: termite trail; paraffined termite trail; trail made of paraffin; rectal fluid extract trail; sternal gland extract trail; feces extract trail; and solvent trail (control). In each test, the time was recorded from the start of the test until the occurrence of trail following, at which point the number of termites that followed the trail for at least 5 cm in the first 3 min of observation was recorded. The delay for termites initiating trail following along the termite trail was lower (0.55 ± 0.16 min) than in the trails of sternal gland extract (1.05 ± 0.08 min) and trails of termite feces extract (1.57 ± 0.21 min) (F(2, 48) = 22.59, P < 0.001). The number of termites that followed the termite trail was greater (207.3 ± 17.3) than the number that followed the trail of termite feces extract (102.5 ± 9.4) or sternal gland extract (36.9 ± 1.6) (F(2, 48) = 174.34, P < 0.001). Therefore, feces on the trail may play an important role alongside sternal gland pheromones in increasing the persistence of the trail.

  20. A fresh look at functional link neural network for motor imagery-based brain-computer interface.

    PubMed

    Hettiarachchi, Imali T; Babaei, Toktam; Nguyen, Thanh; Lim, Chee P; Nahavandi, Saeid

    2018-05-04

    Artificial neural networks (ANNs) are among the most widely used classifiers in brain-computer interface (BCI) systems based on noninvasive electroencephalography (EEG) signals. Among the different ANN architectures, the most commonly applied for BCI classifiers is the multilayer perceptron (MLP). When appropriately designed, with an optimal number of layers and neurons per layer, an ANN can act as a universal approximator. However, due to the low signal-to-noise ratio of EEG signal data, overtraining may become an inherent issue, causing these universal approximators to fail in real-time applications. In this study we introduce a higher order neural network, namely the functional link neural network (FLNN), as a classifier for motor imagery (MI)-based BCI systems, to remedy the drawbacks of the MLP. We compare the proposed method with competing classifiers such as linear decomposition analysis, naïve Bayes, k-nearest neighbours, support vector machine and three MLP architectures. Two multi-class benchmark datasets from the BCI competitions are used. The common spatial pattern algorithm is utilized for feature extraction to build classification models. FLNN reports the highest average Kappa value over multiple subjects for both BCI competition datasets, under similarly preprocessed data and extracted features. Further, statistical comparison results over multiple subjects show that the proposed FLNN classification method yields the best performance among the competing classifiers. Findings from this study imply that the proposed method, which has lower computational complexity than the MLP, can be implemented effectively in practical MI-based BCI systems. Copyright © 2018 Elsevier B.V. All rights reserved.
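
    A minimal sketch of the functional link idea, under the assumption that the input is a short vector of CSP-derived features: each feature is expanded with trigonometric functional terms and only a single linear output layer is trained, so there is no hidden layer to overtrain. The expansion order and the use of logistic regression as the output stage are illustrative choices, not the authors' exact architecture.

```python
# Hedged sketch of a functional link expansion: sin/cos terms of each input
# feature are appended and a single linear (logistic) output layer is
# trained, with no hidden layer. Expansion order and classifier choice are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def functional_link_expand(X, order=2):
    """Append trigonometric functional terms of each feature."""
    parts = [X]
    for k in range(1, order + 1):
        parts.append(np.sin(k * np.pi * X))
        parts.append(np.cos(k * np.pi * X))
    return np.hstack(parts)

rng = np.random.default_rng(2)
X_train = rng.standard_normal((100, 6))   # e.g., 6 CSP log-variance features (assumed)
y_train = rng.integers(0, 4, size=100)    # 4 motor-imagery classes

clf = LogisticRegression(max_iter=1000)
clf.fit(functional_link_expand(X_train), y_train)
print(clf.predict(functional_link_expand(X_train[:5])))
```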

  1. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    NASA Astrophysics Data System (ADS)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanned images distorted by flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually requires artificial markers, and it is a rather challenging task to locate the points accurately and obtain accurate homonymy point sets. In this paper, we propose an automatic non-rigid image registration algorithm that consists of three main steps. To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine invariant geometric constraint based on a triangulation constructed by the K-nearest neighbor algorithm. Based on the accurate homonymy point sets, the two images are registered using a TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two experiments evaluate the distribution of the point sets and the correct matching rate on synthetic and real data, respectively. The last experiment is conducted on non-rigid deformation remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.

  2. An Extended Spectral-Spatial Classification Approach for Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Akbari, D.

    2017-11-01

    In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, the Pavia University hyperspectral dataset is used. Experimental results show that the proposed approach using GA achieves an overall accuracy approximately 8% higher than the original MSF-based algorithm.

  3. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks.

    PubMed

    Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L

    2016-07-01

    Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features [area under the ROC curve [Formula: see text

  4. Single-trial laser-evoked potentials feature extraction for prediction of pain perception.

    PubMed

    Huang, Gan; Xiao, Ping; Hu, Li; Hung, Yeung Sam; Zhang, Zhiguo

    2013-01-01

    Pain is a highly subjective experience, and the availability of an objective assessment of pain perception would be of great importance for both basic and clinical applications. The objective of the present study is to develop a novel approach to extract pain-related features from single-trial laser-evoked potentials (LEPs) for classification of pain perception. The single-trial LEP feature extraction approach combines a spatial filtering using common spatial pattern (CSP) and a multiple linear regression (MLR). The CSP method is effective in separating laser-evoked EEG response from ongoing EEG activity, while MLR is capable of automatically estimating the amplitudes and latencies of N2 and P2 from single-trial LEP waveforms. The extracted single-trial LEP features are used in a Naïve Bayes classifier to classify different levels of pain perceived by the subjects. The experimental results show that the proposed single-trial LEP feature extraction approach can effectively extract pain-related LEP features for achieving high classification accuracy.
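
    A minimal sketch of the spatial-filtering and classification stages, assuming MNE's CSP implementation and synthetic trials; the multiple-linear-regression step that estimates single-trial N2 and P2 amplitudes is omitted, and all shapes and parameters are illustrative.

```python
# Hedged sketch: CSP spatial filtering (MNE) to separate the evoked response
# from ongoing EEG, followed by a naive Bayes classifier over the resulting
# log-variance features. The single-trial N2/P2 regression step is omitted;
# shapes and parameters are assumptions.
import numpy as np
from mne.decoding import CSP
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
X = rng.standard_normal((80, 32, 500))   # trials x channels x samples
y = rng.integers(0, 2, size=80)          # low vs. high perceived pain

csp = CSP(n_components=4, log=True)
features = csp.fit_transform(X, y)       # (80, 4) log-variance features

clf = GaussianNB().fit(features, y)
print(clf.score(features, y))
```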

  5. Alexnet Feature Extraction and Multi-Kernel Learning for Objectoriented Classification

    NASA Astrophysics Data System (ADS)

    Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.

    2018-04-01

    Given that deep convolutional neural networks have a stronger capacity for feature learning and feature expression, an exploratory study of feature extraction and classification for high-resolution remote sensing images is conducted. Taking a Google image with 0.3 m spatial resolution in the Ludian area of Yunnan Province as an example, image segmentation objects are taken as the basic units, and a pre-trained AlexNet deep convolutional neural network model is used for feature extraction. The spectral features, AlexNet features, and GLCM texture features are then combined with multi-kernel learning and an SVM classifier, and finally the classification results are compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features, significantly improves the overall classification accuracy, and provides a reference for earthquake disaster investigation and remote sensing disaster evaluation.
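
    A minimal sketch of using a pretrained AlexNet as a fixed feature extractor for segmented objects, assuming torchvision and scikit-learn; the multi-kernel combination with spectral and GLCM features is not shown, and the weights identifier depends on the torchvision version.

```python
# Hedged sketch: pretrained AlexNet as a fixed feature extractor; the final
# fully connected layer is dropped so each image yields a 4096-d vector,
# which could then feed an SVM (or a multi-kernel combination, not shown).
# The weights identifier assumes a recent torchvision version.
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

model = models.alexnet(weights="IMAGENET1K_V1")
model.classifier = torch.nn.Sequential(*list(model.classifier.children())[:-1])
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def alexnet_features(pil_images):
    """Return one 4096-d AlexNet feature vector per PIL image."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    with torch.no_grad():
        return model(batch).numpy()

# Hypothetical usage with a list of segmented-object crops (PIL images):
# feats = alexnet_features(segment_crops)
# svm = SVC(kernel="rbf").fit(feats, segment_labels)
```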

  6. Classification and pose estimation of objects using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.

  7. Finger vein recognition based on the hyperinformation feature

    NASA Astrophysics Data System (ADS)

    Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Yang, Lu

    2014-01-01

    The finger vein is a promising biometric pattern for personal identification due to its advantages over other existing biometrics. In finger vein recognition, feature extraction is a critical step, and many feature extraction methods have been proposed to extract the gray, texture, or shape of the finger vein. We treat them as low-level features and present a high-level feature extraction framework. Under this framework, base attribute is first defined to represent the characteristics of a certain subcategory of a subject. Then, for an image, the correlation coefficient is used for constructing the high-level feature, which reflects the correlation between this image and all base attributes. Since the high-level feature can reveal characteristics of more subcategories and contain more discriminative information, we call it hyperinformation feature (HIF). Compared with low-level features, which only represent the characteristics of one subcategory, HIF is more powerful and robust. In order to demonstrate the potential of the proposed framework, we provide a case study to extract HIF. We conduct comprehensive experiments to show the generality of the proposed framework and the efficiency of HIF on our databases, respectively. Experimental results show that HIF significantly outperforms the low-level features.

  8. Decomposition and extraction: a new framework for visual classification.

    PubMed

    Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng

    2014-08-01

    In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences the feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, those multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and experimental results demonstrate the effectiveness of the proposed method.

  9. On Features of the Generation of Artificial Ionospheric Irregularities with Transverse Scales of 50-200 m

    NASA Astrophysics Data System (ADS)

    Bolotin, I. A.; Frolov, V. L.; Akchurin, A. D.; Zykov, E. Yu.

    2017-05-01

    We consider the features of generation of artificial ionospheric irregularities with transverse (to the geomagnetic field) scales l⊥ ≈ 50-200 m in the ionosphere modified by high-power HF radio waves. It was found that there are at least two mechanisms for generation of these irregularities in the ionospheric F region. The first mechanism is related to the resonant interaction between radio waves and the ionospheric plasma, while the second one takes place even in the absence of the resonant interaction. Different polarization of the high-power radiation was used to separate the mechanisms in the measurements.

  10. Accelerating Biomedical Signal Processing Using GPU: A Case Study of Snore Sound Feature Extraction.

    PubMed

    Guo, Jian; Qian, Kun; Zhang, Gongxuan; Xu, Huijie; Schuller, Björn

    2017-12-01

    The advent of 'Big Data' and 'Deep Learning' offers both a great challenge and a huge opportunity for personalised health-care. In machine learning-based biomedical data analysis, feature extraction is a key step for 'feeding' the subsequent classifiers. With increasing volumes of biomedical data, extracting features from these 'big' data is an intensive and time-consuming task. In this case study, we employ a Graphics Processing Unit (GPU) via Python to extract features from a large corpus of snore sound data. Those features can subsequently be imported into many well-known deep learning training frameworks without any format processing. The snore sound data were collected from several hospitals (20 subjects, with 770-990 MB per subject - in total 17.20 GB). Experimental results show that our GPU-based processing significantly speeds up the feature extraction phase, by up to seven times, as compared to the previous CPU system.
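
    A minimal sketch of GPU-side frame-wise spectral features with CuPy, under the assumption that a log-magnitude short-time spectrum is an acceptable stand-in for the study's feature set; the framing parameters are illustrative.

```python
# Hedged sketch: frame-wise log-magnitude spectra computed on the GPU with
# CuPy as a stand-in for the study's feature set; framing parameters are
# illustrative assumptions.
import cupy as cp

def gpu_log_spectra(signal, frame=1024, hop=512):
    """Log-magnitude FFT of each Hann-windowed frame, computed on the GPU."""
    x = cp.asarray(signal, dtype=cp.float32)
    n_frames = 1 + (x.size - frame) // hop
    idx = cp.arange(frame)[None, :] + hop * cp.arange(n_frames)[:, None]
    frames = x[idx] * cp.hanning(frame)[None, :]
    spectra = cp.log(cp.abs(cp.fft.rfft(frames, axis=1)) + 1e-10)
    return cp.asnumpy(spectra)   # copy features back to the host

# Hypothetical usage: features = gpu_log_spectra(snore_waveform)
```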

  11. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor's pixel frequency and immediate usage of each input pixel for the feature-construction process avoids the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at a 1.8 V supply voltage is achieved during VGA video processing at a 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving high accuracy comparable to previous work.

  12. Acousto-Optic Technology for Topographic Feature Extraction and Image Analysis.

    DTIC Science & Technology

    1981-03-01

    This report contains all findings of the acousto-optic technology study for feature extraction conducted by Deft Laboratories Inc. for the U.S. Army...topographic feature extraction and image analysis using acousto-optic (A-O) technology. A conclusion of this study was that A-O devices are potentially

  13. Prediction of occult invasive disease in ductal carcinoma in situ using computer-extracted mammographic features

    NASA Astrophysics Data System (ADS)

    Shi, Bibo; Grimm, Lars J.; Mazurowski, Maciej A.; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.

    2017-03-01

    Predicting the risk of occult invasive disease in ductal carcinoma in situ (DCIS) is an important task to help address the overdiagnosis and overtreatment problems associated with breast cancer. In this work, we investigated the feasibility of using computer-extracted mammographic features to predict occult invasive disease in patients with biopsy proven DCIS. We proposed a computer-vision algorithm based approach to extract mammographic features from magnification views of full field digital mammography (FFDM) for patients with DCIS. After an expert breast radiologist provided a region of interest (ROI) mask for the DCIS lesion, the proposed approach is able to segment individual microcalcifications (MCs), detect the boundary of the MC cluster (MCC), and extract 113 mammographic features from MCs and MCC within the ROI. In this study, we extracted mammographic features from 99 patients with DCIS (74 pure DCIS; 25 DCIS plus invasive disease). The predictive power of the mammographic features was demonstrated through binary classifications between pure DCIS and DCIS with invasive disease using linear discriminant analysis (LDA). Before classification, the minimum redundancy Maximum Relevance (mRMR) feature selection method was first applied to choose subsets of useful features. The generalization performance was assessed using Leave-One-Out Cross-Validation and Receiver Operating Characteristic (ROC) curve analysis. Using the computer-extracted mammographic features, the proposed model was able to distinguish DCIS with invasive disease from pure DCIS, with an average classification performance of AUC = 0.61 +/- 0.05. Overall, the proposed computer-extracted mammographic features are promising for predicting occult invasive disease in DCIS.

  14. Region of interest extraction based on multiscale visual saliency analysis for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan

    2015-01-01

    Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior knowledge-based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism is used to extract the intensity feature using a difference-of-Gaussian template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that addresses the different contributions of each feature map to calculate the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model as compared with those of other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.

  15. FDI based on Artificial Neural Network for Low-Voltage-Ride-Through in DFIG-based Wind Turbine.

    PubMed

    Adouni, Amel; Chariag, Dhia; Diallo, Demba; Ben Hamed, Mouna; Sbita, Lassaâd

    2016-09-01

    As per modern electrical grid codes, a wind turbine needs to operate continuously even in the presence of severe grid faults requiring Low Voltage Ride Through (LVRT). Hence, a new LVRT Fault Detection and Identification (FDI) procedure has been developed to make the appropriate decision and derive a suitable control strategy. To obtain better decisions and enhanced FDI during grid faults, the proposed procedure is based on the analysis of voltage indicators using a new Artificial Neural Network (ANN) architecture. Two features are extracted (the amplitude and the phase angle). The procedure is divided into two steps: the first is fault indicator generation and the second is indicator analysis for fault diagnosis. The first step is composed of six ANNs dedicated to describing the three phases of the grid (three amplitudes and three phase angles). The second step is composed of a single ANN that analyzes the indicators and generates a decision signal describing the operating mode (healthy or faulty). The decision signal also identifies the fault type, distinguishing between the four fault types. The diagnosis procedure is tested in simulation and on an experimental prototype. The obtained results confirm its efficiency, rapidity, robustness, and immunity to noise and unknown inputs. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Using artificial intelligence to improve identification of nanofluid gas-liquid two-phase flow pattern in mini-channel

    NASA Astrophysics Data System (ADS)

    Xiao, Jian; Luo, Xiaoping; Feng, Zhenfei; Zhang, Jinxin

    2018-01-01

    This work combines fuzzy logic and a support vector machine (SVM) with principal component analysis (PCA) to create an artificial-intelligence system that identifies nanofluid gas-liquid two-phase flow states in a vertical mini-channel. Flow-pattern recognition normally requires detailed knowledge of the operating conditions of the process; computer simulation and image processing can be used to automate the description of flow patterns in nanofluid gas-liquid two-phase flow. This work uses fuzzy logic and an SVM with PCA to improve the accuracy with which the flow pattern of a nanofluid gas-liquid two-phase flow is identified. To acquire images of nanofluid gas-liquid two-phase flow patterns during flow boiling, a high-speed digital camera was used to record four different types of flow-pattern images, namely annular flow, bubbly flow, churn flow, and slug flow. The textural features extracted by processing the images of nanofluid gas-liquid two-phase flow patterns are used as inputs to various identification schemes, namely fuzzy logic, SVM, and SVM with PCA, to identify the type of flow pattern. The results indicate that the SVM with PCA-reduced features provides the best identification accuracy and requires less calculation time than the other two schemes. The data reported herein should be very useful for the design and operation of industrial applications.
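
    A minimal sketch of the SVM-with-PCA branch, assuming scikit-image GLCM texture features on grayscale flow images; the fuzzy-logic scheme and the real image data are not reproduced, and the feature choices are illustrative.

```python
# Hedged sketch: GLCM texture features per flow image, PCA for dimension
# reduction, and an SVM for the four-class decision; feature choices and
# the synthetic grayscale images are assumptions (skimage >= 0.19 naming).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def texture_features(img_u8):
    glcm = graycomatrix(img_u8, distances=[1, 3], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(4)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 4, size=40)   # annular, bubbly, churn, slug

X = np.vstack([texture_features(im) for im in images])
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.score(X, labels))
```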

  17. Wains: a pattern-seeking artificial life species.

    PubMed

    de Buitléir, Amy; Russell, Michael; Daly, Mark

    2012-01-01

    We describe the initial phase of a research project to develop an artificial life framework designed to extract knowledge from large data sets with minimal preparation or ramp-up time. In this phase, we evolved an artificial life population with a new brain architecture. The agents have sufficient intelligence to discover patterns in data and to make survival decisions based on those patterns. The species uses diploid reproduction, Hebbian learning, and Kohonen self-organizing maps, in combination with novel techniques such as using pattern-rich data as the environment and framing the data analysis as a survival problem for artificial life. The first generation of agents mastered the pattern discovery task well enough to thrive. Evolution further adapted the agents to their environment by making them a little more pessimistic, and also by making their brains more efficient.

  18. Model-based Bayesian signal extraction algorithm for peripheral nerves

    NASA Astrophysics Data System (ADS)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios and thus limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal to noise and signal to interference ratio of extracted test signals two to three fold, as well as increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.

  19. ANALYSIS OF CLINICAL AND DERMOSCOPIC FEATURES FOR BASAL CELL CARCINOMA NEURAL NETWORK CLASSIFICATION

    PubMed Central

    Cheng, Beibei; Stanley, R. Joe; Stoecker, William V; Stricklin, Sherea M.; Hinton, Kristen A.; Nguyen, Thanh K.; Rader, Ryan K.; Rabinovitz, Harold S.; Oliviero, Margaret; Moss, Randy H.

    2012-01-01

    Background: Basal cell carcinoma (BCC) is the most commonly diagnosed cancer in the United States. In this research, we examine four different feature categories used for diagnostic decisions, including patient personal profile (patient age, gender, etc.), general exam (lesion size and location), common dermoscopic (blue-gray ovoids, leaf-structure dirt trails, etc.), and specific dermoscopic lesion (white/pink areas, semitranslucency, etc.). Specific dermoscopic features are more restricted versions of the common dermoscopic features. Methods: Combinations of the four feature categories are analyzed over a data set of 700 lesions, with 350 BCCs and 350 benign lesions, for lesion discrimination using neural network-based techniques, including Evolving Artificial Neural Networks and Evolving Artificial Neural Network Ensembles. Results: Experiment results based on ten-fold cross validation for training and testing the different neural network-based techniques yielded an area under the receiver operating characteristic curve as high as 0.981 when all features were combined. The common dermoscopic lesion features generally yielded higher discrimination results than other individual feature categories. Conclusions: Experimental results show that combining clinical and image information provides enhanced lesion discrimination capability over either information source separately. This research highlights the potential of data fusion as a model for the diagnostic process. PMID:22724561

  20. Technical Operations (TOPS) IV Task Order 0003: Responsive Interface for Transport Tuning (RITT)

    DTIC Science & Technology

    2016-05-29

    on the further development of artificial hair sensors (AHS) featuring a responsive carbon nanotube (CNT) array to serve as a piezoresistive element...under separate cover, as AFRL Interim Report AFRL-RX-WP-TR-2016-0071 dated 30 October 2015. Subject terms: artificial hair sensor, carbon... (Hair Sensors: Fabrication and Model Parameterization).

  1. The P600 in Implicit Artificial Grammar Learning

    ERIC Educational Resources Information Center

    Silva, Susana; Folia, Vasiliki; Hagoort, Peter; Petersson, Karl Magnus

    2017-01-01

    The suitability of the artificial grammar learning (AGL) paradigm to capture relevant aspects of the acquisition of linguistic structures has been empirically tested in a number of EEG studies. Some have shown a syntax-related P600 component, but it has not been ruled out that the AGL P600 effect is a response to surface features (e.g.,…

  2. Unsupervised Learning of Overlapping Image Components Using Divisive Input Modulation

    PubMed Central

    Spratling, M. W.; De Meyer, K.; Kompass, R.

    2009-01-01

    This paper demonstrates that nonnegative matrix factorisation is mathematically related to a class of neural networks that employ negative feedback as a mechanism of competition. This observation inspires a novel learning algorithm which we call Divisive Input Modulation (DIM). The proposed algorithm provides a mathematically simple and computationally efficient method for the unsupervised learning of image components, even in conditions where these elementary features overlap considerably. To test the proposed algorithm, a novel artificial task is introduced which is similar to the frequently-used bars problem but employs squares rather than bars to increase the degree of overlap between components. Using this task, we investigate how the proposed method performs on the parsing of artificial images composed of overlapping features, given the correct representation of the individual components; and secondly, we investigate how well it can learn the elementary components from artificial training images. We compare the performance of the proposed algorithm with its predecessors including variations on these algorithms that have produced state-of-the-art performance on the bars problem. The proposed algorithm is more successful than its predecessors in dealing with overlap and occlusion in the artificial task that has been used to assess performance. PMID:19424442
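
    A minimal sketch of the related nonnegative-matrix-factorisation baseline on a squares-style task with overlapping components, assuming scikit-learn; the DIM algorithm itself (negative-feedback competition) is not reproduced, and the data generator is illustrative.

```python
# Hedged sketch of the NMF baseline on an overlapping-squares task:
# synthetic images containing (possibly overlapping) squares are factorised
# into nonnegative components. Image size, component count, and generator
# are assumptions; DIM itself is not reproduced.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(10)

def random_squares_image(dim=8, n_squares=2, side=3):
    img = np.zeros((dim, dim))
    for _ in range(n_squares):
        r, c = rng.integers(0, dim - side, size=2)
        img[r:r + side, c:c + side] = 1.0   # squares may overlap
    return img.ravel()

X = np.vstack([random_squares_image() for _ in range(500)])
model = NMF(n_components=16, init="nndsvda", max_iter=500)
parts = model.fit(X).components_            # learned image components
print(parts.shape)                          # (16, 64)
```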

  3. Engagement Assessment Using EEG Signals

    NASA Technical Reports Server (NTRS)

    Li, Feng; Li, Jiang; McKenzie, Frederic; Zhang, Guangfan; Wang, Wei; Pepe, Aaron; Xu, Roger; Schnell, Thomas; Anderson, Nick; Heitkamp, Dean

    2012-01-01

    In this paper, we present methods to analyze and improve an EEG-based engagement assessment approach, consisting of data preprocessing, feature extraction and engagement state classification. During data preprocessing, spikes, baseline drift and saturation caused by recording devices in EEG signals are identified and eliminated, and a wavelet based method is utilized to remove ocular and muscular artifacts in the EEG recordings. In feature extraction, power spectrum densities with 1 Hz bin are calculated as features, and these features are analyzed using the Fisher score and the one way ANOVA method. In the classification step, a committee classifier is trained based on the extracted features to assess engagement status. Finally, experiment results showed that there exist significant differences in the extracted features among different subjects, and we have implemented a feature normalization procedure to mitigate the differences and significantly improved the engagement assessment performance.
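
    A minimal sketch of the feature-extraction and screening steps described above, assuming SciPy's Welch estimator for roughly 1 Hz bins and a one-way ANOVA to rank bins across engagement states; the sampling rate, band limits, and data are illustrative assumptions.

```python
# Hedged sketch: Welch PSD in ~1 Hz bins per channel as features, ranked by
# a one-way ANOVA across engagement states. Sampling rate, band limits, and
# synthetic data are assumptions.
import numpy as np
from scipy.signal import welch
from scipy.stats import f_oneway

fs = 256                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(5)
eeg = rng.standard_normal((90, 8, fs * 4))   # epochs x channels x samples
states = rng.integers(0, 3, size=90)         # e.g., low / medium / high engagement

def psd_features(epoch):
    """Concatenate 1-40 Hz PSD bins (~1 Hz wide) over all channels."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs, axis=-1)
    band = (f >= 1) & (f <= 40)
    return np.log(pxx[:, band]).ravel()

X = np.vstack([psd_features(e) for e in eeg])
fvals, pvals = f_oneway(*(X[states == s] for s in np.unique(states)))
print("most discriminative bins:", np.argsort(pvals)[:10])
```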

  4. The optional selection of micro-motion feature based on Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ren, Hongmei; Xiao, Zhi-he; Sheng, Jing

    2017-11-01

    Targets exhibit multiple micro-motion forms, and the modulations produced by different micro-motion forms are easily confused, which makes feature extraction and recognition difficult. Aiming at feature extraction for cone-shaped objects with different micro-motion forms, this paper proposes an optimal micro-motion feature selection method based on the support vector machine. After computing the time-frequency distribution of the radar echoes and comparing the time-frequency spectra of objects with different micro-motion forms, features are extracted based on the differences between the instantaneous frequency variations of the different micro-motions. The features are then evaluated with an SVM (Support Vector Machine)-based method and the best features are selected. Finally, the results show that the proposed method is feasible under test conditions with a given signal-to-noise ratio (SNR).

  5. A Review of Feature Extraction Software for Microarray Gene Expression Data

    PubMed Central

    Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini

    2014-01-01

    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315
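
    A minimal sketch of two of the reviewed feature-extraction families (PCA and ICA) applied to a gene-expression matrix with scikit-learn, one software option among those surveyed; the component counts and the random matrix are illustrative.

```python
# Hedged sketch: PCA and ICA feature extraction from a gene-expression
# matrix with scikit-learn; component counts and the random matrix are
# illustrative only.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(6)
expression = rng.standard_normal((100, 5000))   # samples x genes

pca_features = PCA(n_components=20).fit_transform(expression)
ica_features = FastICA(n_components=20, random_state=0).fit_transform(expression)
print(pca_features.shape, ica_features.shape)   # (100, 20) each
```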

  6. The application of artificial intelligence for the identification of the maceral groups and mineral components of coal

    NASA Astrophysics Data System (ADS)

    Mlynarczuk, Mariusz; Skiba, Marta

    2017-06-01

    The correct and consistent identification of the petrographic properties of coal is an important issue for researchers in the fields of mining and geology. As part of the study described in this paper, investigations concerning the application of artificial intelligence methods for the identification of the aforementioned characteristics were carried out. The methods in question were used to identify the maceral groups of coal, i.e., vitrinite, inertinite, and liptinite. Additionally, an attempt was made to identify some non-organic minerals. The analyses were performed using pattern recognition techniques (NN, kNN), as well as artificial neural network techniques (a multilayer perceptron - MLP). The classification process was carried out using microscopy images of polished sections of coals. A multidimensional feature space was defined, which made it possible to classify the discussed structures automatically, based on the methods of pattern recognition and algorithms of the artificial neural networks. We also assessed the impact of the parameters of the applied methods on the final outcome of the classification procedure. The analyses resulted in a high percentage (over 97%) of correct classifications of maceral groups and mineral components. The paper also discusses an attempt to analyze the particular macerals of the inertinite group. It was demonstrated that using artificial neural networks to this end makes it possible to classify the macerals properly in over 91% of cases. Thus, it was proved that artificial intelligence methods can be successfully applied for the identification of selected petrographic features of coal.

  7. "Artificial But Better Than Nothing".

    PubMed

    Blaschke, Sarah; O'Callaghan, Clare C; Schofield, Penelope

    2017-04-01

    To investigate patient, staff, and carer responses to an environmental intervention in an oncology clinic waiting room and evaluate the acceptability of artificial plant materials. Design: Postintervention cross-sectional survey study. Oncology outpatient clinic waiting room located in a metropolitan comprehensive cancer center in Australia. Observer ratings of perceived qualities and effects of lifelike (fake) plants while spending time in the waiting room. Convenience sample (N = 143) consisted of 73 cancer patients, 13 staff, 52 carers, and 5 "others" aged between 24 and 89 years (M = 56, SD = 14.5). Artificial plant arrangements, hanging installations, two movable green walls, and one rock garden on wheels placed throughout the outpatients' clinic waiting room. Eighty-one percent (115/142) of respondents noticed the green features when first entering the waiting room and 67% (90/134) noticed they were artificial. Eighty-one percent (115/142) indicated "like/like a lot" when reporting their first reaction to the green features. Forty-eight percent (68/143) were positively affected and 23% (33/143) were very positively affected. Eighty-one percent (110/135) agreed/strongly agreed that "The greenery brightens the waiting room," 62% (80/130) agreed/strongly agreed that they "prefer living plants," and 76% (101/133) agreed/strongly agreed that "'lifelike' plants are better than no plants." Comments included mostly positive appraisals and occasional adverse reactions to artificial plants. No significant differences were found between patients', staff, and carers' reactions. The environmental intervention positively impacted patients', staff, and carers' perceptions of the oncology waiting room environment. Patients, staff, and carers mostly accepted artificial plants as an alternative design solution to real plants.

  8. Discrimination of artificial categories structured by family resemblances: a comparative study in people (Homo sapiens) and pigeons (Columba livia).

    PubMed

    Makino, Hiroshi; Jitsumori, Masako

    2007-02-01

    Adult humans (Homo sapiens) and pigeons (Columba livia) were trained to discriminate artificial categories that the authors created by mimicking 2 properties of natural categories. One was a family resemblance relationship: The highly variable exemplars, including those that did not have features in common, were structured by a similarity network with the features correlating to one another in each category. The other was a polymorphous rule: No single feature was essential for distinguishing the categories, and all the features overlapped between the categories. Pigeons learned the categories with ease and then showed a prototype effect in accord with the degrees of family resemblance for novel stimuli. Some evidence was also observed for interactive effects of learning of individual exemplars and feature frequencies. Humans had difficulty in learning the categories. The participants who learned the categories generally responded to novel stimuli in an all-or-none fashion on the basis of their acquired classification decision rules. The processes that underlie the classification performances of the 2 species are discussed.

  9. Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals

    PubMed Central

    Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu

    2012-01-01

    Bearings are not only the most important element but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) of bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features including time, frequency and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction technology, and it uses the least squares method to obtain the best projection direction, rather than computing the density matrix of features, so it also has the advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying the acquired vibration signals data to bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structure information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017

  10. An Optimal Mean Based Block Robust Feature Extraction Method to Identify Colorectal Cancer Genes with Integrated Data.

    PubMed

    Liu, Jian; Cheng, Yuhu; Wang, Xuesong; Zhang, Lin; Liu, Hui

    2017-08-17

    It is urgent to diagnose colorectal cancer in the early stage. Some feature genes which are important to colorectal cancer development have been identified. However, for the early stage of colorectal cancer, less is known about the identity of specific cancer genes that are associated with advanced clinical stage. In this paper, we developed a feature extraction method named Optimal Mean based Block Robust Feature Extraction (OMBRFE) to identify feature genes associated with advanced colorectal cancer in clinical stage by using integrated colorectal cancer data. Firstly, based on the optimal mean and the L2,1-norm, a novel feature extraction method called Optimal Mean based Robust Feature Extraction (OMRFE) is proposed to identify feature genes. Then the OMBRFE method, which introduces the block ideology into the OMRFE method, is put forward to process the colorectal cancer integrated data, which includes multiple genomic data: copy number alterations, somatic mutations, methylation expression alteration, as well as gene expression changes. Experimental results demonstrate that OMBRFE is more effective than previous methods in identifying the feature genes. Moreover, genes identified by OMBRFE are verified to be closely associated with advanced colorectal cancer in clinical stage.

  11. A judicious multiple hypothesis tracker with interacting feature extraction

    NASA Astrophysics Data System (ADS)

    McAnanama, James G.; Kirubarajan, T.

    2009-05-01

    The multiple hypotheses tracker (mht) is recognized as an optimal tracking method due to the enumeration of all possible measurement-to-track associations, which does not involve any approximation in its original formulation. However, its practical implementation is limited by the NP-hard nature of this enumeration. As a result, a number of maintenance techniques such as pruning and merging have been proposed to bound the computational complexity. It is possible to improve the performance of a tracker, mht or not, using feature information (e.g., signal strength, size, type) in addition to kinematic data. However, in most tracking systems, the extraction of features from the raw sensor data is typically independent of the subsequent association and filtering stages. In this paper, a new approach, called the Judicious Multi Hypotheses Tracker (jmht), whereby there is an interaction between feature extraction and the mht, is presented. The measure of the quality of feature extraction is input into measurement-to-track association while the prediction step feeds back the parameters to be used in the next round of feature extraction. The motivation for this forward and backward interaction between feature extraction and tracking is to improve the performance in both steps. This approach allows for a more rational partitioning of the feature space and removes unlikely features from the assignment problem. Simulation results demonstrate the benefits of the proposed approach.

  12. A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.

    PubMed

    Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun

    2017-07-01

    Feature extraction of EEG signals plays a significant role in Brain-computer interface (BCI) as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals to improve classification performances and to reduce the time complexity. This study develops a robust feature extraction method combining the principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from the mental states based on EEG signals in BCI applications. We apply the correlation based variable selection method with the best first search on the extracted features to identify the best feature set for characterizing the distribution of mental state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques: multilayer perceptron neural networks (MLP), least square support vector machine (LS-SVM), and logistic regression (LR) are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets. Furthermore, we evaluate the performance of the proposed methods by comparing it with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) for the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance for the obtained feature. The average sensitivity, specificity and classification accuracy for these two classifiers are same, which are 99.32%, 100%, and 99.66%, respectively for the BCI competition dataset IVa and 100%, 100%, and 100%, for the BCI competition dataset IVb. The results also indicate the proposed methods outperform the most recently reported methods by at least 0.25% average accuracy improvement in dataset IVa. The execution time results show that the proposed method has less time complexity after feature selection. The proposed feature extraction method is very effective for getting representatives information from mental states EEG signals in BCI applications and reducing the computational complexity of classifiers by reducing the number of extracted features. Copyright © 2017 Elsevier B.V. All rights reserved.
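
    A minimal sketch in the spirit of the PCA-aided cross-covariance idea: per-trial channel covariance features are vectorized, PCA retains the dominant directions, and a simple classifier is trained. This is a simplified reading, not the authors' exact algorithm; shapes and parameters are assumptions.

```python
# Hedged sketch (simplified reading of the method): per-trial channel
# covariance features, PCA to keep the dominant directions, and a logistic
# regression classifier; shapes and parameters are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
trials = rng.standard_normal((60, 16, 400))   # trials x channels x samples
labels = rng.integers(0, 2, size=60)

def ccov_features(trial):
    """Upper triangle of the channel cross-covariance matrix."""
    c = np.cov(trial)
    return c[np.triu_indices_from(c)]

X = np.vstack([ccov_features(t) for t in trials])
clf = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
clf.fit(X, labels)
print(clf.score(X, labels))
```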

  13. STDP-based spiking deep convolutional neural networks for object recognition.

    PubMed

    Kheradpisheh, Saeed Reza; Ganjtabesh, Mohammad; Thorpe, Simon J; Masquelier, Timothée

    2018-03-01

    Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousands spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundreds of such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. User-oriented summary extraction for soccer video based on multimodal analysis

    NASA Astrophysics Data System (ADS)

    Liu, Huayong; Jiang, Shanshan; He, Tingting

    2011-11-01

    An advanced user-oriented summary extraction method for soccer video is proposed in this work. Firstly, an algorithm of user-oriented summary extraction for soccer video is introduced. A novel approach that integrates multimodal analysis, such as extraction and analysis of the stadium features, moving object features, audio features and text features is introduced. By these features the semantic of the soccer video and the highlight mode are obtained. Then we can find the highlight position and put them together by highlight degrees to obtain the video summary. The experimental results for sports video of world cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.

  15. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    NASA Astrophysics Data System (ADS)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors have the ability to detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing recent related research. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction emerge as promising techniques for further analysis in recent developments in feature extraction and classification.

  16. Mapping surface disturbance of energy-related infrastructure in southwest Wyoming--An assessment of methods

    USGS Publications Warehouse

    Germaine, Stephen S.; O'Donnell, Michael S.; Aldridge, Cameron L.; Baer, Lori; Fancher, Tammy; McBeth, Jamie; McDougal, Robert R.; Waltermire, Robert; Bowen, Zachary H.; Diffendorfer, James; Garman, Steven; Hanson, Leanne

    2012-01-01

    We evaluated how well three leading information-extraction software programs (eCognition, Feature Analyst, Feature Extraction) and manual hand digitization interpreted information from remotely sensed imagery of a visually complex gas field in Wyoming. Specifically, we compared how each mapped the area of and classified the disturbance features present on each of three remotely sensed images, including 30-meter-resolution Landsat, 10-meter-resolution SPOT (Satellite Pour l'Observation de la Terre), and 0.6-meter resolution pan-sharpened QuickBird scenes. Feature Extraction mapped the spatial area of disturbance features most accurately on the Landsat and QuickBird imagery, while hand digitization was most accurate on the SPOT imagery. Footprint non-overlap error was smallest on the Feature Analyst map of the Landsat imagery, the hand digitization map of the SPOT imagery, and the Feature Extraction map of the QuickBird imagery. When evaluating feature classification success against a set of ground-truthed control points, Feature Analyst, Feature Extraction, and hand digitization classified features with similar success on the QuickBird and SPOT imagery, while eCognition classified features poorly relative to the other methods. All maps derived from Landsat imagery classified disturbance features poorly. Using the hand digitized QuickBird data as a reference and making pixel-by-pixel comparisons, Feature Extraction classified features best overall on the QuickBird imagery, and Feature Analyst classified features best overall on the SPOT and Landsat imagery. Based on the entire suite of tasks we evaluated, Feature Extraction performed best overall on the Landsat and QuickBird imagery, while hand digitization performed best overall on the SPOT imagery, and eCognition performed worst overall on all three images. Error rates for both area measurements and feature classification were prohibitively high on Landsat imagery, while QuickBird was time and cost prohibitive for mapping large spatial extents. The SPOT imagery produced map products that were far more accurate than Landsat and did so at a far lower cost than QuickBird imagery. Consideration of degree of map accuracy required, costs associated with image acquisition, software, operator and computation time, and tradeoffs in the form of spatial extent versus resolution should all be considered when evaluating which combination of imagery and information-extraction method might best serve any given land use mapping project. When resources permit, attaining imagery that supports the highest classification and measurement accuracy possible is recommended.

  17. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant, yet it is still not clear which feature extraction methods should be preferred. To help improve the situation, we present the results of a study evaluating the efficiency of different wavelet-transform feature extraction methods for brain MRI abnormality detection. Using T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual-Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three classifiers, namely Support Vector Machine, K-Nearest Neighbor, and a Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM yield the highest classification accuracy, demonstrating that wavelet-transform features are informative in this application.
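
    A minimal sketch of how such a wavelet feature pool might be built and classified in Python, assuming PyWavelets and scikit-learn; the synthetic images, dummy labels, wavelet choice, and subband statistics are illustrative stand-ins, not the study's actual pipeline.

      # DWT subband statistics as features, classified with an SVM (hedged sketch).
      import numpy as np
      import pywt
      from sklearn.svm import SVC

      def dwt_features(img, wavelet="db2", level=3):
          """Energy and standard deviation of each 2-D DWT subband."""
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          # coeffs[0] is the approximation; the rest are (cH, cV, cD) detail tuples
          bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
          return np.array([stat for band in bands for stat in (np.sum(band ** 2), np.std(band))])

      rng = np.random.default_rng(0)
      X = np.stack([dwt_features(rng.normal(size=(64, 64))) for _ in range(40)])  # stand-ins for T1 slices
      y = rng.integers(0, 2, size=40)                                             # dummy normal/abnormal labels
      print(SVC(kernel="rbf").fit(X, y).score(X, y))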

  18. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

    A novel method for detecting ships that aims to make full use of both the spatial and spectral information in hyperspectral images is proposed. Firstly, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Secondly, multiple features, including spectral and texture features, are extracted from the hyperspectral images: Principal Components Analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing a single feature with different multiple-feature combinations. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method stably detects ships against complex backgrounds and effectively improves ship detection accuracy.
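
    A hedged sketch of the pipeline outlined above (Otsu sea masking on one band, PCA spectral features, GLCM texture, Random Forest), assuming scikit-image and scikit-learn; the synthetic cube, the chosen band index, and the labels are placeholders rather than EO-1 data.

      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.feature import graycomatrix, graycoprops   # spelled greycomatrix in older scikit-image
      from sklearn.decomposition import PCA
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)
      cube = rng.random((64, 64, 50))               # rows x cols x bands, stand-in for a hyperspectral scene
      nir = cube[:, :, 40]                          # assume band 40 lies in the NIR/SWIR range
      sea_mask = nir < threshold_otsu(nir)          # Otsu split of land vs. sea

      spectral = PCA(n_components=5).fit_transform(cube.reshape(-1, 50))   # per-pixel spectral features

      def glcm_contrast(window):
          g = graycomatrix((window * 255).astype(np.uint8), [1], [0], levels=256, symmetric=True, normed=True)
          return graycoprops(g, "contrast")[0, 0]

      # Texture per sample window (would be stacked with the spectral features in practice)
      texture = np.array([glcm_contrast(nir[r:r + 8, :8]) for r in range(0, 64, 8)])

      X = spectral[sea_mask.ravel()]                # features for sea pixels only
      y = rng.integers(0, 2, size=X.shape[0])       # dummy ship / no-ship labels
      rf = RandomForestClassifier(n_estimators=100).fit(X, y)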

  19. iFeature: a python package and web server for features extraction and selection from protein and peptide sequences.

    PubMed

    Chen, Zhen; Zhao, Pei; Li, Fuyi; Leier, André; Marquez-Lago, Tatiana T; Wang, Yanan; Webb, Geoffrey I; Smith, A Ian; Daly, Roger J; Chou, Kuo-Chen; Song, Jiangning

    2018-03-08

    Structural and physicochemical descriptors extracted from sequence data have been widely used to represent sequences and predict structural, functional, expression and interaction profiles of proteins and peptides as well as DNAs/RNAs. Here, we present iFeature, a versatile Python-based toolkit for generating various numerical feature representation schemes for both protein and peptide sequences. iFeature is capable of calculating and extracting a comprehensive spectrum of 18 major sequence encoding schemes that encompass 53 different types of feature descriptors. It also allows users to extract specific amino acid properties from the AAindex database. Furthermore, iFeature integrates 12 different types of commonly used feature clustering, selection, and dimensionality reduction algorithms, greatly facilitating training, analysis, and benchmarking of machine-learning models. The functionality of iFeature is made freely available via an online web server and a stand-alone toolkit. Availability: http://iFeature.erc.monash.edu/; https://github.com/Superzchen/iFeature/. Contact: jiangning.song@monash.edu; kcchou@gordonlifescience.org; roger.daly@monash.edu. Supplementary data are available at Bioinformatics online.
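
    As a small illustration of the kind of descriptor such toolkits compute, the snippet below hand-rolls amino acid composition (AAC), one of the simplest encodings in this family; it is plain Python and does not reproduce iFeature's own API or file formats.

      from collections import Counter

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

      def aac(sequence):
          """Fraction of each of the 20 standard amino acids in a protein sequence."""
          counts = Counter(sequence.upper())
          return [counts.get(aa, 0) / len(sequence) for aa in AMINO_ACIDS]

      print(aac("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))   # 20-dimensional feature vector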

  20. A graph-Laplacian-based feature extraction algorithm for neural spike sorting.

    PubMed

    Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos

    2009-01-01

    Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of the clustering that is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters than PCA. As an added benefit, tentative cluster centers are output, which can be used to initialize a subsequent clustering stage.
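
    A Laplacian-eigenmaps-style sketch of graph-based feature extraction for spike waveforms, assuming NumPy and SciPy; the paper's GLF objective also maximizes a weighted variance, so this shows only the Laplacian-minimization part on synthetic waveforms.

      import numpy as np
      from scipy.linalg import eigh
      from scipy.spatial.distance import cdist

      rng = np.random.default_rng(0)
      spikes = rng.normal(size=(200, 48))           # 200 spike waveforms, 48 samples each

      d2 = cdist(spikes, spikes, "sqeuclidean")
      W = np.exp(-d2 / np.median(d2))               # Gaussian affinity graph
      D = np.diag(W.sum(axis=1))
      L = D - W                                     # unnormalized graph Laplacian

      # Generalized eigenvectors of L v = lambda D v; skip the constant first one
      vals, vecs = eigh(L, D)
      features = vecs[:, 1:4]                       # low-dimensional features for clustering
      print(features.shape)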

  1. Educational Data Mining Application for Estimating Students Performance in Weka Environment

    NASA Astrophysics Data System (ADS)

    Gowri, G. Shiyamala; Thulasiram, Ramasamy; Amit Baburao, Mahindra

    2017-11-01

    Educational data mining (EDM) is a multi-disciplinary research area that combines artificial intelligence, statistical modeling, and data mining applied to the data generated by an educational institution. EDM uses computational approaches to interpret educational information in order to examine educational questions. For an education system to stand out among those of other nations, its framework has to undergo a major redesign. The hidden patterns in various information repositories can be extracted by adopting data mining techniques. In order to summarize the performance of students together with their credentials, we scrutinize the use of data mining in academics. The Apriori algorithm is applied extensively to the student database to obtain broad classifications across various categories, and the K-means procedure is applied to the same database to group records into specific categories. Apriori mines association rules in order to extract similar patterns and their associations across sets of records drawn from academic information repositories. The parameters used in this study give more importance to psychological traits than to academic features. Undesirable student conduct can be clearly identified with such data mining frameworks; the algorithms thus efficiently profile students in any educational environment. The ultimate objective of the study is to assess whether a student is prone to violence.
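
    A hedged Python sketch of the two mining steps described above, using scikit-learn's K-means and mlxtend's Apriori as stand-ins for the Weka environment actually used in the study; the student attributes and thresholds are invented for illustration.

      import pandas as pd
      from sklearn.cluster import KMeans
      from mlxtend.frequent_patterns import apriori, association_rules

      students = pd.DataFrame({                     # hypothetical one-hot student attributes
          "attendance_low": [1, 0, 1, 1, 0, 0],
          "stress_high":    [1, 0, 1, 0, 0, 1],
          "grades_low":     [1, 0, 1, 1, 0, 0],
      }).astype(bool)

      # Apriori: frequent attribute sets and association rules over the records
      frequent = apriori(students, min_support=0.3, use_colnames=True)
      rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
      print(rules[["antecedents", "consequents", "confidence"]])

      # K-means: group the same records into broad student profiles
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(students.astype(int))
      print(labels)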

  2. Robust digital image watermarking using distortion-compensated dither modulation

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Yuan, Xiaochen

    2018-04-01

    In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies entropy-based filtering. The experimental results show that the proposed method achieves satisfactory robustness while preserving watermark imperceptibility, compared to other existing methods.
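
    A simplified sketch of the dither-modulation embed/decode idea with a distortion-compensation factor, in plain NumPy; the original DC-QIM formulation couples the quantizer step to the compensation factor, and the paper applies this to extracted feature regions, so treat the snippet only as an illustration of the quantization scheme.

      import numpy as np

      DELTA = 8.0
      DITHER = (-DELTA / 4, +DELTA / 4)             # one dither lattice per message bit

      def embed(x, bit, alpha=0.9):
          d = DITHER[bit]
          q = DELTA * np.round((x - d) / DELTA) + d # dithered quantization of the host coefficient
          return x + alpha * (q - x)                # move only a fraction alpha toward the lattice

      def decode(y):
          dists = [abs(y - (DELTA * np.round((y - d) / DELTA) + d)) for d in DITHER]
          return int(np.argmin(dists))              # nearest dither lattice decides the bit

      x = 13.7                                      # a host coefficient
      print(decode(embed(x, 1)), decode(embed(x, 0)))   # expected: 1 0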

  3. Difet: Distributed Feature Extraction Tool for High Spatial Resolution Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Eken, S.; Aydın, E.; Sayar, A.

    2017-11-01

    In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. The robustness of the tool in the task of feature extraction from Landsat-8 imagery is evaluated in terms of horizontal scalability.
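
    A single-machine sketch of the detectors and descriptors named above using OpenCV; the Hadoop distribution layer that the tool adds is omitted, and the random image stands in for a Landsat-8 tile.

      import cv2
      import numpy as np

      img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)   # stand-in for an image tile

      # Harris corner response map
      harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)

      # ORB keypoints and binary descriptors (SIFT, SURF, FAST and BRIEF follow the same pattern)
      orb = cv2.ORB_create(nfeatures=500)
      keypoints, descriptors = orb.detectAndCompute(img, None)
      print(len(keypoints), None if descriptors is None else descriptors.shape)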

  4. Enhanced Solubility and Permeability of Salicis cortex Extract by Formulating as a Microemulsion.

    PubMed

    Piazzini, Vieri; Bigagli, Elisabetta; Luceri, Cristina; Bilia, Anna Rita; Bergonzi, Maria Camilla

    2018-04-24

    A microemulsion system was developed and investigated as a novel oral formulation to increase the solubility and absorption of Salicis cortex extract. This extract possesses many pharmacological activities; in particular, it is beneficial for back pain and osteoarthritic and rheumatic complaints. In this work, after qualitative and quantitative characterization of the extract and validation of an HPLC/diode array detector analytical method, solubility studies were performed to choose the best components for the microemulsion formulation. The optimized microemulsion consisted of 2.5 g of triacetin as the oil phase, 2.5 g of Tween 20 as the surfactant, 2.5 g of labrasol as the cosurfactant, and 5 g of water. The microemulsion was visually checked and characterized by light scattering techniques and morphological observations. The developed formulation appeared transparent, the droplet size was around 40 nm, and the ζ-potential was negative. The maximum loading content of Salicis cortex extract was 40 mg/mL. Furthermore, storage stability studies and an in vitro digestion assay were performed. The advantages offered by the microemulsion were evaluated in vitro using artificial membranes and cells, i.e., a parallel artificial membrane permeability assay and a Caco-2 model. Both studies proved that the microemulsion was successful in enhancing the permeation of extract compounds, so it could be useful for improving the bioefficacy of Salicis cortex.

  5. Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures

    NASA Astrophysics Data System (ADS)

    Li, Quanbao; Wei, Fajie; Zhou, Shenghan

    2017-05-01

    Linear discriminant analysis (LDA) is one of the most popular methods for linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used feature extraction approaches usually require linearity, independence, or large-sample conditions. However, in real-world applications, these assumptions are not always satisfied and cannot always be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA combines the advantages of parametric and nonparametric algorithms and achieves higher classification accuracy. A quartic unilateral kernel function may provide better prediction robustness than other kernel functions. LKNDA thus offers an alternative for discriminant cases involving complex nonlinear feature extraction or unknown features. Finally, an application of LKNDA to feature extraction from complex financial market activity is proposed.

  6. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification

    PubMed Central

    Wen, Tingxi; Zhang, Zhongnan

    2017-01-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-class and 3-class problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in extracting features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789

  7. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-class and 3-class problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in extracting features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.
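
    A generic frequency-domain feature sketch (Welch band power per EEG rhythm) assuming SciPy; the genetic-algorithm search over frequency bands that defines GAFDS is not reproduced here, only the kind of feature it searches over, and the sampling rate and band limits are assumptions.

      import numpy as np
      from scipy.signal import welch

      FS = 256                                              # assumed sampling rate, Hz
      BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

      def band_powers(x, fs=FS):
          f, pxx = welch(x, fs=fs, nperseg=fs * 2)
          df = f[1] - f[0]
          return {name: pxx[(f >= lo) & (f < hi)].sum() * df for name, (lo, hi) in BANDS.items()}

      eeg = np.random.default_rng(2).normal(size=FS * 10)   # 10 s of synthetic single-channel EEG
      print(band_powers(eeg))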

  8. Mining protein database using machine learning techniques.

    PubMed

    Camargo, Renata da Silva; Niranjan, Mahesan

    2008-08-25

    With a large amount of information relating to proteins accumulating in widely available online databases, it is of interest to apply machine learning techniques that, by extracting underlying statistical regularities in the data, make predictions about the functional and evolutionary characteristics of unseen proteins. Such predictions can help reduce the space that experiment designers need to search in order to improve our understanding of biochemical properties. It has previously been suggested that features computed by comparing a pair of proteins can be integrated by an artificial neural network to predict the degree to which the pair may be evolutionarily related and homologous.
    We compiled two datasets of protein pairs, each pair characterised by seven distinct features. We performed an exhaustive search through all possible combinations of features for the problem of separating remote homologous from analogous pairs, and we note that a significant performance gain was obtained by including sequence and structure information. We find that a linear classifier was enough to discriminate protein pairs at the family level. At the superfamily level, however, detecting remote homologous pairs was a relatively harder problem, and nonlinear classifiers achieved significantly higher accuracies.
    In this paper, we compare three different pattern classification methods on two problems formulated as detecting evolutionary and functional relationships between pairs of proteins and, from extensive cross-validation and feature selection studies, quantify the average limits and uncertainties with which such predictions may be made. Feature selection points to a "knowledge gap" in currently available functional annotations. We demonstrate how the scheme may be employed in a framework to associate an individual protein with an existing family of evolutionarily related proteins.
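
    A small synthetic illustration of the linear-versus-nonlinear comparison described above, assuming scikit-learn; the seven features and the decision rule are invented and do not correspond to the authors' protein-pair datasets.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)
      X = rng.normal(size=(400, 7))                                # seven pairwise-comparison features
      y = (np.sin(X[:, 0]) + X[:, 1] * X[:, 2] > 0).astype(int)    # a nonlinear separation rule

      for name, clf in [("linear", LogisticRegression(max_iter=1000)), ("rbf-SVM", SVC(kernel="rbf"))]:
          print(name, cross_val_score(clf, X, y, cv=5).mean())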

  9. Diabetic Retinopathy Screening by Bright Lesions Extraction from Fundus Images

    NASA Astrophysics Data System (ADS)

    Hanđsková, Veronika; Pavlovičova, Jarmila; Oravec, Miloš; Blaško, Radoslav

    2013-09-01

    Retinal images are nowadays widely used to diagnose many diseases, for example diabetic retinopathy. In our work, we propose an algorithm for a screening application that identifies patients with diabetic retinopathy, a severe diabetic complication, in its early phase. The application uses the patient's fundus photograph without any additional examination by an ophthalmologist. After this screening identification, other examination methods should be considered, and follow-up of the patient by a doctor is necessary. Our application is composed of three principal modules: fundus image preprocessing, feature extraction, and feature classification. The image preprocessing module performs luminance normalization, contrast enhancement, and optic disk masking. The feature extraction module includes two stages: localization of bright-lesion candidates and extraction of candidate features. We selected 16 statistical and structural features. For feature classification, we use a multilayer perceptron (MLP) with one hidden layer and classify images into two classes. Feature classification efficiency is about 93 percent.
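
    A hedged sketch of the classification stage only: a one-hidden-layer MLP over a 16-dimensional feature vector per bright-lesion candidate, assuming scikit-learn; the feature values and labels are synthetic placeholders, not fundus-derived data.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(4)
      X = rng.normal(size=(300, 16))                 # 16 statistical/structural features per candidate
      y = rng.integers(0, 2, size=300)               # dummy labels: lesion vs. non-lesion

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000).fit(X_tr, y_tr)
      print("test accuracy:", mlp.score(X_te, y_te))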

  10. Segmentation of retinal blood vessels using artificial neural networks for early detection of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mann, Kulwinder S.; Kaur, Sukhpreet

    2017-06-01

    There are various eye diseases in patients suffering from diabetes, including diabetic retinopathy, glaucoma, and hypertension. These are the most common sight-threatening eye diseases, arising from changes in the blood vessel structure. The proposed supervised method shows that segmentation of the retinal blood vessels can be performed accurately by training neural networks. The feature vector is computed from gray-level features, moment-invariant-based features, Gabor filtering, an intensity feature, and a vesselness feature; only the most prominent features are then used.

  11. Supervised Learning in CINets

    DTIC Science & Technology

    2011-07-01

    supervised learning process is compared to that of Artificial Neural Network (ANN), fuzzy logic rule set, and Bayesian network approaches...of both fuzzy logic systems and Artificial Neural Networks (ANNs). Like fuzzy logic systems, the CINet technique allows the use of human-intuitive...fuzzy rule systems [3] CINets also maintain features common to both fuzzy systems and ANNs. The technique can be shown to possess the property

  12. Acoustic-Seismic Mixed Feature Extraction Based on Wavelet Transform for Vehicle Classification in Wireless Sensor Networks.

    PubMed

    Zhang, Heng; Pan, Zhongming; Zhang, Wenna

    2018-06-07

    An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer coefficient was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either acoustic signal or seismic signal alone.
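
    A hedged sketch of the wavelet-coefficient energy ratio idea using the stationary (à trous / undecimated) wavelet transform from PyWavelets; the hierarchical-clustering simplification and the SVM stage from the paper are omitted, and the signals here are random stand-ins for sensor frames.

      import numpy as np
      import pywt

      def wcer(signal, wavelet="db4", level=4):
          """Energy of each detail layer divided by the total energy across layers."""
          coeffs = pywt.swt(signal, wavelet, level=level)     # list of (cA_i, cD_i) pairs
          energies = np.array([np.sum(cD ** 2) for _, cD in coeffs])
          return energies / energies.sum()

      rng = np.random.default_rng(5)
      acoustic = rng.normal(size=1024)                        # frame length must suit the SWT level
      seismic = rng.normal(size=1024)
      mixed_feature = np.concatenate([wcer(acoustic), wcer(seismic)])   # acoustic-seismic mixed feature
      print(mixed_feature)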

  13. Extraction of ECG signal with adaptive filter for heart abnormalities detection

    NASA Astrophysics Data System (ADS)

    Turnip, Mardi; Saragih, Rijois. I. E.; Dharma, Abdi; Esti Kusumandari, Dwi; Turnip, Arjon; Sitanggang, Delima; Aisyah, Siti

    2018-04-01

    This paper demonstrates an adaptive filter method for extraction of electrocardiogram (ECG) features in heart abnormality detection. An electrocardiogram (ECG) is a recording of the heart's electrical activity, capturing a tracing of the cardiac electrical impulse as it moves from the atrium to the ventricles. The applied algorithm evaluates and analyzes ECG signals for abnormality detection based on the P, Q, R, and S peaks. In the first phase, real-time ECG data are acquired and pre-processed. In the second phase, the procured ECG signal is subjected to a feature extraction process. The extracted features detect abnormal peaks present in the waveform; thus, normal and abnormal ECG signals can be differentiated based on the extracted features.
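
    An illustration of the peak-based feature idea only (R-peak detection and R-R intervals on a crude synthetic trace) using SciPy; the adaptive filtering stage and the P/Q/S analysis from the paper are not shown, and the sampling rate is an assumption.

      import numpy as np
      from scipy.signal import find_peaks

      FS = 250                                                       # assumed sampling rate, Hz
      t = np.arange(0, 10, 1 / FS)
      ecg = np.zeros_like(t)
      ecg[::FS] = 1.0                                                # crude R spikes once per second
      ecg += 0.05 * np.random.default_rng(6).normal(size=t.size)     # measurement noise

      r_peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * FS))
      rr_intervals = np.diff(r_peaks) / FS                           # simple rhythm feature
      print(len(r_peaks), rr_intervals.mean())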

  14. Artificial Neural Network for Probabilistic Feature Recognition in Liquid Chromatography Coupled to High-Resolution Mass Spectrometry.

    PubMed

    Woldegebriel, Michael; Derks, Eduard

    2017-01-17

    In this work, a novel probabilistic untargeted feature detection algorithm for liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS) using an artificial neural network (ANN) is presented. The feature detection process is approached as a pattern recognition problem, and thus an ANN is utilized as an efficient feature recognition tool. Unlike most existing feature detection algorithms, with this approach any suspected chromatographic profile (i.e., peak shape) can easily be incorporated by training the network, avoiding the need to perform computationally expensive regression with specific mathematical models. In addition, we have shown that the high-resolution raw data can be fully utilized without applying any arbitrary thresholds or data reduction, thereby improving the sensitivity of the method for compound identification purposes. Furthermore, as opposed to existing deterministic (binary) approaches, this method estimates the probability of a feature being present or absent at a given point of interest, giving all data points a chance to be propagated down the data analysis pipeline, weighted by their probability. The algorithm was tested on data sets generated from spiked samples in forensic and food safety contexts and showed promising results, detecting features for all compounds in a computationally reasonable time.

  15. Recursive Feature Extraction in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
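
    A hedged NetworkX sketch of the recursive-feature idea (base topological counts, then repeated mean/sum summaries of neighbors' features); it is not the released ReFeX code and skips its csv input/output and feature pruning.

      import numpy as np
      import networkx as nx

      def recursive_features(G, iterations=2):
          nodes = list(G.nodes())
          index = {n: i for i, n in enumerate(nodes)}
          # Base features: degree and egonet edge count
          feats = np.array([[G.degree(n),
                             G.subgraph(list(G.neighbors(n)) + [n]).number_of_edges()]
                            for n in nodes], dtype=float)
          for _ in range(iterations):
              agg = []
              for n in nodes:
                  nbr = feats[[index[m] for m in G.neighbors(n)]]
                  if nbr.size == 0:
                      nbr = np.zeros((1, feats.shape[1]))
                  agg.append(np.concatenate([nbr.mean(axis=0), nbr.sum(axis=0)]))
              feats = np.hstack([feats, np.array(agg)])   # append recursive summaries
          return feats

      print(recursive_features(nx.karate_club_graph()).shape)   # one feature row per node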

  16. Optimal spatiotemporal representation of multichannel EEG for recognition of brain states associated with distinct visual stimulus

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander; Musatov, Vyacheslav Yu.; Runnova, Anastasija E.; Efremova, Tatiana Yu.; Koronovskii, Alexey A.; Pisarchik, Alexander N.

    2018-04-01

    In this paper we propose an approach based on artificial neural networks for the recognition of different human brain states associated with distinct visual stimuli. Based on the developed numerical technique and the analysis of experimental multichannel EEG data, we optimize the spatiotemporal representation of the multichannel EEG to provide close to 97% accuracy in recognizing the EEG brain states during visual perception. Different interpretations of an ambiguous image produce different oscillatory patterns in the human EEG, with similar features for each interpretation. Since these features are inherent to all subjects, a single artificial neural network can classify the associated brain states of other subjects with high quality.

  17. Robust image features: concentric contrasting circles and their image extraction

    NASA Astrophysics Data System (ADS)

    Gatrell, Lance B.; Hoff, William A.; Sklair, Cheryl W.

    1992-03-01

    Many computer vision tasks can be simplified if special image features are placed on the objects to be recognized. A review of special image features that have been used in the past is given, and then a new image feature, the concentric contrasting circle, is presented. The concentric contrasting circle image feature has the advantages of being easily manufactured, easily extracted from the image, and robustly extracted (true targets are found while few false targets are found); it is a passive feature, and its centroid is completely invariant to the three translational degrees of freedom and one rotational degree of freedom and nearly invariant to the remaining two rotational degrees of freedom. There are several examples of existing parallel implementations which perform most of the extraction work. Extraction robustness was measured by recording the probability of correct detection and the false alarm rate in a set of images of scenes containing mockups of satellites, fluid couplings, and electrical components. A typical application of concentric contrasting circle features is to place them on modeled objects for monocular pose estimation or object identification. This feature is demonstrated on a visually challenging background of a specular but wrinkled surface similar to a multilayered insulation spacecraft thermal blanket.

  18. Deep Learning Methods for Underwater Target Feature Extraction and Recognition

    PubMed Central

    Peng, Yuan; Qiu, Mengran; Shi, Jianfei; Liu, Liangliang

    2018-01-01

    The classification and recognition of underwater acoustic signals have long been an important research topic in underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel-frequency cepstral coefficients are used for underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a CNN and an ELM is proposed: an automatic feature extraction method for underwater acoustic signals using a deep convolutional network, together with an underwater target recognition classifier based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, that function relies mainly on a fully connected layer trained by gradient descent, whose generalization ability is limited and suboptimal, so an extreme learning machine (ELM) is used in the classification stage. First, the CNN learns deep and robust features, after which the fully connected layers are removed. Then, an ELM fed with the CNN features is used as the classifier. Experiments on an actual data set of civil ships achieved a 93.04% recognition rate; compared with traditional Mel-frequency cepstral coefficients and Hilbert-Huang features, the recognition rate is greatly improved. PMID:29780407
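
    A minimal extreme learning machine sketch for the classification stage (random hidden layer, closed-form output weights via the pseudo-inverse); the CNN feature extractor is replaced here by random feature vectors, so this only illustrates why the ELM stage is cheap to train.

      import numpy as np

      rng = np.random.default_rng(7)
      X = rng.normal(size=(500, 128))              # stand-in for CNN-derived features
      y = rng.integers(0, 4, size=500)             # dummy labels for four target classes
      T = np.eye(4)[y]                             # one-hot targets

      n_hidden = 256
      W = rng.normal(size=(128, n_hidden))         # random, untrained input weights
      b = rng.normal(size=n_hidden)
      H = np.tanh(X @ W + b)                       # hidden-layer activations
      beta = np.linalg.pinv(H) @ T                 # output weights by least squares

      pred = np.argmax(H @ beta, axis=1)
      print("training accuracy:", (pred == y).mean())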

  19. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323

  20. A method for real-time implementation of HOG feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), and automatic target recognition (ATR). However, the computation of HOG feature extraction is unsuitable for hardware implementation because it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on an FPGA are proposed. The main principle is as follows: first, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed; second, the calculation of the arctangent and square root operations was simplified; finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be implemented within one pixel period by these computing units.
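
    For contrast with the FPGA pipeline, a software reference for computing the same kind of descriptor, assuming scikit-image; the window size and HOG parameters are typical choices, not the paper's hardware configuration.

      import numpy as np
      from skimage.feature import hog

      image = np.random.default_rng(8).random((128, 64))    # stand-in for a detection window
      features = hog(image,
                     orientations=9,
                     pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2),
                     block_norm="L2-Hys")
      print(features.shape)                                 # flattened HOG descriptor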
