Sample records for feature extraction stage

  1. Extracting features from protein sequences to improve deep extreme learning machine for protein fold recognition.

    PubMed

    Ibrahim, Wisam; Abadeh, Mohammad Saniee

    2017-05-21

    Protein fold recognition is an important problem in bioinformatics for predicting the three-dimensional structure of a protein. One of the most challenging tasks in the protein fold recognition problem is the extraction of efficient features from amino-acid sequences to obtain better classifiers. In this paper, we have proposed six descriptors to extract features from protein sequences. These descriptors are applied in the first stage of a three-stage framework, PCA-DELM-LDA, to extract feature vectors from the amino-acid sequences. Principal Component Analysis (PCA) has been implemented to reduce the number of extracted features. The extracted feature vectors have been used with the original features to improve the performance of the Deep Extreme Learning Machine (DELM) in the second stage. Four new features have been extracted from the second stage and used in the third stage by Linear Discriminant Analysis (LDA) to classify the instances into 27 folds. The proposed framework is implemented on the independent and combined feature sets in SCOP datasets. The experimental results show that the feature vectors extracted in the first stage could improve the performance of DELM in extracting new useful features in the second stage. Copyright © 2017 Elsevier Ltd. All rights reserved.
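The dimensionality-reduction step in the first stage of the framework above is standard PCA. A minimal sketch of that projection via SVD (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto the top principal components.

    X: (n_samples, n_features) matrix of extracted sequence features.
    Returns the reduced (n_samples, n_components) matrix.
    """
    Xc = X - X.mean(axis=0)                        # center each feature
    # SVD of the centered data; rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# toy example: 6 samples with 4 features
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
Z = pca_reduce(X, 2)
```

The reduced vectors Z would then be concatenated with the original features before training the downstream classifier.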

  2. An Optimal Mean Based Block Robust Feature Extraction Method to Identify Colorectal Cancer Genes with Integrated Data.

    PubMed

    Liu, Jian; Cheng, Yuhu; Wang, Xuesong; Zhang, Lin; Liu, Hui

    2017-08-17

    It is urgent to diagnose colorectal cancer in the early stage. Some feature genes which are important to colorectal cancer development have been identified. However, for the early stage of colorectal cancer, less is known about the identity of specific cancer genes that are associated with advanced clinical stage. In this paper, we developed a feature extraction method named the Optimal Mean based Block Robust Feature Extraction method (OMBRFE) to identify feature genes associated with advanced colorectal cancer in clinical stage by using integrated colorectal cancer data. Firstly, based on the optimal mean and the L2,1-norm, a novel feature extraction method called the Optimal Mean based Robust Feature Extraction method (OMRFE) is proposed to identify feature genes. Then the OMBRFE method, which introduces the block strategy into the OMRFE method, is put forward to process the integrated colorectal cancer data, which includes multiple genomic data types: copy number alterations, somatic mutations, methylation expression alteration, as well as gene expression changes. Experimental results demonstrate that OMBRFE is more effective than previous methods in identifying the feature genes. Moreover, genes identified by OMBRFE are verified to be closely associated with advanced colorectal cancer in clinical stage.
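The OMRFE objective above is built on the L2,1-norm, which sums the Euclidean norms of a matrix's rows and thereby encourages row-sparse (gene-selecting) solutions. A small sketch of just this norm (illustrative only; the paper's full optimization is not reproduced):

```python
import numpy as np

def l21_norm(X):
    """L2,1-norm of a matrix: the sum of the 2-norms of its rows.

    Penalizing this quantity drives entire rows (features/genes)
    toward zero, which is why it is popular for feature selection.
    """
    return float(np.sum(np.sqrt(np.sum(np.asarray(X, float) ** 2, axis=1))))

# one nonzero row with norm 5, one zero row
value = l21_norm(np.array([[3.0, 4.0], [0.0, 0.0]]))
```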

  3. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    PubMed

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep depending on the stage of sleep to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90% based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
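The Relative Wavelet Entropy used above measures the divergence between two normalized wavelet energy distributions. A minimal sketch, assuming the per-band energies have already been computed (the function name and toy values are hypothetical):

```python
import numpy as np

def relative_wavelet_entropy(p, q, eps=1e-12):
    """Kullback-Leibler-style divergence between two wavelet energy
    distributions p and q (one value per frequency band).

    Both inputs are normalized to sum to 1; the result is zero when
    the distributions coincide and grows as they diverge.
    """
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# band energies of two channels (toy values)
p_chan = [0.5, 0.3, 0.2]
q_chan = [0.2, 0.3, 0.5]
rwe = relative_wavelet_entropy(p_chan, q_chan)
```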

  4. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics

    PubMed Central

    Chriskos, Panteleimon; Frantzidis, Christos A.; Gkivogkli, Polyxeni T.; Bamidis, Panagiotis D.; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep depending on the stage of sleep to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the “ENVIHAB” facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90% based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging. PMID:29628883

  5. Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.

    PubMed

    Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel

    2017-08-18

    Nowadays, sleep quality is one of the most important measures of healthy life, especially considering the huge number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive feature set including 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from the temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was average, although they are known to generate more stable results and better accuracy. The Borda and RRA rank aggregation methods could not significantly outperform the conventional feature ranking methods. Among the conventional methods, some performed slightly better than others, although the choice of a suitable technique depends on the computational complexity and accuracy requirements of the user.
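The Borda rank aggregation evaluated above can be sketched in a few lines: each feature earns points according to its position in every ranking, and the consensus sorts by total points. The feature names and rankings below are invented for illustration:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate several feature rankings with the Borda count.

    rankings: list of lists, each an ordering of feature names from
    best to worst. A feature earns (n - position) points per ranking;
    the consensus ranking sorts features by total points, descending.
    """
    n = len(rankings[0])
    scores = defaultdict(int)
    for ranking in rankings:
        for pos, feat in enumerate(ranking):
            scores[feat] += n - pos
    return sorted(scores, key=scores.get, reverse=True)

# three hypothetical rankers ordering four feature categories
consensus = borda_aggregate([
    ["spectral", "entropy", "temporal", "nonlinear"],
    ["entropy", "spectral", "temporal", "nonlinear"],
    ["spectral", "temporal", "entropy", "nonlinear"],
])
```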

  6. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms, as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  7. Feature extraction and selection strategies for automated target recognition

    NASA Astrophysics Data System (ADS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-04-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concerns transforming potential target data into more useful forms, as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  8. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  9. A graph-Laplacian-based feature extraction algorithm for neural spike sorting.

    PubMed

    Ghanbari, Yasser; Spence, Larry; Papamichalis, Panos

    2009-01-01

    Analysis of extracellular neural spike recordings is highly dependent upon the accuracy of neural waveform classification, commonly referred to as spike sorting. Feature extraction is an important stage of this process because it can limit the quality of clustering which is performed in the feature space. This paper proposes a new feature extraction method (which we call Graph Laplacian Features, GLF) based on minimizing the graph Laplacian and maximizing the weighted variance. The algorithm is compared with Principal Components Analysis (PCA, the most commonly-used feature extraction method) using simulated neural data. The results show that the proposed algorithm produces more compact and well-separated clusters compared to PCA. As an added benefit, tentative cluster centers are output which can be used to initialize a subsequent clustering stage.
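The GLF method above is built around the graph Laplacian of a similarity graph over spike waveforms. A minimal sketch of just the standard unnormalized construction L = D - W (the paper's full objective, which also maximizes a weighted variance, is not reproduced here):

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W of a similarity matrix.

    W is symmetric with zero diagonal; D is the diagonal degree
    matrix of row sums. Minimizing x^T L x favors embeddings where
    strongly-connected waveforms stay close together.
    """
    D = np.diag(W.sum(axis=1))
    return D - W

# toy similarity graph over 3 spike waveforms
W = np.array([[0.0, 0.8, 0.1],
              [0.8, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
L = graph_laplacian(W)
```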

  10. Local binary pattern variants-based adaptive texture features analysis for posed and nonposed facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki

    2017-09-01

    Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on the local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects for optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features were found showing high FER rates. Evaluation of the adaptive texture features shows competitive and higher performance than the nonadaptive features and other state-of-the-art approaches, respectively.
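The center-symmetric LBP (CS-LBP) used in the two-stage representation above compares the four center-symmetric neighbor pairs of a 3x3 neighborhood, yielding a 4-bit code. A minimal sketch under that standard definition (the toy patch is illustrative; the paper additionally adapts the neighborhood size, which is omitted here):

```python
import numpy as np

def cs_lbp_code(patch, threshold=0.0):
    """CS-LBP code of a 3x3 patch: compare the four center-symmetric
    neighbor pairs, producing a 4-bit code in [0, 15]."""
    # neighbors in order: N, NE, E, SE, S, SW, W, NW around the center
    n = [patch[0, 1], patch[0, 2], patch[1, 2], patch[2, 2],
         patch[2, 1], patch[2, 0], patch[1, 0], patch[0, 0]]
    code = 0
    for i in range(4):                 # opposite pairs (i, i + 4)
        if n[i] - n[i + 4] > threshold:
            code |= 1 << i
    return code

# bright top edge vs. dark bottom: bits set for the N and NE pairs
patch = np.array([[9, 9, 9],
                  [1, 5, 1],
                  [1, 1, 1]], float)
code = cs_lbp_code(patch)
```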

  11. Automatic sleep staging using empirical mode decomposition, discrete wavelet transform, time-domain, and nonlinear dynamics features of heart rate variability signals.

    PubMed

    Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer

    2013-10-01

    The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter was achieved by using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-test were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. The ANOVA followed by a post hoc Bonferroni was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted features means were statistically similar for a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in separability of linear features between 5- and 0.5-min HRV segments but separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV segments classified by the LD classifier. A combination of linear/nonlinear features from HRV signals is effective in automatic sleep staging. Moreover, time-frequency features are more informative than others. In addition, a separability measure and classification results showed that HRV signal features, especially nonlinear features, extracted from 5-min segments are more discriminative than those from 0.5-min segments in automatic sleep staging. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
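The time-domain HRV features mentioned above typically include quantities such as SDNN, RMSSD, and pNN50. A small sketch of these standard definitions (the paper's exact feature set may differ; the RR values below are toy data):

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Common time-domain HRV features from RR intervals (in ms)."""
    rr = np.asarray(rr_ms, float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                       # average interval
        "sdnn": rr.std(ddof=1),                     # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),       # beat-to-beat variability
        "pnn50": np.mean(np.abs(diff) > 50) * 100,  # % successive diffs > 50 ms
    }

feats = hrv_time_features([800, 810, 790, 850, 820, 805])
```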

  12. Classification of product inspection items using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, H.-W.

    1998-03-01

    Automated processing and classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. This approach involves two main steps: preprocessing and classification. Preprocessing locates individual items and segments ones that touch using a modified watershed algorithm. The second stage involves extraction of features that allow discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper. We use a new nonlinear feature extraction scheme called the maximum representation and discriminating feature (MRDF) extraction method to compute nonlinear features that are used as inputs to a classifier. The MRDF is shown to provide better classification and a better ROC (receiver operating characteristic) curve than other methods.

  13. EOG and EMG: two important switches in automatic sleep stage classification.

    PubMed

    Estrada, E; Nazeran, H; Barragan, J; Burk, J R; Lucas, E A; Behbehani, K

    2006-01-01

    Sleep is a natural periodic state of rest for the body, in which the eyes are usually closed and consciousness is completely or partially lost. In this investigation we used the EOG and EMG signals acquired from 10 patients undergoing overnight polysomnography, with their sleep stages determined by expert sleep specialists based on R&K rules. Differentiation between the Stage 1, Awake and REM stages challenged a well-trained neural network classifier to distinguish between classes when only EEG-derived signal features were used. To meet this challenge and improve the classification rate, extra features extracted from the EOG and EMG signals were fed to the classifier. In this study, two simple feature extraction algorithms were applied to the EOG and EMG signals. The statistics of the results were calculated and displayed in an easy-to-visualize fashion to observe tendencies for each sleep stage. Inclusion of these features shows great promise for improving the classification rate towards the target rate of 100%.

  14. Feature extraction for document text using Latent Dirichlet Allocation

    NASA Astrophysics Data System (ADS)

    Prihatini, P. M.; Suryawan, I. K.; Mandia, IN

    2018-01-01

    Feature extraction is one of the stages in an information retrieval system that is used to extract the unique feature values of a text document. The process of feature extraction can be done by several methods, one of which is Latent Dirichlet Allocation. However, research related to text feature extraction using the Latent Dirichlet Allocation method is rarely found for Indonesian text. Therefore, through this research, text feature extraction is implemented for Indonesian text. The research method consists of data acquisition, text pre-processing, initialization, topic sampling and evaluation. The evaluation is done by comparing the Precision, Recall and F-Measure values of Latent Dirichlet Allocation against Term Frequency-Inverse Document Frequency (TF-IDF) K-Means, which is commonly used for feature extraction. The evaluation results show that the Precision, Recall and F-Measure values of the Latent Dirichlet Allocation method are higher than those of the TF-IDF K-Means method. This shows that the Latent Dirichlet Allocation method is able to extract features and cluster Indonesian text better than the TF-IDF K-Means method.
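The evaluation above compares methods by Precision, Recall and F-Measure. A minimal sketch of those standard metrics for a single query (the document IDs are invented for illustration):

```python
def prf(relevant, retrieved):
    """Precision, recall and F-measure for one query, given the set
    of truly relevant documents and the set retrieved by the system."""
    relevant, retrieved = set(relevant), set(retrieved)
    hits = len(relevant & retrieved)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# 4 relevant documents, 3 retrieved, 2 of them correct
p, r, f = prf(relevant={1, 2, 3, 4}, retrieved={2, 3, 5})
```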

  15. Application of wavelet transformation and adaptive neighborhood based modified backpropagation (ANMBP) for classification of brain cancer

    NASA Astrophysics Data System (ADS)

    Werdiningsih, Indah; Zaman, Badrus; Nuqoba, Barry

    2017-08-01

    This paper presents classification of brain cancer using wavelet transformation and Adaptive Neighborhood Based Modified Backpropagation (ANMBP). The process consists of three stages, namely feature extraction, feature reduction, and classification. Wavelet transformation is used for feature extraction and ANMBP is used for the classification process. The result of feature extraction is a set of feature vectors. Feature reduction used 100 energy values per feature and 10 energy values per feature. The brain cancer classes are normal, alzheimer, glioma, and carcinoma. Based on simulation results, 10 energy values per feature can be used to classify brain cancer correctly. The correct classification rate of the proposed system is 95%. This research demonstrated that wavelet transformation can be used for feature extraction and ANMBP can be used for classification of brain cancer.

  16. Heuristic algorithm for optical character recognition of Arabic script

    NASA Astrophysics Data System (ADS)

    Yarman-Vural, Fatos T.; Atici, A.

    1996-02-01

    In this paper, a heuristic method is developed for segmentation, feature extraction and recognition of the Arabic script. The study is part of a large project for the transcription of the documents in Ottoman Archives. A geometrical and topological feature analysis method is developed for segmentation and feature extraction stages. Chain code transformation is applied to main strokes of the characters which are then classified by the hidden Markov model (HMM) in the recognition stage. Experimental results indicate that the performance of the proposed method is impressive, provided that the thinning process does not yield spurious branches.
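The chain code transformation applied to the main strokes above can be sketched with the standard 8-direction Freeman code; the stroke coordinates below are invented for illustration:

```python
# 8-direction Freeman chain code in image coordinates (y grows down):
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code(points):
    """Chain-code a stroke given as successive 8-connected (x, y)
    pixels; each consecutive pair maps to one direction symbol."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

# an L-shaped stroke: right, right, then down
codes = chain_code([(0, 0), (1, 0), (2, 0), (2, 1)])
```

The resulting symbol sequence is what a recognizer such as an HMM would consume.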

  17. Urinary bladder cancer T-staging from T2-weighted MR images using an optimal biomarker approach

    NASA Astrophysics Data System (ADS)

    Wang, Chuang; Udupa, Jayaram K.; Tong, Yubing; Chen, Jerry; Venigalla, Sriram; Odhner, Dewey; Guzzo, Thomas J.; Christodouleas, John; Torigian, Drew A.

    2018-02-01

    Magnetic resonance imaging (MRI) is often used in clinical practice to stage patients with bladder cancer to help plan treatment. However, qualitative assessment of MR images is prone to inaccuracies, adversely affecting patient outcomes. In this paper, T2-weighted MR image-based quantitative features were extracted from the bladder wall in 65 patients with bladder cancer to classify them into two primary tumor (T) stage groups: group 1 - T stage < T2, with the primary tumor locally confined to the bladder, and group 2 - T stage ≥ T2, with the primary tumor locally extending beyond the bladder. The bladder was divided into 8 sectors in the axial plane, where each sector has a corresponding reference standard T stage that is based on expert radiology qualitative MR image review and histopathologic results. The performance of the classification for correct assignment of T stage grouping was then evaluated at both the patient level and the sector level. Each bladder sector was divided into 3 shells (inner, middle, and outer), and 15,834 features, including intensity features and texture features from local binary pattern and gray-level co-occurrence matrix analysis, were extracted from the 3 shells of each sector. An optimal feature set was selected from all features using an optimal biomarker approach. Nine optimal biomarker features were derived based on texture properties from the middle shell, with an area under the ROC curve (AUC) of 0.813 and 0.806 at the sector and patient level, respectively.
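Among the texture features above, the gray-level co-occurrence matrix (GLCM) is a standard construction: counts of gray-level pairs at a fixed pixel offset, normalized into a joint distribution from which statistics such as contrast are derived. A minimal sketch (the toy image and choice of statistic are illustrative):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).

    img holds integer gray levels in [0, levels); entry (i, j) is the
    probability that level i co-occurs with level j at the offset.
    """
    img = np.asarray(img)
    h, w = img.shape
    M = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img)
# contrast: expected squared gray-level difference of co-occurring pairs
contrast = float(sum(P[i, j] * (i - j) ** 2
                     for i in range(4) for j in range(4)))
```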

  18. Real-Time Detection and Measurement of Eye Features from Color Images

    PubMed Central

    Borza, Diana; Darabant, Adrian Sergiu; Danescu, Radu

    2016-01-01

    The accurate extraction and measurement of eye features is crucial to a variety of domains, including human-computer interaction, biometry, and medical research. This paper presents a fast and accurate method for extracting multiple features around the eyes: the center of the pupil, the iris radius, and the external shape of the eye. These features are extracted using a multistage algorithm. On the first stage the pupil center is localized using a fast circular symmetry detector and the iris radius is computed using radial gradient projections, and on the second stage the external shape of the eye (of the eyelids) is determined through a Monte Carlo sampling framework based on both color and shape information. Extensive experiments performed on a different dataset demonstrate the effectiveness of our approach. In addition, this work provides eye annotation data for a publicly-available database. PMID:27438838

  19. An expert system based on principal component analysis, artificial immune system and fuzzy k-NN for diagnosis of valvular heart diseases.

    PubMed

    Sengur, Abdulkadir

    2008-03-01

    In the last two decades, the use of artificial intelligence methods in medical analysis has been increasing. This is mainly because the effectiveness of classification and detection systems has improved a great deal to help medical experts in diagnosing. In this work, we investigate the use of principal component analysis (PCA), an artificial immune system (AIS) and fuzzy k-NN to distinguish normal from abnormal heart valves from Doppler heart sounds. The proposed heart valve disorder detection system is composed of three stages. The first stage is the pre-processing stage; filtering, normalization and white de-noising are the processes that were used in this stage. Feature extraction is the second stage; during this stage, wavelet packet decomposition was used, and the resulting wavelet entropy values were taken as features. For reducing the complexity of the system, PCA was used for feature reduction. In the classification stage, AIS and fuzzy k-NN were used. To evaluate the performance of the proposed methodology, a comparative study is realized by using a data set containing 215 samples. The validity of the proposed method is measured using the sensitivity and specificity parameters; a 95.9% sensitivity and 96% specificity rate were obtained.
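The wavelet-entropy features described above reduce a signal to the Shannon entropy of its relative energy per decomposition level. A minimal sketch using a plain Haar transform in place of the paper's wavelet packet decomposition (a simplifying assumption, not the authors' implementation):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform -> (approx, detail)."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_entropy(x, levels=3):
    """Shannon entropy (bits) of the relative energy per level."""
    energies = []
    a = np.asarray(x, float)
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append(np.sum(d ** 2))   # detail energy per level
    energies.append(np.sum(a ** 2))       # final approximation energy
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

t = np.linspace(0, 1, 64, endpoint=False)
we = wavelet_entropy(np.sin(2 * np.pi * 4 * t))
```

A low value indicates energy concentrated in few bands (an ordered signal); higher values indicate energy spread across bands.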

  20. Prostate cancer detection: Fusion of cytological and textural features.

    PubMed

    Nguyen, Kien; Jain, Anil K; Sabata, Bikash

    2011-01-01

    A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification.

  21. Prostate cancer detection: Fusion of cytological and textural features

    PubMed Central

    Nguyen, Kien; Jain, Anil K.; Sabata, Bikash

    2011-01-01

    A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) to locate cancer regions in a large digitized tissue biopsy, and (ii) to assign Gleason grades to the regions detected in stage 1. Most previous studies on this topic have primarily addressed the second stage by classifying the preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect the cancer regions. Additionally, conventional image texture features which have been widely used in the literature are also considered. The performance comparison among the proposed cytological textural feature combination method, the texture-based method and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method is able to achieve a sensitivity of 78% on a dataset including six training images (each of which has approximately 4,000×7,000 pixels) and 11 whole-slide test images (each of which has approximately 5,000×23,000 pixels). All images are at 20X magnification. PMID:22811959

  2. Features extraction in anterior and posterior cruciate ligaments analysis.

    PubMed

    Zarychta, P

    2015-12-01

    The main aim of this research is finding the feature vectors of the anterior and posterior cruciate ligaments (ACL and PCL). These feature vectors have to clearly define the ligament structure and make the ligaments easier to diagnose. Extraction of feature vectors is obtained by analysis of both anterior and posterior cruciate ligaments, performed after the extraction process of both ligaments. In the first stage, a region of interest (ROI) including the cruciate ligaments (CL) is outlined in order to reduce the area of analysis. In this case, the fuzzy C-means algorithm with a median modification, which helps to reduce blurred edges, has been implemented. After finding the ROI, the fuzzy connectedness procedure is performed. This procedure permits extraction of the anterior and posterior cruciate ligament structures. In the last stage, on the basis of the extracted structures, 3-dimensional models of the anterior and posterior cruciate ligaments are built and the feature vectors created. This methodology has been implemented in MATLAB and tested on clinical T1-weighted magnetic resonance imaging (MRI) slices of the knee joint. The 3D display is based on the Visualization Toolkit (VTK). Copyright © 2015 Elsevier Ltd. All rights reserved.
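
    As a sketch of the segmentation step, the fuzzy C-means idea with a median pre-smoothing (standing in for the paper's median modification) can be illustrated on a 1-D intensity profile; the data and parameters below are illustrative and not taken from the study:

```python
import statistics

def median_filter(values, k=3):
    """Median-smooth a 1-D intensity sequence to suppress blurred edges."""
    half = k // 2
    return [statistics.median(values[max(0, i - half): i + half + 1])
            for i in range(len(values))]

def fuzzy_c_means(values, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means (c=2 initialisation); returns centers
    and the membership matrix."""
    centers = [min(values), max(values)]  # crude two-cluster initialisation
    u = []
    for _ in range(iters):
        u = []
        for x in values:
            row = []
            for j in range(c):
                dj = abs(x - centers[j]) or 1e-12
                denom = sum((dj / (abs(x - centers[k]) or 1e-12)) ** (2 / (m - 1))
                            for k in range(c))
                row.append(1.0 / denom)
            u.append(row)
        # update centers as fuzzily weighted means
        centers = [sum((u[i][j] ** m) * values[i] for i in range(len(values))) /
                   sum(u[i][j] ** m for i in range(len(values)))
                   for j in range(c)]
    return centers, u

pixels = [10, 12, 11, 90, 88, 95, 13, 9, 92]
smoothed = median_filter(pixels)
centers, memberships = fuzzy_c_means(smoothed)
```

    The smoothed profile separates into a dark and a bright cluster, mimicking how the ROI background and ligament intensities are partitioned.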

  3. Interictal Epileptiform Discharges (IEDs) classification in EEG data of epilepsy patients

    NASA Astrophysics Data System (ADS)

    Puspita, J. W.; Soemarno, G.; Jaya, A. I.; Soewono, E.

    2017-12-01

    Interictal Epileptiform Discharges (IEDs), which consist of spike waves and sharp waves, in the human electroencephalogram (EEG) are characteristic signatures of epilepsy. Spike waves are characterized by a pointed peak with a duration of 20-70 ms, while sharp waves have a duration of 70-200 ms. The purpose of this study was to classify spike waves and sharp waves in EEG data of epilepsy patients using a backpropagation neural network. The proposed method consists of two main stages: a feature extraction stage and a classification stage. In the feature extraction stage, we use the frequency, amplitude and statistical features, such as the mean, standard deviation and median, of each wave. The frequency values of the IEDs are very sensitive to the selection of the wave baseline. The selected baseline must contain all data of the rising and falling slopes of the IEDs; thus, we obtain a feature that appropriately represents the type of IED. The results show that the proposed method achieves the best classification results, with a recognition rate of 93.75%, for the binary sigmoid activation function and a learning rate of 0.1.
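
    The statistical features named above are straightforward to compute. The sketch below assumes a candidate wave has already been cut out at its baseline crossings; the sampling rate of 256 Hz and the sample values are illustrative, not taken from the paper:

```python
import statistics

def wave_features(samples, fs=256):
    """Amplitude and statistical features of one candidate IED wave.

    `samples` is the trace between the wave's rising and falling baseline
    crossings, so its duration (and hence frequency) follows directly from
    the sample count and sampling rate `fs`."""
    duration_ms = 1000.0 * len(samples) / fs
    return {
        "duration_ms": duration_ms,
        "frequency_hz": 1000.0 / duration_ms,       # one cycle per wave
        "amplitude": max(samples) - min(samples),   # peak-to-peak
        "mean": statistics.mean(samples),
        "std": statistics.stdev(samples),
        "median": statistics.median(samples),
    }

# 12 samples at 256 Hz last ~46.9 ms, i.e. within the 20-70 ms spike range.
spike = [0, 2, 8, 20, 35, 20, 8, 2, 0, -1, -2, 0]
feats = wave_features(spike)
is_spike = 20 <= feats["duration_ms"] <= 70
```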

  4. Extraction of latent images from printed media

    NASA Astrophysics Data System (ADS)

    Sergeyev, Vladislav; Fedoseev, Victor

    2015-12-01

    In this paper we propose an automatic technology for the extraction of latent images from printed media such as documents, banknotes, financial securities, etc. This technology includes image processing by an adaptively constructed Gabor filter bank for obtaining feature images, as well as subsequent stages of feature selection, grouping and multicomponent segmentation. The main advantage of the proposed technique is its versatility: it allows latent images produced by different texture variations to be extracted. Experimental results comparing the performance of the method with another known system for latent image extraction are given.
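
    The core of such a filter bank is the Gabor kernel itself. A minimal pure-Python construction of its real part might look as follows; the parameters are illustrative and this is not the adaptive bank-construction procedure described in the paper:

```python
import math

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine
    carrier. Varying (theta, lambd) across kernels yields a filter bank
    that responds to differently oriented/scaled texture variations."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates by theta
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                         / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / lambd + psi))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, sigma=2.0, theta=0.0, lambd=4.0)
```

    Convolving an input image with each kernel in the bank yields the feature images mentioned in the abstract.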

  5. Analysis and automatic identification of sleep stages using higher order spectra.

    PubMed

    Acharya, U Rajendra; Chua, Eric Chern-Pin; Chua, Kuang Chua; Min, Lim Choo; Tamura, Toshiyo

    2010-12-01

    Electroencephalogram (EEG) signals are widely used to study the activity of the brain, such as to determine sleep stages. These EEG signals are nonlinear and non-stationary in nature, which makes it difficult to perform sleep staging by visual interpretation and linear techniques. Thus, we use a nonlinear technique, higher order spectra (HOS), to extract hidden information in the sleep EEG signal. In this study, unique bispectrum and bicoherence plots for the various sleep stages were proposed. These can be used as a visual aid for various diagnostic applications. A number of HOS-based features were extracted from these plots during the various sleep stages (wakefulness, Rapid Eye Movement (REM), Non-REM stages 1-4), and they were found to be statistically significant using an ANOVA test (p < 0.001). These features were fed to a Gaussian mixture model (GMM) classifier for automatic identification. Our results indicate that the proposed system is able to identify sleep stages with an accuracy of 88.7%.
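
    The bispectrum underlying these HOS features can be sketched directly from its definition B(f1, f2) = X(f1) X(f2) X*(f1 + f2). The naive DFT below is for illustration only; a practical estimator would average over many windowed segments of the EEG, as the paper's plots do:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (fine for a short illustration)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def bispectrum(x):
    """Direct bispectrum estimate for one segment: B(f1,f2) = X1 X2 X*(1+2).
    Phase-coupled frequency triples produce large |B| values."""
    X = dft(x)
    n = len(x)
    return [[X[f1] * X[f2] * X[(f1 + f2) % n].conjugate()
             for f2 in range(n // 2)]
            for f1 in range(n // 2)]

n = 16
# Two cosines at bins 2 and 3 plus their phase-coupled sum at bin 5.
x = [math.cos(2 * math.pi * 2 * t / n) + math.cos(2 * math.pi * 3 * t / n)
     + math.cos(2 * math.pi * 5 * t / n) for t in range(n)]
B = bispectrum(x)
```

    |B[2][3]| is large because bins 2, 3 and 5 are all present and phase-coupled, while entries such as B[2][4] stay near zero.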

  6. Bird sound spectrogram decomposition through Non-Negative Matrix Factorization for the acoustic classification of bird species.

    PubMed

    Ludeña-Choez, Jimmy; Quispe-Soncco, Raisa; Gallardo-Antolín, Ascensión

    2017-01-01

    Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC.
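
    The spectrogram decomposition at the heart of NMF_CC and H_CC can be sketched with the classic Lee-Seung multiplicative updates; the tiny non-negative matrix below stands in for a magnitude spectrogram (rows = frequency bins, columns = frames), and the rank and iteration count are illustrative:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=200, seed=0):
    """Lee-Seung multiplicative updates: non-negative V (m x n) ~= W H.
    On spectrograms, the columns of W act as a learned filter bank and the
    rows of H hold the activation coefficients."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    eps = 1e-9
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(r)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, H

def frob_err(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

V = [[1, 0, 2], [0, 3, 0], [2, 0, 4]]   # toy "spectrogram", nonneg rank 2
W, H = nmf(V, r=2)
```

    The multiplicative form keeps W and H non-negative throughout, which is what makes the activations interpretable as additive spectral building blocks.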

  8. Neural Network Target Identification System for False Alarm Reduction

    NASA Technical Reports Server (NTRS)

    Ye, David; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feed-forward back-propagation neural network (NN) is then trained to classify each feature vector and remove false positives. This paper discusses testing of the system's performance and the parameter optimization process that adapts the system to various targets and datasets. The test results show that the system was successful in substantially reducing the false positive rate when tested on a sonar image dataset.

  9. Diabetic Retinopathy Screening by Bright Lesions Extraction from Fundus Images

    NASA Astrophysics Data System (ADS)

    Hanđsková, Veronika; Pavlovičova, Jarmila; Oravec, Miloš; Blaško, Radoslav

    2013-09-01

    Retinal images are nowadays widely used to diagnose many diseases, for example diabetic retinopathy. In this work, we propose an algorithm for a screening application that identifies, in an early phase, patients with diabetic retinopathy, a severe diabetic complication. The application uses the patient's fundus photograph without any additional examination by an ophthalmologist. After this screening identification, other examination methods should be considered and the patient's follow-up by a doctor is necessary. Our application is composed of three principal modules: fundus image preprocessing, feature extraction and feature classification. The image preprocessing module performs luminance normalization, contrast enhancement and optic disk masking. The feature extraction module includes two stages: localization of bright-lesion candidates and extraction of the candidates' features. We selected 16 statistical and structural features. For feature classification, we use a multilayer perceptron (MLP) with one hidden layer, classifying images into two classes. Feature classification efficiency is about 93%.
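
    The preprocessing operations can be sketched on a flattened intensity channel; the simple mean-shift normalization and linear stretch below are generic stand-ins for whatever specific operators the authors used, and the target values are illustrative:

```python
def normalize_luminance(channel, target_mean=128.0):
    """Shift pixel intensities so the image has a common mean luminance,
    clipping to the valid 0-255 range."""
    shift = target_mean - sum(channel) / len(channel)
    return [min(255.0, max(0.0, p + shift)) for p in channel]

def stretch_contrast(channel):
    """Linear contrast stretch of the intensities to the full 0-255 range."""
    lo, hi = min(channel), max(channel)
    span = (hi - lo) or 1.0
    return [255.0 * (p - lo) / span for p in channel]

normed = normalize_luminance([100, 110, 120])
stretched = stretch_contrast([50, 100, 150])
```

    In a full pipeline these would run per channel on the fundus image before the optic disk is masked out.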

  10. Palmprint verification using Lagrangian decomposition and invariant interest points

    NASA Astrophysics Data System (ADS)

    Gupta, P.; Rattani, A.; Kisku, D. R.; Hwang, C. J.; Sing, J. K.

    2011-06-01

    This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. We employ SIFT for feature extraction from palmprint images, where the region of interest (ROI) extracted from the wide palm texture at the preprocessing stage is used for invariant point extraction. Finally, identity is established by finding a permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features; the permutation matrix is used to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and experimental results reveal the effectiveness and robustness of the system.

  11. Automatic stage identification of Drosophila egg chamber based on DAPI images

    PubMed Central

    Jia, Dongyu; Xu, Qiuping; Xie, Qian; Mio, Washington; Deng, Wu-Min

    2016-01-01

    The Drosophila egg chamber, whose development is divided into 14 stages, is a well-established model for developmental biology. However, visual stage determination can be a tedious, subjective and time-consuming task prone to errors. Our study presents an objective, reliable and repeatable automated method for quantifying cell features and classifying egg chamber stages based on DAPI images. The proposed approach is composed of two steps: 1) a feature extraction step and 2) a statistical modeling step. The egg chamber features used are egg chamber size, oocyte size, egg chamber ratio and distribution of follicle cells. Methods for determining the onset of the polytene stage and centripetal migration are also discussed. The statistical model uses linear and ordinal regression to explore the stage-feature relationships and classify egg chamber stages. Combined with machine learning, our method has great potential to enable discovery of hidden developmental mechanisms. PMID:26732176
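
    The statistical modeling step can be illustrated with ordinary least squares on a single feature (the paper additionally uses ordinal regression, which this sketch simplifies away); the chamber-size-to-stage values below are invented for illustration:

```python
def fit_line(x, y):
    """Ordinary least squares y = a*x + b for one feature vs. stage label."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def predict_stage(a, b, feature, lo=1, hi=14):
    """Round the regression output to a valid egg-chamber stage (1-14)."""
    return max(lo, min(hi, round(a * feature + b)))

# Invented training data: chamber size (arbitrary units) vs. known stage.
sizes = [30, 60, 90, 120]
stages = [2, 5, 8, 11]
a, b = fit_line(sizes, stages)
```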

  12. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    PubMed

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to exploit the parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations: the so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and Parallel PCA, respectively.
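
    The idea of estimating principal components without explicitly forming and decomposing the covariance matrix can be sketched for a single component, where the EM iteration reduces to power iteration on the centered data. This is a drastic simplification of the paper's parallel EM-PCA architecture, with invented toy data:

```python
import math

def leading_component(data, iters=100):
    """One-component EM-PCA sketch (equivalent to power iteration on the
    covariance): each pass projects samples onto the current direction
    (E-step) and re-estimates the direction from the projections (M-step),
    so the full covariance matrix is never materialized."""
    d = len(data[0])
    mean = [sum(row[i] for row in data) / len(data) for i in range(d)]
    X = [[row[i] - mean[i] for i in range(d)] for row in data]
    w = [1.0] * d
    for _ in range(iters):
        scores = [sum(xi * wi for xi, wi in zip(row, w)) for row in X]  # E-step
        w = [sum(s * row[i] for s, row in zip(scores, X))               # M-step
             for i in range(d)]
        norm = math.sqrt(sum(v * v for v in w)) or 1.0
        w = [v / norm for v in w]
    return w

# Toy 2-D "face feature" samples spread along the (1, 1) direction.
w = leading_component([[1, 1], [2, 2.1], [3, 2.9], [4, 4]])
```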

  13. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    PubMed

    Zhang, Junming; Wu, Yan

    2018-03-28

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is large, feature selection must be used. Meanwhile, designing handcrafted features is a difficult and time-consuming task that requires the domain knowledge of experienced experts, and results vary when different sets of features are chosen to identify sleep stages. Additionally, many features that we may be unaware of exist, and these features may be important for sleep stage classification. Therefore, a new sleep stage classification system, based on a complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify the sleep stage based on the learned features. Additionally, we also prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods, and that the CCNN obtains a better classification performance and considerably faster convergence speed than a conventional convolutional neural network. Experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.

  14. Recognition of Indian Sign Language in Live Video

    NASA Astrophysics Data System (ADS)

    Singha, Joyeeta; Das, Karen

    2013-05-01

    Sign language recognition has emerged as one of the important areas of research in computer vision. The difficulty faced by researchers is that instances of signs vary with both motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Indian Sign Language is proposed, in which continuous video sequences of the signs are considered. The proposed system comprises three stages: preprocessing, feature extraction and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors are used in the feature extraction stage, and finally an eigenvalue-weighted Euclidean distance is used to recognize the sign. The system deals with bare hands, thus allowing the user to interact with it in a natural way. We considered 24 different alphabets in the video sequences and attained a success rate of 96.25%.
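
    The final recognition step, an eigenvalue-weighted Euclidean distance to stored sign templates, can be sketched as follows; the template vectors and eigenvalues are illustrative, not taken from the paper:

```python
import math

def eigen_weighted_distance(a, b, eigenvalues):
    """Euclidean distance between two eigen-feature vectors, with each
    dimension weighted by its eigenvalue so dominant components count more."""
    return math.sqrt(sum(w * (x - y) ** 2
                         for w, x, y in zip(eigenvalues, a, b)))

def recognize(probe, templates, eigenvalues):
    """Return the sign label whose stored template is nearest to the probe."""
    return min(templates,
               key=lambda lbl: eigen_weighted_distance(
                   probe, templates[lbl], eigenvalues))

templates = {"A": [1.0, 0.0], "B": [0.0, 1.0]}   # illustrative eigen-features
eigenvalues = [2.0, 1.0]
label = recognize([0.9, 0.2], templates, eigenvalues)
```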

  15. Paroxysmal atrial fibrillation prediction based on HRV analysis and non-dominated sorting genetic algorithm III.

    PubMed

    Boon, K H; Khalil-Hani, M; Malarvili, M B

    2018-01-01

    This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals than existing methods, and achieves good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of the stages of pre-processing, HRV feature extraction, and a support vector machine (SVM) model. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. After that, time-domain, frequency-domain and non-linear HRV features are extracted from the pre-processed data in the feature extraction stage. Then, these features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of the various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most of the previous works. This accuracy rate is achieved even with the HRV signal length being reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, another significant result is that the sensitivity rate, which is considered more important than the other performance metrics in this paper, can be improved at the cost of lower specificity. Copyright © 2017 Elsevier B.V. All rights reserved.
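
    Typical time-domain HRV features of the kind extracted in the second stage can be computed from an RR-interval series as below; the exact feature set and parameters used in the paper may differ, and the RR values are invented:

```python
import math

def hrv_time_features(rr_ms):
    """Standard time-domain HRV features from an RR-interval series (ms):
    mean RR, SDNN (overall variability), RMSSD (beat-to-beat variability)
    and pNN50 (fraction of successive differences exceeding 50 ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    mean_rr = sum(rr_ms) / len(rr_ms)
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms)
                     / (len(rr_ms) - 1))
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd, "pnn50": pnn50}

feats = hrv_time_features([800, 810, 790, 860, 800])
```

    In a full system these values, together with frequency-domain and non-linear features, would form the vector fed to the SVM.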

  16. Dietary fibre components and pectin chemical features of peels during ripening in banana and plantain varieties.

    PubMed

    Happi Emaga, Thomas; Robert, Christelle; Ronkart, Sébastien N; Wathelet, Bernard; Paquot, Michel

    2008-07-01

    The effects of the ripeness stage of banana (Musa AAA) and plantain (Musa AAB) peels on neutral detergent fibre, acid detergent fibre, cellulose, hemicelluloses, lignin and pectin contents, and on pectin chemical features, were studied. Plantain peels contained a higher amount of lignin but had a lower hemicellulose content than banana peels. A sequential extraction of pectins showed that acid extraction was the most efficient way to isolate banana peel pectins, whereas an ammonium oxalate extraction was more appropriate for plantain peels. At all stages of maturation, the pectin content in banana peels was higher than in plantain peels. Moreover, the galacturonic acid and methoxy group contents in banana peels were higher than in plantain peels. The average molecular weights of the extracted pectins were in the range of 132.6-573.8 kDa and were not dependent on peel variety, while the stage of maturation did not affect the dietary fibre yields or the composition of the pectic polysaccharides in a consistent manner. This study has shown that banana peels are a potential source of dietary fibres and pectins.

  17. Nonlinear features for product inspection

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1999-03-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data.

  18. Decomposition and extraction: a new framework for visual classification.

    PubMed

    Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng

    2014-08-01

    In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations: one is based on a single-stage network over hand-crafted features, and the other is based on a multistage network that can learn features from raw pixels automatically. Finally, these multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and the experimental results demonstrate the effectiveness of the proposed method.

  19. Multiple-Primitives Hierarchical Classification of Airborne Laser Scanning Data in Urban Areas

    NASA Astrophysics Data System (ADS)

    Ni, H.; Lin, X. G.; Zhang, J. X.

    2017-09-01

    A hierarchical classification method for Airborne Laser Scanning (ALS) data of urban areas is proposed in this paper. This method is composed of three stages, among which three types of primitives are utilized, i.e., smooth surfaces, rough surfaces, and individual points. In the first stage, the input ALS data is divided into smooth surfaces and rough surfaces by employing a step-wise point cloud segmentation method. In the second stage, classification based on smooth surfaces and rough surfaces is performed. Points in the smooth surfaces are first classified into ground and buildings based on semantic rules. Next, features of the rough surfaces are extracted, and points in rough surfaces are classified into vegetation and vehicles based on the derived features and Random Forests (RF). In the third stage, point-based features are extracted for the ground points, and an individual point classification procedure is performed to classify the ground points into bare land, artificial ground and greenbelt. Moreover, the shortcomings of existing studies are analyzed, and experiments show that the proposed method overcomes these shortcomings and handles more types of objects.

  20. Improving EMG based classification of basic hand movements using EMD.

    PubMed

    Sapsanis, Christos; Georgoulas, George; Tzes, Anthony; Lymberopoulos, Dimitrios

    2013-01-01

    This paper presents a pattern recognition approach for the identification of basic hand movements using surface electromyographic (EMG) data. The EMG signal is decomposed using Empirical Mode Decomposition (EMD) into Intrinsic Mode Functions (IMFs) and subsequently a feature extraction stage takes place. Various combinations of feature subsets are tested using a simple linear classifier for the detection task. Our results suggest that the use of EMD can increase the discrimination ability of the conventional feature sets extracted from the raw EMG signal.

  1. Stimulus encoding and feature extraction by multiple sensory neurons.

    PubMed

    Krahe, Rüdiger; Kreiman, Gabriel; Gabbiani, Fabrizio; Koch, Christof; Metzner, Walter

    2002-03-15

    Neighboring cells in topographical sensory maps may transmit similar information to the next higher level of processing. How information transmission by groups of nearby neurons compares with the performance of single cells is a very important question for understanding the functioning of the nervous system. To tackle this problem, we quantified stimulus-encoding and feature extraction performance by pairs of simultaneously recorded electrosensory pyramidal cells in the hindbrain of weakly electric fish. These cells constitute the output neurons of the first central nervous stage of electrosensory processing. Using random amplitude modulations (RAMs) of a mimic of the fish's own electric field within behaviorally relevant frequency bands, we found that pyramidal cells with overlapping receptive fields exhibit strong stimulus-induced correlations. To quantify the encoding of the RAM time course, we estimated the stimuli from simultaneously recorded spike trains and found significant improvements over single spike trains. The quality of stimulus reconstruction, however, was still inferior to the one measured for single primary sensory afferents. In an analysis of feature extraction, we found that spikes of pyramidal cell pairs coinciding within a time window of a few milliseconds performed significantly better at detecting upstrokes and downstrokes of the stimulus compared with isolated spikes and even spike bursts of single cells. Coincident spikes can thus be considered "distributed bursts." Our results suggest that stimulus encoding by primary sensory afferents is transformed into feature extraction at the next processing stage. There, stimulus-induced coincident activity can improve the extraction of behaviorally relevant features from the stimulus.
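
    The coincidence analysis can be sketched as a single sweep over two sorted spike-time lists; the ±3 ms window reflects the "few milliseconds" mentioned in the abstract, and the spike times are invented for illustration:

```python
def coincident_spikes(train_a, train_b, window_ms=3.0):
    """Return spikes of train_a (sorted times, ms) that have a partner in
    train_b (sorted) within +/- window_ms; such coincident spikes act as
    the 'distributed bursts' used for feature extraction."""
    hits = []
    j = 0
    for t in train_a:
        # advance j past spikes of train_b that are too early to match t
        while j < len(train_b) and train_b[j] < t - window_ms:
            j += 1
        if j < len(train_b) and abs(train_b[j] - t) <= window_ms:
            hits.append(t)
    return hits

# Two toy pyramidal-cell spike trains (ms).
coincidences = coincident_spikes([10, 50, 100], [12, 70, 99])
```

    Counting such coincidences around stimulus upstrokes and downstrokes is the kind of measure the feature-extraction analysis compares against isolated spikes and single-cell bursts.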

  2. Nonlinear features for classification and pose estimation of machined parts from single views

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-10-01

    A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations to use are obtained in closed form, and offer significant advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new 2-stage nonlinear MRDF solution, and show it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.

  3. Automatic sleep staging using multi-dimensional feature extraction and multi-kernel fuzzy support vector machine.

    PubMed

    Zhang, Yanjun; Zhang, Xiangmin; Liu, Wenhui; Luo, Yuxi; Yu, Enjia; Zou, Keju; Liu, Xiaoliang

    2014-01-01

    This paper employed clinical polysomnographic (PSG) data, mainly including all-night electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) signals of subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard to realize automatic sleep staging. The authors extracted eighteen different features of EEG, EOG and EMG in the time and frequency domains to construct the feature vectors, according to the existing literature as well as clinical experience. Through self-learning on sleep samples, the linear combination weights and the parameters of the multiple kernels of the fuzzy support vector machine (FSVM) were learned, and the multi-kernel FSVM (MK-FSVM) was constructed. The overall agreement between the experts' scores and the presented results was 82.53%. Compared with previous results, the accuracy of N1 was improved to some extent while the accuracies of the other stages were comparable, which reflects the sleep structure well. The staging algorithm proposed in this paper is transparent and worth further investigation.

  4. DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG.

    PubMed

    Supratak, Akara; Dong, Hao; Wu, Chao; Guo, Yike

    2017-11-01

    This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most of the existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode the temporal information, such as transition rules, which is important for identifying the next sleep stages, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional-long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets, that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved similar overall accuracy and macro F1-score (MASS: 86.2%-81.7, Sleep-EDF: 82.0%-76.9) compared with the state-of-the-art methods (MASS: 85.9%-80.5, Sleep-EDF: 78.9%-73.7) on both data sets. This demonstrated that, without changing the model architecture and the training algorithm, our model could automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets without utilizing any hand-engineered features.

  5. EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.

    PubMed

    Diykh, Mohammed; Li, Yan; Wen, Peng

    2016-11-01

    The electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stage classification mainly depend on the analysis of EEG signals in the time or frequency domain to obtain a high classification accuracy. In this paper, statistical features in the time domain, the structural graph similarity and K-means (SGSKM) are combined to identify six sleep stages using single-channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments; the size of a sub-segment is determined empirically. Secondly, statistical features are extracted, sorted into different sets of features, and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time-domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than four other existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved by using the proposed method.
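
    The statistical-feature stage feeding an unsupervised clusterer can be sketched roughly as below; the structural-graph-similarity step is omitted, and the sub-segment count, the particular statistics, and the synthetic segments are illustrative assumptions only:

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.cluster import KMeans

    def stat_features(segment, n_sub=10):
        """Split an EEG segment into sub-segments (size chosen empirically in
        the paper) and compute simple time-domain statistics per sub-segment."""
        feats = []
        for sub in np.array_split(segment, n_sub):
            feats += [sub.mean(), sub.std(), stats.skew(sub), stats.kurtosis(sub)]
        return np.asarray(feats)

    # synthetic segments whose amplitude differs by "stage"
    rng = np.random.default_rng(0)
    X = np.vstack([stat_features(rng.standard_normal(3000) * (1 + s))
                   for s in range(6) for _ in range(5)])
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
    ```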

  6. New nonlinear features for inspection, robotics, and face recognition

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit

    1999-10-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data. Other applications of these new feature spaces in robotics and face recognition are also noted.

  7. Deep Learning Methods for Underwater Target Feature Extraction and Recognition

    PubMed Central

    Peng, Yuan; Qiu, Mengran; Shi, Jianfei; Liu, Liangliang

    2018-01-01

    The classification and recognition of underwater acoustic signals have always been important research topics in the field of underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used as methods of underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on CNN and ELM is proposed. An automatic feature extraction method for underwater acoustic signals using a deep convolutional network is proposed, with an underwater target recognition classifier based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function mainly relies on a fully connected layer trained by gradient descent; its generalization ability is limited and suboptimal, so an extreme learning machine (ELM) was used in the classification stage. Firstly, the CNN learns deep and robust features, after which the fully connected layers are removed. Then an ELM fed with the CNN features is used as the classifier to conduct the classification. Experiments on an actual data set of civil ships achieved a 93.04% recognition rate; compared with the traditional Mel frequency cepstral coefficient and Hilbert-Huang features, the recognition rate is greatly improved. PMID:29780407
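
    The ELM classification stage described above fits in a few lines: a random hidden layer whose output weights are solved in closed form rather than by gradient descent. This is the standard ELM recipe, sketched here with random vectors standing in for CNN-learned features:

    ```python
    import numpy as np

    class ELM:
        """Minimal extreme learning machine: a random hidden layer with the
        output weights solved in closed form (no gradient descent)."""
        def __init__(self, n_hidden=64, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            n_classes = int(y.max()) + 1
            self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
            H = np.tanh(X @ self.W + self.b)          # random hidden activations
            T = np.eye(n_classes)[y]                  # one-hot targets
            self.beta = np.linalg.pinv(H) @ T         # least-squares output weights
            return self

        def predict(self, X):
            return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

    # random vectors standing in for CNN-learned features of two ship classes
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (50, 8)), rng.normal(3.0, 1.0, (50, 8))])
    y = np.repeat([0, 1], 50)
    acc = (ELM().fit(X, y).predict(X) == y).mean()
    ```

    Because the only trained parameters are the output weights `beta`, training reduces to one pseudoinverse, which is the speed/generalization argument the abstract makes.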

  8. A judicious multiple hypothesis tracker with interacting feature extraction

    NASA Astrophysics Data System (ADS)

    McAnanama, James G.; Kirubarajan, T.

    2009-05-01

    The multiple hypotheses tracker (mht) is recognized as an optimal tracking method due to the enumeration of all possible measurement-to-track associations, which does not involve any approximation in its original formulation. However, its practical implementation is limited by the NP-hard nature of this enumeration. As a result, a number of maintenance techniques such as pruning and merging have been proposed to bound the computational complexity. It is possible to improve the performance of a tracker, mht or not, using feature information (e.g., signal strength, size, type) in addition to kinematic data. However, in most tracking systems, the extraction of features from the raw sensor data is typically independent of the subsequent association and filtering stages. In this paper, a new approach, called the Judicious Multi Hypotheses Tracker (jmht), whereby there is an interaction between feature extraction and the mht, is presented. The measure of the quality of feature extraction is input into measurement-to-track association while the prediction step feeds back the parameters to be used in the next round of feature extraction. The motivation for this forward and backward interaction between feature extraction and tracking is to improve the performance in both steps. This approach allows for a more rational partitioning of the feature space and removes unlikely features from the assignment problem. Simulation results demonstrate the benefits of the proposed approach.

  9. Automated identification of sleep states from EEG signals by means of ensemble empirical mode decomposition and random under sampling boosting.

    PubMed

    Hassan, Ahnaf Rashik; Bhuiyan, Mohammed Imamul Hassan

    2017-03-01

    Automatic sleep staging is essential for alleviating the physicians' burden of analyzing a large volume of data by visual inspection. It is also a precondition for making an automated sleep monitoring system feasible. Further, computerized sleep scoring will expedite large-scale data analysis in sleep research. Nevertheless, most of the existing works on sleep staging are based on either multichannel or multiple physiological signals, which are uncomfortable for the user and hinder the feasibility of an in-home sleep monitoring device. So, a successful and reliable computer-assisted sleep staging scheme is yet to emerge. In this work, we propose a single-channel EEG based algorithm for computerized sleep scoring. In the proposed algorithm, we decompose EEG signal segments using Ensemble Empirical Mode Decomposition (EEMD) and extract various statistical moment based features. The effectiveness of EEMD and the statistical features is investigated. Statistical analysis is performed for feature selection. A newly proposed classification technique, namely random under sampling boosting (RUSBoost), is introduced for sleep stage classification. To the best of the authors' knowledge, this is the first use of EEMD in conjunction with RUSBoost. The proposed feature extraction scheme's performance is investigated for various choices of classification models. The algorithmic performance of our scheme is evaluated against contemporary works in the literature. The performance of the proposed method is comparable to or better than that of the state-of-the-art ones. The proposed algorithm gives 88.07%, 83.49%, 92.66%, 94.23%, and 98.15% accuracy for 6-state to 2-state classification of sleep stages on the Sleep-EDF database. Our experimental outcomes reveal that RUSBoost outperforms other classification models for the feature extraction framework presented in this work. Besides, the algorithm proposed in this work demonstrates high detection accuracy for the sleep states S1 and REM. Statistical moment based features in the EEMD domain distinguish the sleep states successfully and efficaciously. The automated sleep scoring scheme proposed herein can ease the burden on clinicians, contribute to the device implementation of a sleep monitoring system, and benefit sleep research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
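
    Assuming the intrinsic mode functions (IMFs) have already been produced by an EEMD implementation (e.g. the PyEMD package, not assumed here), the statistical-moment feature vector might look like this sketch, with synthetic oscillations standing in for real IMFs:

    ```python
    import numpy as np
    from scipy import stats

    def moment_features(imfs):
        """Statistical moments per IMF (mean, variance, skewness, kurtosis)
        concatenated into a single feature vector."""
        return np.concatenate([[imf.mean(), imf.var(),
                                stats.skew(imf), stats.kurtosis(imf)]
                               for imf in imfs])

    # stand-ins for EEMD output; real IMFs would come from e.g. PyEMD's EEMD
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 30.0, 3000)
    imfs = [np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
            for f in (1.0, 4.0, 10.0)]
    fv = moment_features(imfs)
    ```

    Vectors like `fv`, one per EEG epoch, would then be fed to the RUSBoost classifier.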

  10. Age group classification and gender detection based on forced expiratory spirometry.

    PubMed

    Cosgun, Sema; Ozbek, I Yucel

    2015-08-01

    This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purpose of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models, and detection. In the first stage, features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and the support vector machine (SVM) algorithm. In the final stage, the gender (or age group) of a test subject is estimated by using the trained GMM (or SVM) model. Experiments have been evaluated on a large database of 4571 subjects. The experimental results show that the average correct classification rates of both the GMM and SVM methods based on the FES test are more than 99.3% and 96.8% for gender and age group classification, respectively.
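
    The GMM branch of the detection stage amounts to one mixture model per class plus a maximum-likelihood decision. A minimal sketch, where the two spirometry-style features and their values are synthetic stand-ins rather than the paper's measurements:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # one GMM per class; features are hypothetical (e.g. FVC, peak flow)
    rng = np.random.default_rng(0)
    class_a = rng.normal([4.5, 9.0], 0.4, (100, 2))
    class_b = rng.normal([3.2, 6.5], 0.4, (100, 2))
    models = [GaussianMixture(n_components=2, random_state=0).fit(c)
              for c in (class_a, class_b)]

    def classify(x):
        # pick the class whose mixture gives the higher log-likelihood
        return int(np.argmax([m.score_samples(x[None]) for m in models]))
    ```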

  11. Learning representations for the early detection of sepsis with deep neural networks.

    PubMed

    Kam, Hye Jin; Kim, Ha Young

    2017-10-01

    Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of the regression method with conventional temporal feature extraction. Study group selection adhered to the InSight model. The results of the deep learning-based models and the InSight model were compared. With deep feedforward networks, the areas under the ROC curve (AUC) of the models were 0.887 and 0.915 for the InSight and the new feature sets, respectively. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied, and the AUC improved to 0.929 compared with the existing 0.887 of the InSight model. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction using domain knowledge, (ii) verification of the feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks by using long short-term memory, a neural network architecture that can learn sequential patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.

  13. Random-Forest Classification of High-Resolution Remote Sensing Images and Ndsm Over Urban Areas

    NASA Astrophysics Data System (ADS)

    Sun, X. F.; Lin, X. G.

    2017-09-01

    As an intermediate step between raw remote sensing data and digital urban maps, remote sensing data classification has been a challenging and long-standing research problem in the remote sensing community. In this work, an effective classification method is proposed for classifying high-resolution remote sensing data over urban areas. Starting from high-resolution multi-spectral images and 3D geometry data, our method proceeds in three main stages: feature extraction, classification, and refinement of the classified result. First, we extract color, vegetation index, and texture features from the multi-spectral image, and compute the height, elevation texture, and differential morphological profile (DMP) features from the 3D geometry data. Then, in the classification stage, multiple random forest (RF) classifiers are trained separately and combined to form an RF ensemble that estimates each sample's category probabilities. Finally, the probabilities, along with the feature importance indicator output by the RF ensemble, are used to construct a fully connected conditional random field (FCCRF) graph model, by which the classification results are refined through mean-field based statistical inference. Experiments on the ISPRS Semantic Labeling Contest dataset show that our proposed 3-stage method achieves 86.9% overall accuracy on the test data.
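
    The ensemble step, training separate random forests and averaging their class probabilities, might be sketched as follows; the two feature groups and the synthetic data below are illustrative assumptions standing in for the spectral and 3D-geometry feature sets:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # one RF per feature group; their probability outputs are averaged
    X, y = make_classification(n_samples=300, n_features=12,
                               n_informative=6, random_state=0)
    groups = [slice(0, 6), slice(6, 12)]     # stand-ins for spectral / geometry features
    forests = [RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:, g], y)
               for g in groups]
    proba = np.mean([f.predict_proba(X[:, g]) for f, g in zip(forests, groups)], axis=0)
    acc = (proba.argmax(axis=1) == y).mean()
    ```

    In the paper, these averaged probabilities become the unary potentials of the FCCRF refinement stage.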

  14. SU-E-I-85: Exploring the 18F-Fluorodeoxyglucose PET Characteristics in Staging of Esophageal Squamous Cell Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, C; Yin, Y

    2014-06-01

    Purpose: The aim of this study was to explore the characteristics derived from 18F-fluorodeoxyglucose (18F-FDG) PET images and assess their capacity for staging of esophageal squamous cell carcinoma (ESCC). Methods: 26 patients with newly diagnosed ESCC who underwent an 18F-FDG PET scan were included in this study. Different image-derived indices were considered, including the standardized uptake value (SUV), gross tumor length, texture features, and a shape feature. Taking the histopathologic examination as the gold standard, the capabilities of the extracted indices for staging ESCC were assessed by the Kruskal-Wallis test and the Mann-Whitney test. Specificity and sensitivity for each of the studied parameters were derived using receiver-operating-characteristic curves. Results: 18F-FDG SUVmax and SUVmean showed statistically significant capability for AJCC and TNM staging. Texture features such as ENT and CORR were significant factors for N stages (p=0.040, p=0.029). Both the FDG PET longitudinal length and the shape feature eccentricity (EC) (p≤0.010) provided more powerful stratification of the primary ESCC AJCC and TNM stages than the SUV and texture features. Receiver-operating-characteristic curve analysis showed that tumor textural analysis can identify M stages with higher sensitivity than SUV measurement, but with lower sensitivity for T and N stages. Conclusion: The 18F-FDG image-derived characteristics of SUV, textural features, and shape feature allow for good stratification of AJCC and TNM stages in ESCC patients.
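
    The SUV indices and their ROC assessment can be sketched roughly as below; all uptake values and stage labels are synthetic stand-ins, not data from the study:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def suv_metrics(volume, mask):
        """SUVmax and SUVmean over a segmented tumor region."""
        vals = volume[mask]
        return vals.max(), vals.mean()

    rng = np.random.default_rng(0)
    volume = rng.normal(1.0, 0.1, (8, 8, 8))    # synthetic PET volume
    mask = np.zeros((8, 8, 8), dtype=bool)
    mask[2:5, 2:5, 2:5] = True                  # "tumor" voxels
    volume[mask] += 5.0
    smax, smean = suv_metrics(volume, mask)

    # assess an index's staging power with an ROC curve (synthetic labels)
    stage = np.repeat([0, 1], 20)               # 0 = early, 1 = advanced
    suvmax_vals = 4 + 3 * stage + rng.normal(0, 1, 40)
    auc = roc_auc_score(stage, suvmax_vals)
    ```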

  15. Multi-Stage System for Automatic Target Recognition

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Lu, Thomas T.; Ye, David; Edens, Weston; Johnson, Oliver

    2010-01-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feedforward back-propagation neural network (NN) is then trained to classify each feature vector and to remove false positives. A system parameter optimization process has been developed to adapt to various targets and datasets. The objective was to design an efficient computer vision system that can learn to detect multiple targets in large images with unknown backgrounds. Because the target size is small relative to the image size in this problem, there are many regions of the image that could potentially contain the target. A cursory analysis of every region can be computationally efficient but may yield too many false positives. On the other hand, a detailed analysis of every region can yield better results but may be computationally inefficient. The multi-stage ATR system was designed to achieve an optimal balance between accuracy and computational efficiency by incorporating both models. The detection stage first identifies potential ROIs where the target may be present by performing a fast Fourier-domain OT-MACH filter-based correlation. Because the threshold for this stage is chosen with the goal of detecting all true positives, a number of false positives are also detected as ROIs. 
The verification stage then transforms the regions of interest into feature space and eliminates false positives using an artificial neural network classifier. The multi-stage system allows the detection sensitivity and the identification specificity to be tuned individually in each stage, making it easier to achieve ATR operation optimized for a specific goal. The test results show that the system was successful in substantially reducing the false positive rate when tested on sonar and video image datasets.
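
    The Fourier-domain correlation at the heart of the detection stage can be sketched as a plain matched-filter correlation. A real OT-MACH filter would be synthesized from a set of training images; the single template below is a simplifying assumption:

    ```python
    import numpy as np

    def correlate_fft(image, template):
        """Cross-correlation via the Fourier domain. An OT-MACH filter is
        synthesized from training images; one template stands in for it here."""
        F = np.fft.fft2(image)
        H = np.fft.fft2(template, s=image.shape)
        return np.real(np.fft.ifft2(F * np.conj(H)))

    img = np.zeros((64, 64))
    img[20:25, 30:35] = 1.0                  # planted 5x5 "target"
    tpl = np.ones((5, 5))
    corr = correlate_fft(img, tpl)
    peak = np.unravel_index(np.argmax(corr), corr.shape)   # candidate ROI location
    ```

    Thresholding the correlation plane (rather than taking only the maximum) yields the multiple candidate ROIs passed on to the verification stage.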

  16. K-65-12.8 condensing steam turbine

    NASA Astrophysics Data System (ADS)

    Valamin, A. E.; Kultyshev, A. Yu.; Gol'dberg, A. A.; Sakhnin, Yu. A.; Bilan, V. N.; Stepanov, M. Yu.; Polyaeva, E. N.; Shekhter, M. V.; Shibaev, T. L.

    2016-11-01

    A new condensing steam turbine, the K-65-12.8, is considered, which continues the development of the 50-70 MW steam turbine family with a fresh steam pressure of 12.8 MPa, such as the two-cylinder T-50-12.8 and T-60/65-12.8 turbines. The turbine was developed using a modular design. The design and the main distinctive features of the turbine are described, such as a single two-housing cylinder with a steam flow loop; the extraction from the blading section for regeneration, internal needs, and heating; and the unification of some assemblies with those of serial turbines to shorten manufacturing time. The turbine uses throttling steam distribution; steam from a boiler is supplied to the turbine through a separate valve block consisting of a central shut-off valve and two side control valves. The blading section of the turbine consists of 23 stages: the left flow contains ten stages installed in the inner housing and the right flow contains 13 stages with diaphragms placed in holders installed in the outer housing. The disks of the first 16 stages are forged together with the rotor, and the disks of the remaining stages are mounted. Before the two last stages, uncontrolled steam extraction is performed for the heating of a plant with a heat output of 38-75 GJ/h. The turbine also has five regenerative extraction points for feed water heating and an additional steam extraction to a collector for internal needs with a consumption of up to 10 t/h. The feasibility parameters of the turbine plant are given. The main solutions for the heat flow diagram and the layout of the turbine plant are presented. The main principles and features of the microprocessor electrohydraulic control and protection system are formulated.

  17. Word Recognition: Theoretical Issues and Instructional Hints.

    ERIC Educational Resources Information Center

    Smith, Edward E.; Kleiman, Glenn M.

    Research on adult readers' word recognition skills is used in this paper to develop a general information processing model of reading. Stages of the model include feature extraction, interpretation, lexical access, working memory, and integration. Of those stages, particular attention is given to the units of interpretation, speech recoding and…

  18. Computer-aided screening system for cervical precancerous cells based on field emission scanning electron microscopy and energy dispersive x-ray images and spectra

    NASA Astrophysics Data System (ADS)

    Jusman, Yessi; Ng, Siew-Cheok; Hasikin, Khairunnisa; Kurnia, Rahmadi; Osman, Noor Azuan Bin Abu; Teoh, Kean Hooi

    2016-10-01

    The capability of field emission scanning electron microscopy and energy dispersive x-ray spectroscopy (FE-SEM/EDX) to scan material structures at the microlevel and characterize materials by their elemental properties has inspired this research, which has developed an FE-SEM/EDX-based cervical cancer screening system. The developed computer-aided screening system consists of two parts: automatic feature extraction and classification. For automatic feature extraction, an algorithm was introduced for extracting the discriminant features of FE-SEM/EDX data from cervical cell images and spectra. The system automatically extracted two types of features, based on FE-SEM/EDX images and FE-SEM/EDX spectra. Textural features were extracted from the FE-SEM/EDX image using a gray level co-occurrence matrix technique, while the FE-SEM/EDX spectra features were calculated from peak heights and corrected areas under the peaks. A discriminant analysis technique was employed to predict the cervical precancerous stage in three classes: normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The capability of the developed screening system was tested using 700 FE-SEM/EDX spectra (300 normal, 200 LSIL, and 200 HSIL cases). The accuracy, sensitivity, and specificity were 98.2%, 99.0%, and 98.0%, respectively.
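
    The discriminant-analysis stage might be sketched as follows, with synthetic two-dimensional features standing in for the real peak-height and texture measurements:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # synthetic 2-D stand-ins for spectra/texture features of the three grades
    rng = np.random.default_rng(0)
    centers = [(1.0, 0.2), (2.0, 0.6), (3.0, 1.2)]
    X = np.vstack([rng.normal(c, 0.15, (60, 2)) for c in centers])
    y = np.repeat([0, 1, 2], 60)          # 0 = normal, 1 = LSIL, 2 = HSIL
    lda = LinearDiscriminantAnalysis().fit(X, y)
    acc = lda.score(X, y)
    ```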

  19. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper, the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative 3-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it also estimates the damping exponents. The proposed adaptive filtration method does not include any frequency-domain manipulation; consequently, the time-domain signal is not affected by frequency-domain and inverse transformations.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, D; Aryal, M; Samuels, S

    Purpose: A previous study showed that large sub-volumes of tumor with low blood volume (BV) (poorly perfused) in head-and-neck (HN) cancers are significantly associated with local-regional failure (LRF) after chemoradiation therapy, and could be targeted with intensified radiation doses. This study aimed to develop an automated and scalable model to extract voxel-wise contrast-enhanced temporal features of dynamic contrast-enhanced (DCE) MRI in HN cancers for predicting LRF. Methods: Our model development consists of training and testing stages. The training stage includes preprocessing of individual-voxel DCE curves from tumors for intensity normalization and temporal alignment, temporal feature extraction from the curves, feature selection, and training classifiers. For feature extraction, a multiresolution Haar discrete wavelet transformation is applied to each DCE curve to capture temporal contrast-enhanced features. The wavelet coefficients are selected as feature vectors. Support vector machine classifiers are trained to classify tumor voxels as having either low or high BV, for which a previously established BV threshold of 7.6% is used as ground truth. The model is tested on a new dataset. The voxel-wise DCE curves for training and testing were from 14 and 8 patients, respectively. A posterior probability map of the low-BV class was created to examine the tumor sub-volume classification. Voxel-wise classification accuracy was computed to evaluate the performance of the model. Results: Average classification accuracies were 87.2% for training (10-fold cross-validation) and 82.5% for testing. The lowest and highest accuracies (patient-wise) were 68.7% and 96.4%, respectively. Posterior probability maps of the low-BV class showed that the sub-volumes extracted by our model were similar to those defined by the BV maps, with most misclassifications occurring near the sub-volume boundaries. 
Conclusion: With further validation, this model could be valuable for supporting adaptive clinical trials. The framework could be extended and scaled to extract temporal contrast-enhanced features of DCE-MRI in other tumors. We would like to acknowledge NIH for funding support: UO1 CA183848.
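
    A rough sketch of the wavelet-feature-plus-SVM pipeline, with a hand-rolled Haar transform (a library such as PyWavelets would normally be used) and synthetic enhancement curves in place of real DCE data:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def haar_dwt(x, levels=3):
        """Multiresolution Haar DWT: the final approximation plus all detail
        coefficients as one feature vector (len(x) divisible by 2**levels)."""
        a, details = np.asarray(x, dtype=float), []
        for _ in range(levels):
            a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
            details.append(d)
        return np.concatenate([a] + details[::-1])

    # synthetic enhancement curves: "low BV" voxels enhance more slowly
    rng = np.random.default_rng(0)
    t = np.arange(32)
    slow = np.stack([1 - np.exp(-t / 20) + 0.05 * rng.standard_normal(32)
                     for _ in range(40)])
    fast = np.stack([1 - np.exp(-t / 5) + 0.05 * rng.standard_normal(32)
                     for _ in range(40)])
    X = np.array([haar_dwt(c) for c in np.vstack([slow, fast])])
    y = np.repeat([0, 1], 40)              # 0 = low BV, 1 = high BV
    acc = SVC().fit(X, y).score(X, y)
    ```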

  1. Medical diagnosis of atherosclerosis from Carotid Artery Doppler Signals using principal component analysis (PCA), k-NN based weighting pre-processing and Artificial Immune Recognition System (AIRS).

    PubMed

    Latifoğlu, Fatma; Polat, Kemal; Kara, Sadik; Güneş, Salih

    2008-02-01

    In this study, we proposed a new medical diagnosis system based on principal component analysis (PCA), k-NN based weighting pre-processing, and the Artificial Immune Recognition System (AIRS) for the diagnosis of atherosclerosis from Carotid Artery Doppler signals. The suggested system consists of four stages. First, in the feature extraction stage, we obtained the features related to atherosclerosis using Fast Fourier Transform (FFT) modeling and by calculating the maximum frequency envelope of the sonograms. Second, in the dimensionality reduction stage, the 61 features of atherosclerosis were reduced to 4 features using PCA. Third, in the pre-processing stage, we weighted these 4 features using different values of k in a new weighting scheme based on k-NN based weighting pre-processing. Finally, in the classification stage, the AIRS classifier was used to classify subjects as healthy or as having atherosclerosis. A classification accuracy of 100% was obtained by the proposed system using 10-fold cross-validation. This success shows that the proposed system is a robust and effective system for the diagnosis of atherosclerosis.
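
    The dimensionality-reduction stage (61 extracted features down to 4 principal components) is straightforward to sketch; the feature matrix below is a synthetic stand-in for the FFT-derived Doppler features:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # stand-in matrix: 120 subjects x 61 features, with a few directions
    # carrying most of the variance (values are synthetic)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 61))
    X[:, :4] *= 10.0
    pca = PCA(n_components=4).fit(X)
    Z = pca.transform(X)                  # 4-dimensional representation
    ```

    The 4-column matrix `Z` is what the k-NN based weighting stage would then rescale before classification.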

  2. Composite Wavelet Filters for Enhanced Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Chiang, Jeffrey N.; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2012-01-01

    Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low-resolution sonar and camera videos taken from unmanned vehicles. These sonar images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both sonar and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this paper.

  3. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of an iris recognition pipeline is the iris feature vector construction procedure. The procedure extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from the iris has shown that not all the vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility simply as feature vector instability, without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which makes it possible to investigate each source of instability independently. Following this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all prior-art methods in recognition accuracy on both datasets.
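
    The extraction step, Gabor filtering followed by sign quantization, can be sketched in one dimension; the filter frequencies and window size below are illustrative assumptions, and the fragility-threshold optimization that is the paper's contribution is omitted:

    ```python
    import numpy as np

    def gabor_code(row, freqs=(0.05, 0.1), sigma=8.0):
        """Filter an unwrapped iris row with 1-D complex Gabor kernels and
        quantize the responses to bits by the signs of real/imaginary parts."""
        n = np.arange(-16, 17)
        bits = []
        for f in freqs:
            g = np.exp(-n**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f * n)
            r = np.convolve(row, g, mode="same")
            bits.append((r.real > 0).astype(int))
            bits.append((r.imag > 0).astype(int))
        return np.concatenate(bits)

    rng = np.random.default_rng(0)
    row = rng.standard_normal(128)        # stand-in for unwrapped iris texture
    code = gabor_code(row)
    # a slightly perturbed capture should give a nearby code (small Hamming distance)
    hamming = np.mean(code != gabor_code(row + 0.01 * rng.standard_normal(128)))
    ```

    Bits whose responses sit near zero are exactly the "fragile" ones the paper proposes to threshold away.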

  4. Multi-channel EEG-based sleep stage classification with joint collaborative representation and multiple kernel learning.

    PubMed

    Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui

    2015-10-30

    Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation play a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method usually used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which the JCR and joint sparse representation (JSR) algorithms first fuse and learn the feature representation from multi-channel EEG signals, respectively. The multi-view JCR and JSR features are then integrated and the sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, and JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Efficient video-equipped fire detection approach for automatic fire alarm systems

    NASA Astrophysics Data System (ADS)

    Kang, Myeongsu; Tung, Truong Xuan; Kim, Jong-Myon

    2013-01-01

    This paper proposes an efficient four-stage approach that automatically detects fire using video capabilities. In the first stage, an approximate median method is used to detect video frame regions involving motion. In the second stage, a fuzzy c-means-based clustering algorithm is employed to extract candidate regions of fire from all of the movement-containing regions. In the third stage, a gray level co-occurrence matrix is used to extract texture parameters by tracking red-colored objects in the candidate regions. These texture features are subsequently used as inputs of a back-propagation neural network to distinguish between fire and nonfire. Experimental results indicate that the proposed four-stage approach outperforms other fire detection algorithms in terms of consistently increasing the accuracy of fire detection in both indoor and outdoor test videos.
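    The approximate median method of the first stage keeps a running background estimate that is nudged one grey level toward each new frame, so it converges to the temporal median of the sequence. A small NumPy sketch under that reading (array sizes, threshold and function names are illustrative):

```python
import numpy as np

def approximate_median_update(background, frame):
    """One update step of the approximate median filter: nudge each
    background pixel one grey level toward the current frame."""
    step = np.sign(frame.astype(int) - background.astype(int))
    return (background.astype(int) + step).clip(0, 255).astype(np.uint8)

def motion_mask(background, frame, thresh=25):
    """Pixels differing from the background by more than `thresh`
    are flagged as motion (candidate moving regions)."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

# toy sequence: static grey scene with a bright moving blob
bg = np.full((8, 8), 100, dtype=np.uint8)
frame = bg.copy(); frame[2:4, 2:4] = 200
bg = approximate_median_update(bg, frame)
print(motion_mask(bg, frame).sum())  # 4 blob pixels flagged
```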

  6. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy.

    PubMed

    S K, Somasundaram; P, Alli

    2017-11-09

    Diabetic retinopathy (DR), a retinal vascular disease, is the main complication of diabetes and can lead to blindness. Regular screening for early DR detection is a labor- and resource-intensive task, so automatic, computational detection of DR is an attractive solution. An automatic method is more reliable for determining the presence of an abnormality in fundus images (FI), but the classification process has so far performed poorly. Recently, a few research works have been designed to analyze the texture discrimination capacity of FI to distinguish healthy images. However, the feature extraction (FE) process was not performed well, due to high dimensionality. Therefore, to identify retinal features for DR diagnosis and early detection, a machine learning and ensemble classification method, called Machine Learning Bagging Ensemble Classifier (ML-BEC), is designed. The ML-BEC method comprises two stages. The first stage extracts candidate objects from retinal images (RI). The candidate objects, or features, for DR diagnosis include blood vessels, the optic nerve, neural tissue, the neuroretinal rim, and optic disc size, thickness and variance. These features are initially extracted by applying a machine learning technique called t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE generates a probability distribution over pairs of high-dimensional images, separating them into similar and dissimilar pairs, and then defines a similar probability distribution over the points in a low-dimensional map. It then minimizes the Kullback-Leibler divergence between the two distributions with respect to the locations of the points on the map. The second stage applies ensemble classifiers to the extracted features to provide accurate analysis of digital FI using machine learning. In this stage, automatic detection for a DR screening system using a Bagging Ensemble Classifier (BEC) is investigated. Through its voting process, bagging minimizes the error due to the variance of the base classifier. Using publicly available retinal image databases, our classifier is trained with 25% of the RI. Results show that the ensemble classifier achieves better classification accuracy (CA) than single classification models. Empirical experiments suggest that the machine learning-based ensemble classifier is also efficient at reducing DR classification time (CT).
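    The t-SNE step described above is available off the shelf; a brief scikit-learn sketch on synthetic two-cluster data standing in for high-dimensional retinal feature vectors (the cluster parameters and perplexity are illustrative choices, not from the paper):

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy stand-in for high-dimensional retinal feature vectors: two
# well-separated Gaussian clusters (e.g. "healthy" vs "DR" candidates).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (25, 50)),
               rng.normal(8, 1, (25, 50))])

# t-SNE converts pairwise distances into probabilities and finds a 2-D
# map minimising the KL divergence between the two distributions.
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)
print(emb.shape)  # (50, 2)
```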

  7. Automotive System for Remote Surface Classification.

    PubMed

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-04-01

    In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and principal component analysis and supervised classification are then applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested on a large number of real surfaces in different weather conditions, with an average correct-classification accuracy of 95%. The results demonstrate that the proposed system architecture and statistical methods allow reliable discrimination of various road surfaces in real conditions.
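    The principal component analysis step applied to the radar/sonar feature data can be sketched in a few NumPy lines (the toy feature matrix and the component count are illustrative, not from the paper):

```python
import numpy as np

def pca(X, n_components):
    """Minimal PCA via SVD of the centred data matrix: returns the
    projections of X onto its top principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
# toy feature matrix: 100 echo feature vectors of dimension 12
X = rng.standard_normal((100, 12)) @ rng.standard_normal((12, 12))
Z = pca(X, 3)
print(Z.shape)               # (100, 3)
print(abs(Z.mean()) < 1e-9)  # projections of centred data are zero-mean
```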

  8. Automotive System for Remote Surface Classification

    PubMed Central

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-01-01

    In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and principal component analysis and supervised classification are then applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested on a large number of real surfaces in different weather conditions, with an average correct-classification accuracy of 95%. The results demonstrate that the proposed system architecture and statistical methods allow reliable discrimination of various road surfaces in real conditions. PMID:28368297

  9. Hand pose estimation in depth image using CNN and random forest

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Cao, Zhiguo; Xiao, Yang; Fang, Zhiwen

    2018-03-01

    Thanks to the availability of low-cost depth cameras such as the Microsoft Kinect, 3D hand pose estimation has attracted special research attention in recent years. Due to the large variation in hand viewpoint and the high dimensionality of hand motion, 3D hand pose estimation remains challenging. In this paper we propose a two-stage framework that combines a CNN and a Random Forest to boost the performance of hand pose estimation. First, we use a standard Convolutional Neural Network (CNN) to regress the hand joint locations. Second, a Random Forest refines the joints from the first stage. In the second stage, we propose a pyramid feature which merges the information flow of the CNN. Specifically, we obtain the rough joint locations from the first stage, then rotate the convolutional feature maps (and image). After this, for each joint, we map its location to each feature map (and image), crop features from each feature map (and image) around that location, and finally feed the extracted features to the Random Forest for refinement. Experimentally, we evaluate our proposed method on the ICVL dataset and obtain a mean error of about 11 mm; our method also runs in real time on a desktop.

  10. Can Laws Be a Potential PET Image Texture Analysis Approach for Evaluation of Tumor Heterogeneity and Histopathological Characteristics in NSCLC?

    PubMed

    Karacavus, Seyhan; Yılmaz, Bülent; Tasdemir, Arzu; Kayaaltı, Ömer; Kaya, Eser; İçer, Semra; Ayyıldız, Oguzhan

    2018-04-01

    We investigated the association between textural features obtained from 18F-FDG PET images, metabolic parameters (SUVmax, SUVmean, MTV, TLG), and tumor histopathological characteristics (stage and Ki-67 proliferation index) in non-small cell lung cancer (NSCLC). The FDG-PET images of 67 patients with NSCLC were evaluated. The MATLAB technical computing language was employed to extract 137 features using first order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run length matrix (GLRLM), and Laws' texture filters. Textural features and metabolic parameters were statistically analyzed in terms of discrimination power between tumor stages, and selected features/parameters were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). We showed that one textural feature (gray-level nonuniformity, GLN) obtained using the GLRLM approach and nine textural features using Laws' approach were successful in discriminating all tumor stages, unlike the metabolic parameters. There were significant correlations between the Ki-67 index and some of the textural features computed using Laws' method (r = 0.6, p = 0.013). In terms of automatic classification of tumor stage, the accuracy was approximately 84% with the k-NN classifier (k = 3) and SVM, using the five selected features. Texture analysis of FDG-PET images has the potential to be an objective tool to assess tumor histopathological characteristics. The textural features obtained using Laws' approach could be useful in the discrimination of tumor stage.
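    A gray-level co-occurrence matrix of the kind behind the GLCM features can be built directly; a compact NumPy sketch for one displacement, with contrast and energy as example Haralick-style features (the tiny quantized image and level count are illustrative):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one displacement (dx, dy):
    counts how often grey level i is followed by level j."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Two classical Haralick-style texture features."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()          # a.k.a. angular second moment
    return contrast, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
contrast, energy = glcm_features(P)
print(P.sum())  # 1.0 (normalised)
```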

  11. Stacked sparse autoencoder in hyperspectral data classification using spectral-spatial, higher order statistics and multifractal spectrum features

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu

    2017-11-01

    This paper proposes a novel classification paradigm for hyperspectral image (HSI) using feature-level fusion and deep learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into bilateral filter to smooth HSI, and this strategy can effectively attenuate noise and restore texture information. Meanwhile, high quality spectral-spatial features can be extracted from HSI by taking geometric closeness and photometric similarity among pixels into consideration simultaneously. Second, higher order statistics techniques are firstly introduced into hyperspectral data classification to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectra shapes. To this end, a feature-level fusion is applied to the extracted spectral-spatial features along with higher order statistics and multifractal spectrum features. Finally, stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and then random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.

  12. Cognitive and artificial representations in handwriting recognition

    NASA Astrophysics Data System (ADS)

    Lenaghan, Andrew P.; Malyan, Ron

    1996-03-01

    Both cognitive processes and artificial recognition systems may be characterized by the forms of representation they build and manipulate. This paper looks at how handwriting is represented in current recognition systems and the psychological evidence for its representation in the cognitive processes responsible for reading. Empirical psychological work on feature extraction in early visual processing is surveyed to show that a sound psychological basis for feature extraction exists and to describe the features this approach leads to. We report the first stage in the development of an architecture for a handwriting recognition system that has been strongly influenced by the psychological evidence on the cognitive processes and representations used in early visual processing. This architecture builds a number of parallel low-level feature maps from raw data. These feature maps are thresholded, and a region labeling algorithm is used to generate sets of features. Fuzzy logic is used to quantify the uncertainty in the presence of individual features.

  13. Prognostic Value and Reproducibility of Pretreatment CT Texture Features in Stage III Non-Small Cell Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, David V.; Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas; Tucker, Susan L.

    2014-11-15

    Purpose: To determine whether pretreatment CT texture features can improve patient risk stratification beyond conventional prognostic factors (CPFs) in stage III non-small cell lung cancer (NSCLC). Methods and Materials: We retrospectively reviewed 91 cases with stage III NSCLC treated with definitive chemoradiation therapy. All patients underwent pretreatment diagnostic contrast enhanced computed tomography (CE-CT) followed by 4-dimensional CT (4D-CT) for treatment simulation. We used the average-CT and expiratory (T50-CT) images from the 4D-CT along with the CE-CT for texture extraction. Histogram, gradient, co-occurrence, gray tone difference, and filtration-based techniques were used for texture feature extraction. Penalized Cox regression implementing cross-validation was used for covariate selection and modeling. Models incorporating texture features from the 3 image types and CPFs were compared to models incorporating CPFs alone for overall survival (OS), local-regional control (LRC), and freedom from distant metastases (FFDM). Predictive Kaplan-Meier curves were generated using leave-one-out cross-validation. Patients were stratified based on whether their predicted outcome was above or below the median. Reproducibility of texture features was evaluated using test-retest scans from independent patients and quantified using concordance correlation coefficients (CCC). We compared models incorporating the reproducibility seen on test-retest scans to our original models and determined the classification reproducibility. Results: Models incorporating both texture features and CPFs demonstrated a significant improvement in risk stratification compared to models using CPFs alone for OS (P=.046), LRC (P=.01), and FFDM (P=.005). The average CCCs were 0.89, 0.91, and 0.67 for texture features extracted from the average-CT, T50-CT, and CE-CT, respectively. Incorporating reproducibility within our models yielded 80.4% (±3.7% SD), 78.3% (±4.0% SD), and 78.8% (±3.9% SD) classification reproducibility in terms of OS, LRC, and FFDM, respectively. Conclusions: Pretreatment tumor texture may provide prognostic information beyond that obtained from CPFs. Models incorporating feature reproducibility achieved classification rates of ∼80%. External validation would be required to establish texture as a prognostic factor.

  14. Testing of a Composite Wavelet Filter to Enhance Automated Target Recognition in SONAR

    NASA Technical Reports Server (NTRS)

    Chiang, Jeffrey N.

    2011-01-01

    Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low resolution SONAR and camera videos taken from Unmanned Underwater Vehicles (UUVs). These SONAR images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both SONAR and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this report.

  15. Scattering features for lung cancer detection in fibered confocal fluorescence microscopy images.

    PubMed

    Rakotomamonjy, Alain; Petitjean, Caroline; Salaün, Mathieu; Thiberville, Luc

    2014-06-01

    To assess the feasibility of lung cancer diagnosis using fibered confocal fluorescence microscopy (FCFM) imaging technique and scattering features for pattern recognition. FCFM imaging technique is a new medical imaging technique for which interest has yet to be established for diagnosis. This paper addresses the problem of lung cancer detection using FCFM images and, as a first contribution, assesses the feasibility of computer-aided diagnosis through these images. Towards this aim, we have built a pattern recognition scheme which involves a feature extraction stage and a classification stage. The second contribution relies on the features used for discrimination. Indeed, we have employed the so-called scattering transform for extracting discriminative features, which are robust to small deformations in the images. We have also compared and combined these features with classical yet powerful features like local binary patterns (LBP) and their variants denoted as local quinary patterns (LQP). We show that scattering features yielded better recognition performance than classical features like LBP and their LQP variants for the FCFM image classification problems. Another finding is that LBP-based and scattering-based features provide complementary discriminative information and, in some situations, we empirically establish that performance can be improved when jointly using LBP, LQP and scattering features. In this work we analyze the joint capability of FCFM images and scattering features for lung cancer diagnosis. The proposed method achieves a good recognition rate for such a diagnosis problem. It also performs well when used in conjunction with other features for other classical medical imaging classification problems. Copyright © 2014 Elsevier B.V. All rights reserved.
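    The local binary patterns compared against scattering features assign each pixel an 8-bit code from its 3x3 neighbourhood; the histogram of codes then serves as the texture descriptor. A plain NumPy sketch of the basic (non-uniform, non-rotation-invariant) LBP variant, with names and the toy patch chosen for illustration:

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern: each interior pixel gets an 8-bit
    code, one bit per neighbour that is >= the centre pixel."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

flat = np.full((5, 5), 7)           # textureless patch
codes = lbp_image(flat)
print(np.unique(codes))  # [255]: every neighbour equals the centre
```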

  16. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    PubMed

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
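    The colour moments stacked into the dictionary columns are conventionally the per-channel mean, standard deviation and skewness. A short NumPy sketch under that common definition (the patch and the exact moment set are illustrative; the paper's dictionary also concatenates Gabor texture features):

```python
import numpy as np

def color_moments(img):
    """First three colour moments (mean, std, skewness) per channel,
    stacked into a 9-dimensional feature vector for an RGB patch."""
    feats = []
    for ch in range(img.shape[2]):
        x = img[..., ch].astype(float).ravel()
        mu = x.mean()
        sigma = x.std()
        skew = np.cbrt(((x - mu) ** 3).mean())  # signed cube root
        feats += [mu, sigma, skew]
    return np.array(feats)

patch = np.zeros((4, 4, 3)); patch[..., 0] = 10.0  # pure-red toy patch
f = color_moments(patch)
print(f.shape)     # (9,)
print(f[:3])       # red channel: mean 10, no spread, no skew
```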

  17. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, and multiscale entropy, is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is proven by emotion recognition results.

  18. Comparing the role of shape and texture on staging hepatic fibrosis from medical imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Xuejun; Louie, Ryan; Liu, Brent J.; Gao, Xin; Tan, Xiaomin; Qu, Xianghe; Long, Liling

    2016-03-01

    The purpose of this study is to investigate the role of shape and texture in the classification of hepatic fibrosis, selecting the optimal parameters for a better computer-aided diagnosis (CAD) system. Ten surface shape features are extracted from a standardized profile of the liver, while 15 texture features calculated from the gray level co-occurrence matrix (GLCM) are extracted within an ROI in the liver. Each combination of these input subsets is checked using a support vector machine (SVM) with the leave-one-case-out method to classify fibrosis into two groups: normal or abnormal. Using all 15 texture features the accuracy is 66.83%, while all 10 shape features achieve 85.74%. The irregularity of liver shape can indicate fibrotic grade efficiently, and texture features of CT images are not recommended for use with shape features in the interpretation of cirrhosis.

  19. An accurate sleep stages classification system using a new class of optimally time-frequency localized three-band wavelet filter bank.

    PubMed

    Sharma, Manish; Goyal, Deepanshu; Achuth, P V; Acharya, U Rajendra

    2018-07-01

    Sleep-related disorders diminish quality of life in human beings. Sleep scoring, or sleep staging, is the process of classifying the various sleep stages, which helps to assess the quality of sleep. The identification of sleep stages using electroencephalogram (EEG) signals is an arduous task: just by looking at an EEG signal, one cannot determine the sleep stages precisely, and sleep specialists may make errors in identifying sleep stages by visual inspection. To mitigate erroneous identification and to reduce the burden on doctors, a computer-aided EEG based system can be deployed in hospitals to help identify the sleep stages correctly. Several automated systems based on the analysis of polysomnographic (PSG) signals have been proposed, and a few sleep stage scoring systems using EEG signals have also been proposed. But there is still a need for a robust and accurate portable system developed using a large dataset. In this study, we have developed a new single-channel EEG based sleep-stage identification system using a novel set of wavelet-based features extracted from a large EEG dataset. We employed a novel three-band time-frequency localized (TBTFL) wavelet filter bank (FB). The EEG signals are decomposed using three-level wavelet decomposition, yielding seven sub-bands (SBs). This is followed by the computation of discriminating features, namely log-energy (LE), signal-fractal-dimensions (SFD), and signal-sample-entropy (SSE), from all seven SBs. The extracted features are ranked and fed to the support vector machine (SVM) and other supervised learning classifiers. In this study, we have considered five different classification problems (CPs): two-class (CP-1), three-class (CP-2), four-class (CP-3), five-class (CP-4) and six-class (CP-5). The proposed system yielded accuracies of 98.3%, 93.9%, 92.1%, 91.7%, and 91.5% for CP-1 to CP-5, respectively, using the 10-fold cross validation (CV) technique. Copyright © 2018 Elsevier Ltd. All rights reserved.
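    The sub-band log-energy feature can be illustrated with a standard two-band Haar decomposition standing in for the paper's custom three-band filter bank (so this sketch yields three detail bands plus an approximation rather than the seven SBs reported; names, sizes and the toy epoch are illustrative):

```python
import numpy as np

def haar_step(x):
    """One level of an orthonormal Haar decomposition: returns the
    approximation and detail sub-bands."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def subband_features(x, levels=3):
    """Log-energy of each sub-band after `levels` decompositions."""
    feats = []
    for _ in range(levels):
        x, d = haar_step(x)
        feats.append(np.log(np.sum(d ** 2) + 1e-12))
    feats.append(np.log(np.sum(x ** 2) + 1e-12))
    return np.array(feats)

rng = np.random.default_rng(0)
epoch = rng.standard_normal(3000)        # toy 30 s EEG epoch at 100 Hz
f = subband_features(epoch, levels=3)
print(f.shape)  # (4,): three detail bands + final approximation
```

    Because the Haar transform is orthonormal, the exponentials of these log-energies sum back to the total signal energy, a quick sanity check on the decomposition.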

  20. A Featured-Based Strategy for Stereovision Matching in Sensors with Fish-Eye Lenses for Forest Environments

    PubMed Central

    Herrera, Pedro Javier; Pajares, Gonzalo; Guijarro, Maria; Ruz, José J.; Cruz, Jesús M.; Montes, Fernando

    2009-01-01

    This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two steps. In the first stage, a segmentation process extracts the trunks, which are the regions used as features, and each feature is identified through a set of attributes useful for matching. In the second step the features are matched by applying four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor makes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion. PMID:22303134

  1. A multi-approach feature extractions for iris recognition

    NASA Astrophysics Data System (ADS)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique used to identify individual traits and characteristics. Iris recognition is one of the most reliable biometric methods: as iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, unlike fingerprints, which can be altered by several factors including accidental damage, dry or oily skin, and dust. Although iris recognition has been studied for more than a decade, there are limited commercial products available due to its demanding requirements, such as camera resolution, hardware size, expensive equipment and computational complexity. However, at the present time, technology has overcome these obstacles. Iris recognition can be done through several sequential steps, which include pre-processing, feature extraction, post-processing, and a matching stage. In this paper, we adopted the directional high-low pass filter for feature extraction. A box-counting fractal dimension and iris code have been proposed as feature representations. Our approach has been tested on the CASIA Iris Image database and the results are considered successful.
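    The box-counting fractal dimension proposed here as a feature representation counts occupied boxes at several scales and fits a log-log slope. A minimal NumPy sketch (the scale set and the toy mask are illustrative, not the paper's iris data):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting fractal dimension of a binary mask:
    count occupied s x s boxes for each scale s, then fit the slope of
    log(count) against log(1/s)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        grid = mask[: h // s * s, : w // s * s]
        grid = grid.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(grid.sum())
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope

square = np.ones((64, 64), dtype=bool)   # a filled region is 2-D
print(round(box_counting_dimension(square), 2))  # 2.0
```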

  2. Diagnostic value of sleep stage dissociation as visualized on a 2-dimensional sleep state space in human narcolepsy.

    PubMed

    Olsen, Anders Vinther; Stephansen, Jens; Leary, Eileen; Peppard, Paul E; Sheungshul, Hong; Jennum, Poul Jørgen; Sorensen, Helge; Mignot, Emmanuel

    2017-04-15

    Type 1 narcolepsy (NT1) is characterized by symptoms believed to represent Rapid Eye Movement (REM) sleep stage dissociations, occurrences where features of wake and REM sleep are intermingled, resulting in a mixed state. We hypothesized that sleep stage dissociations can be objectively detected through the analysis of nocturnal Polysomnography (PSG) data, and that those affecting REM sleep can be used as a diagnostic feature for narcolepsy. A Linear Discriminant Analysis (LDA) model using 38 features extracted from EOG, EMG and EEG was used in control subjects to select features differentiating wake, stage N1, N2, N3 and REM sleep. Sleep stage differentiation was next represented in a 2D projection. Features characteristic of sleep stage differences were estimated from the residual sleep stage probability in the 2D space. Using this model we evaluated PSG data from NT1 and non-narcoleptic subjects. An LDA classifier was used to determine the best separation plane. This method replicates the specificity/sensitivity from the training set to the validation set better than many other methods. Eight prominent features could differentiate narcolepsy and controls in the validation dataset. Using a composite measure and a specificity cut-off of 95% in the training dataset, sensitivity was 43%. Specificity/sensitivity was 94%/38% in the validation set. For hypersomnia subjects, specificity/sensitivity was 84%/15%; for treated narcoleptics, it was 94%/10%. Sleep stage dissociation can be used for the diagnosis of narcolepsy. However, the use of some medications and the presence of undiagnosed hypersomnolence patients impact the results. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Human gait recognition by pyramid of HOG feature on silhouette images

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Park, Jeanrok; Man, Hong

    2013-03-01

    As an uncommon biometric modality, human gait recognition has the great advantage of identifying people at a distance without high resolution images. It has attracted much attention in recent years, especially in the fields of computer vision and remote sensing. In this paper, we propose a human gait recognition framework that consists of a reliable background subtraction method, followed by pyramid of Histogram of Gradient (pHOG) feature extraction on the silhouette image, and a Hidden Markov Model (HMM) based classifier. Through background subtraction, the silhouette of the human gait in each frame is extracted and normalized from the raw video sequence. After removing the shadow and noise in each region of interest (ROI), the pHOG feature is computed on the silhouette images. The pHOG features of each gait class are then used to train a corresponding HMM. In the test stage, the pHOG feature is extracted from each test sequence and used to calculate the posterior probability for each trained HMM. Experimental results on the CASIA Gait Dataset B1 demonstrate that our proposed method achieves a very competitive recognition rate.
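    A single cell of a pHOG descriptor is a magnitude-weighted histogram of gradient orientations; the pyramid simply repeats this over successively finer grids and concatenates. A NumPy sketch of one such cell (bin count and the toy silhouette are illustrative):

```python
import numpy as np

def hog_descriptor(img, bins=9):
    """Minimal gradient-orientation histogram over a whole patch
    (a single cell of a full pHOG pyramid): magnitudes are accumulated
    into `bins` unsigned-orientation bins over [0, 180)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-12)

# toy silhouette: vertical step edge -> gradients concentrate in one bin
img = np.zeros((16, 16)); img[:, 8:] = 1.0
h = hog_descriptor(img)
print(h.argmax())  # 0: a purely horizontal gradient falls in the first bin
```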

  4. Line drawing extraction from gray level images by feature integration

    NASA Astrophysics Data System (ADS)

    Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.

    1994-10-01

    We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel thick straight- line segments and circular arcs: this operation serves to both filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist in linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulation so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).

  5. Detrended fluctuation analysis for major depressive disorder.

    PubMed

    Mumtaz, Wajid; Malik, Aamir Saeed; Ali, Syed Saad Azhar; Yasin, Mohd Azhar Mohd; Amin, Hafeezullah

    2015-01-01

    The clinical utility of electroencephalography (EEG)-based diagnostic studies is less clear for major depressive disorder (MDD). In this paper, a novel machine learning (ML) scheme is presented to discriminate between MDD patients and healthy controls. The proposed method inherently involves feature extraction, selection, classification and validation. The EEG data acquisition involved eyes-closed (EC) and eyes-open (EO) conditions. At the feature extraction stage, detrended fluctuation analysis (DFA) was performed on the EEG data to obtain scaling exponents. The DFA was performed to analyze the presence or absence of long-range temporal correlations (LRTC) in the recorded EEG data. The scaling exponents were used as input features to the proposed system. At the feature selection stage, 3 different techniques were used for comparison purposes. A logistic regression (LR) classifier was employed, and the method was validated by 10-fold cross-validation. We observed the effect of 3 different reference montages on the computed features. The results show that the DFA performed better on the LE data compared with the IR and AR data. In addition, under Wilcoxon ranking, the AR performed better than LE and IR. Based on the results, it was concluded that the DFA provides useful information to discriminate MDD patients and, with further validation, can be employed in clinics for the diagnosis of MDD.
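
    The DFA step above can be sketched in a few lines. This is a generic textbook implementation in numpy; the scales and test signal are illustrative, not the paper's EEG settings. White noise should yield a scaling exponent near 0.5, while long-range correlated (1/f) signals approach 1.0.

```python
import numpy as np

def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: slope of log F(n) vs log n."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    fluct = []
    for n in scales:
        rms = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)     # local linear detrend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        fluct.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.standard_normal(4096))
print(alpha)  # close to 0.5 for white noise
```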

  6. Improving the performance of univariate control charts for abnormal detection and classification

    NASA Astrophysics Data System (ADS)

    Yiakopoulos, Christos; Koutsoudaki, Maria; Gryllias, Konstantinos; Antoniadis, Ioannis

    2017-03-01

    Bearing failures in rotating machinery can cause machine breakdown and economic loss if no effective actions are taken on time. Therefore, it is of prime importance to accurately detect the presence of faults, especially at their early stage, to prevent subsequent damage and reduce costly downtime. Machinery fault diagnosis follows a roadmap of data acquisition, feature extraction and diagnostic decision making, in which mechanical vibration fault feature extraction is the foundation and the key to obtaining an accurate diagnostic result. A challenge in this area is the selection of the most sensitive features for various types of fault, especially when the characteristics of failures are difficult to extract. Thus, a plethora of complex data-driven fault diagnosis methods are fed by prominent features, which are extracted and reduced through traditional or modern algorithms. Since most of the available datasets are captured during normal operating conditions, a number of novelty detection methods, able to work when only normal data are available, have been developed over the last decade. In this study, a hybrid method combining univariate control charts and a feature extraction scheme is introduced, focusing on abnormal change detection and classification, under the assumption that measurements under normal operating conditions of the machinery are available. The feature extraction method integrates morphological operators and Morlet wavelets. The effectiveness of the proposed methodology is validated on two different experimental cases with bearing faults, demonstrating that the proposed approach can improve the fault detection and classification performance of conventional control charts.
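
    A univariate Shewhart-style control chart of the kind combined here can be sketched as follows; the 3-sigma limits and synthetic feature values are illustrative assumptions, not the paper's settings. The key point is that the limits are learned from healthy-condition data only.

```python
import numpy as np

def control_limits(baseline, k=3.0):
    """Shewhart-style limits estimated from healthy-condition data only."""
    mu, sigma = baseline.mean(), baseline.std(ddof=1)
    return mu - k * sigma, mu + k * sigma

def flag_abnormal(feature, lcl, ucl):
    """Mark samples falling outside the control limits."""
    return (feature < lcl) | (feature > ucl)

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 500)        # feature under normal operation
lcl, ucl = control_limits(healthy)
# Monitoring data: 50 normal samples followed by 5 faulty (shifted) ones.
test = np.concatenate([rng.normal(0, 1, 50), rng.normal(8, 1, 5)])
alarms = flag_abnormal(test, lcl, ucl)
print(alarms[-5:].all())  # the shifted samples exceed the limits
```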

  7. Discriminative Features Mining for Offline Handwritten Signature Verification

    NASA Astrophysics Data System (ADS)

    Neamah, Karrar; Mohamad, Dzulkifli; Saba, Tanzila; Rehman, Amjad

    2014-03-01

    Signature verification is an active research area in the field of pattern recognition. It is employed to identify a particular person with the help of his/her signature's characteristics, such as pen pressure, loop shape, writing speed and the up-and-down motion of the pen. In the entire process, however, the feature extraction and selection stage is of prime importance, since several signatures have similar strokes, characteristics and sizes. Accordingly, this paper presents a combination of the orientation of the skeleton and the gravity centre point to extract accurate pattern features of signature data in an offline signature verification system. Promising results have proved the success of the integration of the two methods.
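
    The gravity-centre feature mentioned above is simply the centre of mass of the ink pixels. A minimal numpy sketch on a toy binarised signature (the diagonal stroke is an assumption for the demo):

```python
import numpy as np

def gravity_centre(binary):
    """Centre of mass of the ink pixels in a binarised signature image."""
    rows, cols = np.nonzero(binary)
    return rows.mean(), cols.mean()

# Toy "signature": a diagonal stroke.
sig = np.zeros((20, 40), dtype=bool)
for c in range(20):
    sig[c, 2 * c] = True
r, c = gravity_centre(sig)
print(r, c)  # 9.5 19.0
```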

  8. Enhancement of the Feature Extraction Capability in Global Damage Detection Using Wavelet Theory

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Ponnaluru, Gopi Krishna

    2006-01-01

    The main objective of this study is to assess the specific capabilities of the defect energy parameter technique for global damage detection developed by Saleeb and coworkers. Feature extraction is the most important capability in any damage-detection technique. Features are any parameters extracted from the processed measurement data in order to enhance damage detection. The damage feature extraction capability was studied extensively by analyzing various simulation results. The practical significance in structural health monitoring is that detection of small-size defects at early stages is always desirable. The amount of change in the structure's response due to these small defects was determined to show the level of accuracy needed in the experimental methods. Arranging a fine/extensive sensor network to measure the required data offers, in principle, an "unlimited" detection ability, but it is difficult to place an extensive number of sensors on a structure. Therefore, an investigation was conducted using the measurements of a coarse sensor network. White and pink noise, which cover most of the frequency ranges typically encountered in the measuring devices used (e.g., accelerometers, strain gauges), are added to the displacements to investigate the effect of noisy measurements on the detection technique. The noisy displacements and the noisy damage parameter values are used to study signal feature reconstruction using wavelets. The enhancement of the feature extraction capability was successfully achieved by wavelet theory.
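
    As a hedged illustration of wavelet-based feature extraction (using a hand-rolled Haar transform rather than the authors' specific wavelet machinery), per-level detail energies can serve as noise-sensitive features; the signals below are synthetic:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    x = np.asarray(signal, float)
    if len(x) % 2:
        x = x[:-1]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def detail_energy(signal, levels=3):
    """Per-level detail energies: a compact noise/damage-sensitive feature."""
    energies, a = [], np.asarray(signal, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(float(np.sum(d ** 2)))
    return energies

t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.1 * np.random.default_rng(2).standard_normal(512)
print(detail_energy(clean), detail_energy(noisy))
```

    The fine-scale detail energy grows sharply when noise is added, which is the property exploited when studying noisy displacement reconstruction.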

  9. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    This paper proposes an automatic facial emotion recognition algorithm which comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. First, for the 6 emotions considered, the system classifies all training expressions into 6 different classes (one for each emotion) in the training stage. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, finding the fiducial points, and then feeding the features to the trained neural architecture.
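
    A Gabor filter bank sampled at fiducial points can be sketched as follows; the kernel parameters, image and point locations are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    """Real part of a 2-D Gabor filter (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_magnitudes(image, points, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Filter-bank response magnitudes sampled at fiducial points."""
    feats = []
    for th in thetas:
        k = gabor_kernel(theta=th)
        half = k.shape[0] // 2
        for (r, c) in points:
            patch = image[r - half:r + half + 1, c - half:c + half + 1]
            feats.append(abs(float(np.sum(patch * k))))
    return np.array(feats)

img = np.random.default_rng(3).random((64, 64))
feats = gabor_magnitudes(img, [(20, 20), (40, 40)])
print(feats.shape)  # (8,) -- 4 orientations x 2 fiducial points
```

    In the paper these magnitudes are concatenated with the 14 FAPs to form the feature vector fed to the neural network.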

  10. A new feature extraction method and classification of early stage Parkinsonian rats with and without DBS treatment.

    PubMed

    Iravani, B; Towhidkhah, F; Roghani, M

    2014-12-01

    Parkinson's disease (PD) is one of the most common neural disorders worldwide. Different treatments such as medication and deep brain stimulation (DBS) have been proposed to minimize and control Parkinson's symptoms. DBS has been recognized as an effective approach to decrease most movement disorders of PD. In this study, a new method is proposed for feature extraction and separation of treated and untreated Parkinsonian rats. For this purpose, unilateral intrastriatal 6-hydroxydopamine (6-OHDA, 12.5 μg/5 μl of saline-ascorbate)-lesioned rats were treated with DBS. We performed a behavioral experiment and video-tracked the traveled trajectories of the rats. Then, we investigated the effect of deep brain stimulation of the subthalamic nucleus on their behavioral movements. Time, frequency and chaotic features of the traveled trajectories were extracted. These features provide the ability to quantify the behavioral movements of Parkinsonian rats. The results showed that the traveled trajectories of untreated rats were more convoluted, with different time/frequency responses. Compared to the traditional features used before to quantify the animals' behavior, the new features improved classification accuracy up to 80% for untreated and treated rats.

  11. DARHT Multi-intelligence Seismic and Acoustic Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Garrison Nicole; Van Buren, Kendra Lu; Hemez, Francois M.

    The purpose of this report is to document the analysis of seismic and acoustic data collected at the Dual-Axis Radiographic Hydrodynamic Test (DARHT) facility at Los Alamos National Laboratory for robust, multi-intelligence decision making. The data utilized herein are obtained from two tri-axial seismic sensors and three acoustic sensors, resulting in a total of nine data channels. The goal of this analysis is to develop a generalized, automated framework to determine internal operations at DARHT using informative features extracted from measurements collected external to the facility. Our framework involves four components: (1) feature extraction, (2) data fusion, (3) classification, and finally (4) robustness analysis. Two approaches are taken for extracting features from the data. The first of these, generic feature extraction, involves extraction of statistical features from the nine data channels. The second approach, event detection, identifies specific events relevant to traffic entering and leaving the facility as well as explosive activities at DARHT and nearby explosive testing sites. Event detection is completed using a two-stage method: first, signatures in the frequency domain are used to identify outliers, and second, short-duration events of interest are extracted from among these outliers by evaluating the residuals of an autoregressive exogenous time series model. Features extracted from each data set are then fused to perform analysis with a multi-intelligence paradigm, where information from multiple data sets is combined to generate more information than is available through analysis of each independently. The fused feature set is used to train a statistical classifier and predict the state of operations to inform a decision maker. We demonstrate this classification using both generic statistical features and event detection and provide a comparison of the two methods.
Finally, the concept of decision robustness is presented through a preliminary analysis where uncertainty is added to the system through noise in the measurements.
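
The autoregressive-residual event detection step described above can be sketched generically. This is a plain AR model fit by least squares on a synthetic channel, not the exogenous model or thresholds used in the report:

```python
import numpy as np

def ar_residuals(x, order=4):
    """Fit an AR(order) model by least squares; return one-step residuals."""
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

rng = np.random.default_rng(4)
sig = rng.standard_normal(1000)
sig[700] += 12.0                      # short-duration transient "event"
res = ar_residuals(sig)
events = np.argwhere(np.abs(res) > 6 * res.std())
print(700 - 4 in events.ravel())      # residual index is offset by the AR order
```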

  12. Offline signature verification using convolution Siamese network

    NASA Astrophysics Data System (ADS)

    Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin

    2018-04-01

    This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike existing methods, which treat feature extraction and metric learning as two independent stages, we adopt a deep-learning-based framework that combines the two stages and can be trained end-to-end. The experimental results on two public offline databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.
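
    Although the paper's network is trained end-to-end, the metric-learning objective typically paired with Siamese networks, a contrastive loss over pair distances, can be sketched in numpy; the distances and margin below are illustrative assumptions:

```python
import numpy as np

def contrastive_loss(d, same, margin=1.0):
    """Contrastive loss on embedding distances d: pulls genuine pairs
    (same=1) together and pushes forgery pairs (same=0) apart up to a margin."""
    d = np.asarray(d, float)
    same = np.asarray(same, float)
    return float(np.mean(same * d**2 + (1 - same) * np.maximum(margin - d, 0)**2))

# Distances for two genuine pairs and two forgery pairs.
d = np.array([0.1, 0.2, 0.9, 1.5])
same = np.array([1, 1, 0, 0])
print(round(contrastive_loss(d, same), 4))  # 0.015
```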

  13. Retinal Microaneurysms Detection Using Gradient Vector Analysis and Class Imbalance Classification.

    PubMed

    Dai, Baisheng; Wu, Xiangqian; Bu, Wei

    2016-01-01

    Retinal microaneurysms (MAs) are the earliest clinically observable lesions of diabetic retinopathy. Reliable automated MAs detection is thus critical for early diagnosis of diabetic retinopathy. This paper proposes a novel method for automated MAs detection in color fundus images based on gradient vector analysis and class imbalance classification, which is composed of two stages, i.e. candidate MAs extraction and classification. In the first stage, a candidate MAs extraction algorithm is devised by analyzing the gradient field of the image, in which a multi-scale log condition number map is computed based on the gradient vectors for vessel removal, and then the candidate MAs are localized according to the second-order directional derivatives computed in different directions. Due to the complexity of fundus images, besides a small number of true MAs, there is also a large number of non-MAs among the extracted candidates. Classifying the true MAs and the non-MAs is an extremely class-imbalanced classification problem. Therefore, in the second stage, several types of features including geometry, contrast, intensity, edge, texture, region descriptors and other features are extracted from the candidate MAs and a class imbalance classifier, i.e., RUSBoost, is trained for the MAs classification. With the Retinopathy Online Challenge (ROC) criterion, the proposed method achieves an average sensitivity of 0.433 at 1/8, 1/4, 1/2, 1, 2, 4 and 8 false positives per image on the ROC database, which is comparable with the state-of-the-art approaches, and 0.321 on the DiaRetDB1 V2.1 database, which outperforms the state-of-the-art approaches.
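
    The class-imbalance side of the second stage can be illustrated by the random-undersampling step at the heart of RUSBoost (shown here without the boosting loop; the counts and features are synthetic assumptions):

```python
import numpy as np

def random_undersample(X, y, rng):
    """Balance classes by undersampling the majority class --
    the 'RUS' half of RUSBoost, without the boosting loop."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    keep = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, keep])
    return X[idx], y[idx]

rng = np.random.default_rng(5)
X = rng.random((1000, 4))        # candidate features
y = np.zeros(1000, int)
y[:30] = 1                       # 30 true MAs among 1000 candidates
Xb, yb = random_undersample(X, y, rng)
print(len(yb), yb.sum())         # 60 30 -- balanced subset
```

    In full RUSBoost this resampling is repeated inside each boosting round, so no single draw of the majority class dominates the ensemble.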

  14. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has attracted more and more attention. This paper investigates a face recognition system including face detection, feature extraction and face recognition, mainly by researching the related theory and key technology of various preprocessing methods in the face detection process; using the KPCA method, it focuses on the recognition results obtained under different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We use erosion and dilation (the opening and closing operations) and an illumination compensation method to preprocess the face images, and then apply the face recognition method based on kernel principal component analysis; the experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, integrating the kernel method into the PCA algorithm makes the extracted features represent the original image information better, since a nonlinear feature extraction method is used, and a higher recognition rate can be obtained. In the image preprocessing stage, we found that different operations on the images may produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the value of the power of the polynomial kernel function can affect the recognition result.
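
    Kernel PCA itself can be sketched via the centred Gram matrix. This is a generic polynomial-kernel implementation in numpy, not the paper's MATLAB pipeline, and the data are illustrative:

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=2):
    """Kernel PCA with a polynomial kernel, via the centred Gram matrix."""
    K = (X @ X.T + 1.0) ** degree
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # double centring
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # Projections of the training samples onto the leading components.
    return Kc @ vecs / np.sqrt(np.maximum(vals, 1e-12))

X = np.random.default_rng(6).random((50, 8))
Z = kernel_pca(X)
print(Z.shape)  # (50, 2)
```

    The polynomial degree here plays the role of "the power of the polynomial kernel function" whose effect on recognition the abstract mentions.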

  15. Quantitative nuclear histomorphometry predicts oncotype DX risk categories for early stage ER+ breast cancer.

    PubMed

    Whitney, Jon; Corredor, German; Janowczyk, Andrew; Ganesan, Shridar; Doyle, Scott; Tomaszewski, John; Feldman, Michael; Gilmore, Hannah; Madabhushi, Anant

    2018-05-30

    Gene-expression companion diagnostic tests, such as the Oncotype DX test, assess the risk of early stage estrogen receptor-positive (ER+) breast cancers and guide clinicians in the decision of whether or not to use chemotherapy. However, these tests are typically expensive, time consuming, and tissue-destructive. In this paper, we evaluate the ability of computer-extracted nuclear morphology features from routine hematoxylin and eosin (H&E) stained images of 178 early stage ER+ breast cancer patients to predict corresponding risk categories derived using the Oncotype DX test. A total of 216 features corresponding to the nuclear shape and architecture categories were extracted from each of the pathologic images, and four feature selection schemes: Ranksum, Principal Component Analysis with Variable Importance on Projection (PCA-VIP), Maximum-Relevance Minimum-Redundancy Mutual Information Difference (MRMR MID), and Maximum-Relevance Minimum-Redundancy Mutual Information Quotient (MRMR MIQ), were employed to identify the most discriminating features. These features were employed to train 4 machine learning classifiers: Random Forest, Neural Network, Support Vector Machine, and Linear Discriminant Analysis, via 3-fold cross-validation. The four sets of risk categories, and the top Area Under the receiver operating characteristic Curve (AUC) machine classifier performances, were: 1) Low ODx and Low mBR grade vs. High ODx and High mBR grade (Low-Low vs. High-High) (AUC = 0.83), 2) Low ODx vs. High ODx (AUC = 0.72), 3) Low ODx vs. Intermediate and High ODx (AUC = 0.58), and 4) Low and Intermediate ODx vs. High ODx (AUC = 0.65). Trained models were tested on an independent validation set of 53 cases comprising Low and High ODx risk, and demonstrated per-patient accuracies ranging from 75 to 86%.
Our results suggest that computerized image analysis of digitized H&E pathology images of early stage ER+ breast cancer might be able to predict the corresponding Oncotype DX risk categories.

  16. Deep SOMs for automated feature extraction and classification from big data streaming

    NASA Astrophysics Data System (ADS)

    Sakkari, Mohamed; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    In this paper, we propose a deep self-organizing map model (Deep-SOM) for automated feature extraction and learning from streaming big data, benefiting from the Spark framework for real-time stream handling and highly parallel data processing. The deep SOM architecture is based on the notion of abstraction (patterns are automatically extracted from the raw data, from the less to the more abstract). The proposed model consists of three hidden self-organizing layers, an input and an output layer. Each layer is made up of a multitude of SOMs, each map focusing only on a local sub-region of the input image. Each layer then trains on the local information to generate more global information in the higher layer. The proposed Deep-SOM model is unique in terms of its layer architecture and its SOM sampling and learning methods. During the learning stage we use a set of unsupervised SOMs for feature extraction. We validate the effectiveness of our approach on large data sets such as the Leukemia and SRBCT datasets. Comparison results show that the Deep-SOM model performs better than many existing algorithms for image classification.
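
    A single SOM layer of the kind the deep architecture stacks can be sketched in numpy; the grid size, learning rate and neighbourhood width are illustrative assumptions, and no Spark parallelism is shown:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr=0.5, sigma=1.5, seed=7):
    """Minimal self-organizing map: one layer of the kind each
    Deep-SOM level is built from (illustrative, numpy only)."""
    rng = np.random.default_rng(seed)
    w = rng.random((grid[0], grid[1], data.shape[1]))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for _ in range(epochs):
        for x in rng.permutation(data):
            d = np.linalg.norm(w - x, axis=2)
            by, bx = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
            h = np.exp(-((gy - by)**2 + (gx - bx)**2) / (2 * sigma**2))
            w += lr * h[..., None] * (x - w)                # pull toward sample
    return w

data = np.random.default_rng(8).random((200, 3))
w = train_som(data)
print(w.shape)  # (5, 5, 3)
```

    In the deep variant, each map sees only a local sub-region of the input, and its activations feed the maps of the next layer.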

  17. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information, at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.
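
    As a minimal illustration of automated flow-feature extraction, vortical regions can be flagged from the velocity field's vorticity; the solid-body rotation field below is synthetic, and this is a generic diagnostic rather than the detection methods developed in the report:

```python
import numpy as np

def vorticity(u, v, dx=1.0):
    """z-vorticity dv/dx - du/dy on a uniform 2-D grid."""
    dudy = np.gradient(u, dx, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    return dvdx - dudy

# Synthetic solid-body vortex centred in the domain: u = -y, v = x.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
u, v = -y, x
w = vorticity(u, v, dx=2 / 63)
print(round(float(w.mean()), 2))  # 2.0 everywhere for this field
```

    A detector would then threshold such a field (or a Galilean-invariant criterion built from the velocity gradients) to flag candidate vortex cores automatically.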

  18. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2004-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, recirculation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information, at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  19. WE-EF-210-08: BEST IN PHYSICS (IMAGING): 3D Prostate Segmentation in Ultrasound Images Using Patch-Based Anatomical Feature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Rossi, P; Jani, A

    Purpose: Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement, treatment planning, and motion monitoring. As ultrasound images have a relatively low signal-to-noise ratio (SNR), automatic segmentation of the prostate is difficult. However, manual segmentation during biopsy or radiation therapy can be time consuming. We are developing an automated method to address this technical challenge. Methods: The proposed segmentation method consists of two major stages: the training stage and the segmentation stage. During the training stage, patch-based anatomical features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images, and the more informative anatomical features are selected to train the kernel support vector machine (KSVM). During the segmentation stage, the selected anatomical features are extracted from the newly acquired image as the input of the well-trained KSVM, and the output of this trained KSVM is the segmented prostate of this patient. Results: This segmentation technique was validated with a clinical study of 10 patients. The accuracy of our approach was assessed against manual segmentation. The mean volume Dice Overlap Coefficient was 89.7±2.3%, and the average surface distance was 1.52 ± 0.57 mm between our segmentation and the manual segmentation, which indicates that the automatic segmentation method works well and could be used for 3D ultrasound-guided prostate intervention. Conclusion: We have developed a new prostate segmentation approach based on the optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy with manual segmentation (gold standard).
This segmentation technique could be a useful tool for image-guided interventions in prostate-cancer diagnosis and treatment. This research is supported in part by DOD PCRP Award W81XWH-13-1-0269, and National Cancer Institute (NCI) Grant CA114313.
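
The volume Dice overlap coefficient used to report accuracy above is straightforward to compute; a minimal numpy sketch on toy 3-D masks (the masks are synthetic):

```python
import numpy as np

def dice(a, b):
    """Volume Dice overlap coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.zeros((10, 10, 10), bool)
manual = np.zeros((10, 10, 10), bool)
auto[2:8, 2:8, 2:8] = True       # automatic segmentation
manual[3:9, 2:8, 2:8] = True     # manual gold standard, shifted by one slice
print(round(dice(auto, manual), 3))  # 0.833
```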

  20. HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition.

    PubMed

    Lagorce, Xavier; Orchard, Garrick; Galluppi, Francesco; Shi, Bertram E; Benosman, Ryad B

    2017-07-01

    This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time-oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors. Similarly to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model. First-layer feature units operate on groups of pixels, while subsequent-layer feature units operate on the output of lower-level feature units. We report results on a previously published 36-class character recognition task and a four-class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven-class moving face recognition task, achieving 79 percent accuracy.
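
    A time-surface as described above can be sketched directly: keep the latest timestamp per pixel and apply an exponential decay. The event stream, decay constant and grid size below are illustrative assumptions, and polarity and local-neighborhood cropping are omitted:

```python
import numpy as np

def time_surface(events, t_now, tau=50e-3, size=(8, 8)):
    """Exponential time-surface: exp(-(t_now - t_last)/tau) per pixel.
    events: iterable of (t, x, y); pixels with no event decay to 0."""
    last = np.full(size, -np.inf)
    for t, x, y in events:
        last[y, x] = t                          # keep most recent timestamp
    return np.exp(-(t_now - last) / tau)

ev = [(0.00, 1, 1), (0.02, 2, 2), (0.04, 3, 3)]
ts = time_surface(ev, t_now=0.05)
print(round(float(ts[3, 3]), 3), round(float(ts[1, 1]), 3))  # 0.819 0.368
```

    Recent events leave values near 1 and older events fade toward 0, which is the temporal context the hierarchy's feature units match against.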

  1. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve

    NASA Astrophysics Data System (ADS)

    Xu, Lili; Luo, Shuqian

    2010-11-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring, and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on the mathematical-morphology black top hat; feature extraction, to characterize these candidates; and classification, based on a support vector machine (SVM), to validate MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the distinguishing performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.
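
    The ROC comparison can be illustrated with the rank-statistic form of the AUC; the two synthetic score sets below stand in for two candidate feature vectors, one more discriminative than the other:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(9)
labels = np.array([0] * 100 + [1] * 100)
weak = np.concatenate([rng.normal(0, 1, 100), rng.normal(0.5, 1, 100)])
strong = np.concatenate([rng.normal(0, 1, 100), rng.normal(2.0, 1, 100)])
print(auc(weak, labels) < auc(strong, labels))  # True: better separation
```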

  2. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve.

    PubMed

    Xu, Lili; Luo, Shuqian

    2010-01-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring, and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on the mathematical-morphology black top hat; feature extraction, to characterize these candidates; and classification, based on a support vector machine (SVM), to validate MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the distinguishing performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.

  3. Low-contrast underwater living fish recognition using PCANet

    NASA Astrophysics Data System (ADS)

    Sun, Xin; Yang, Jianping; Wang, Changgang; Dong, Junyu; Wang, Xinhua

    2018-04-01

    Quantitative and statistical analysis of ocean creatures is critical to ecological and environmental studies, and living fish recognition is one of the most essential requirements for the fishery industry. However, light attenuation and scattering are present in the underwater environment, which makes underwater images low-contrast and blurry. This paper designs a robust framework for accurate fish recognition. The framework introduces a two-stage PCA Network to extract abstract features from fish images. On a real-world fish recognition dataset, we use a linear SVM classifier and set penalty coefficients to address the class-imbalance issue. Feature visualization results show that our method can avoid feature distortion in the boundary regions of underwater images. Experimental results show that the PCA Network can extract discriminative features and achieve promising recognition accuracy. The framework improves the recognition accuracy of underwater living fishes and can be easily applied to the marine fishery industry.
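
    The first stage of a PCA Network learns its convolution filters as the leading principal components of image patches. A single-stage numpy sketch with illustrative patch and filter sizes (the full PCANet stacks a second stage plus binary hashing and histogramming, which are omitted here):

```python
import numpy as np

def pca_filters(images, k=7, n_filters=4):
    """Learn one PCANet stage: PCA over all k x k mean-removed patches
    gives the stage's convolution filters (single stage only)."""
    patches = []
    for img in images:
        for i in range(img.shape[0] - k + 1):
            for j in range(img.shape[1] - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())     # remove patch mean
    P = np.array(patches)
    _, _, vt = np.linalg.svd(P, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, k, k)  # leading components

imgs = np.random.default_rng(10).random((5, 16, 16))
filters = pca_filters(imgs)
print(filters.shape)  # (4, 7, 7)
```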

  4. Automated system for characterization and classification of malaria-infected stages using light microscopic images of thin blood smears.

    PubMed

    Das, D K; Maiti, A K; Chakraborty, C

    2015-03-01

    In this paper, we propose a comprehensive image characterization and classification framework for malaria-infected stage detection using microscopic images of thin blood smears. The methodology mainly includes microscopic imaging of Leishman-stained blood slides, noise reduction and illumination correction, erythrocyte segmentation, and feature selection followed by machine classification. Amongst three image segmentation algorithms (namely, rule-based, Chan-Vese-based and marker-controlled watershed methods), the marker-controlled watershed technique provides better boundary detection of erythrocytes, especially in overlapping situations. Microscopic features at the intensity, texture and morphology levels are extracted to discriminate infected and noninfected erythrocytes. In order to obtain a subgroup of potential features, feature selection techniques, namely, F-statistic and information gain criteria, are considered here for ranking. Finally, five different classifiers, namely, Naive Bayes, multilayer perceptron neural network, logistic regression, classification and regression tree (CART), and RBF neural network, have been trained and tested on 888 erythrocytes (infected and noninfected) for each feature subset. Performance evaluation of the proposed methodology shows that the multilayer perceptron network provides higher accuracy for malaria-infected erythrocyte recognition and infected stage classification. Results show that the top 90 features ranked by F-statistic (specificity: 98.64%, sensitivity: 100%, PPV: 99.73% and overall accuracy: 96.84%) and the top 60 features ranked by information gain (specificity: 97.29%, sensitivity: 100%, PPV: 99.46% and overall accuracy: 96.73%) provide the best results for malaria-infected stage classification. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
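The two ranking criteria can be illustrated with scikit-learn's `f_classif` and `mutual_info_classif` on synthetic data; the dataset below is invented for illustration and is not the blood-smear feature set:

```python
import numpy as np
from sklearn.feature_selection import f_classif, mutual_info_classif

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)        # infected vs. non-infected (toy labels)
X = rng.normal(size=(300, 5))
X[:, 0] += 2 * y                   # only feature 0 is informative

f_scores, _ = f_classif(X, y)                       # F-statistic criterion
mi_scores = mutual_info_classif(X, y, random_state=0)  # information gain

# Both criteria rank the single informative feature first.
print(int(np.argmax(f_scores)), int(np.argmax(mi_scores)))  # 0 0
```

Sorting all features by either score and keeping the top-k corresponds to the "top 90" / "top 60" subsets evaluated in the paper.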

  5. WE-E-17A-05: Complementary Prognostic Value of CT and 18F-FDG PET Non-Small Cell Lung Cancer Tumor Heterogeneity Features Quantified Through Texture Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desseroit, M; Cheze Le Rest, C; Tixier, F

    2014-06-15

    Purpose: Previous studies have shown that CT or 18F-FDG PET intratumor heterogeneity features computed using texture analysis may have prognostic value in Non-Small Cell Lung Cancer (NSCLC), but these modalities have mostly been investigated separately. The purpose of this study was to evaluate the potential added prognostic value of combining non-enhanced CT and 18F-FDG PET heterogeneity textural features on primary NSCLC tumors. Methods: One hundred patients with non-metastatic NSCLC (stage I–III), treated with surgery and/or (chemo)radiotherapy, who underwent staging 18F-FDG PET/CT imaging, were retrospectively included. Morphological tumor volumes were semi-automatically delineated on non-enhanced CT using 3D Slicer™. Metabolically active tumor volumes (MATV) were automatically delineated on PET using the Fuzzy Locally Adaptive Bayesian (FLAB) method. Intratumoral tissue density and FDG uptake heterogeneities were quantified using texture parameters calculated from co-occurrence, difference, and run-length matrices. In addition to these textural features, first-order histogram-derived metrics were computed on the whole morphological CT tumor volume, as well as on sub-volumes corresponding to fine, medium or coarse textures determined through various levels of LoG-filtering. Association with survival for all extracted features was assessed using Cox regression for both univariate and multivariate analysis. Results: Several PET and CT heterogeneity features were prognostic factors of overall survival in the univariate analysis. CT histogram-derived kurtosis and uniformity, as well as Low Grey-level High Run Emphasis (LGHRE) and PET local entropy, were independent prognostic factors. Combined with stage and MATV, they led to a powerful prognostic model (p<0.0001), with median survival of 49 vs. 12.6 months and a hazard ratio of 3.5. Conclusion: Intratumoral heterogeneity quantified through textural features extracted from both CT and FDG PET images has complementary and independent prognostic value in NSCLC.
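The LoG-based fine/medium/coarse texture decomposition described in the Methods can be sketched as follows; the sigma values and the first-order metrics computed are illustrative assumptions, not the study's settings:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
roi = rng.random((48, 48))  # stand-in for a CT tumor region

# LoG filtering at increasing sigma isolates fine, medium and coarse
# texture scales before first-order histogram metrics are computed.
maps = {s: ndimage.gaussian_laplace(roi, sigma=s) for s in (1.0, 2.0, 4.0)}

def uniformity(m, bins=32):
    """Sum of squared histogram probabilities (a first-order metric)."""
    p = np.histogram(m, bins=bins)[0] / m.size
    return float((p ** 2).sum())

stats = {s: (float(((m - m.mean()) ** 4).mean() / m.var() ** 2),  # kurtosis
             uniformity(m))
         for s, m in maps.items()}
print(sorted(maps))  # [1.0, 2.0, 4.0]
```

Each filtered map would then feed per-scale kurtosis/uniformity values into the survival analysis.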

  6. A Modified Sparse Representation Method for Facial Expression Recognition.

    PubMed

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we carry out research on a facial expression recognition method based on the modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of directly adopting the dictionary from samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimensionality reduction methods, and noise, on a self-built database as well as Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. Simulation results show that the coefficients of the MSRR method contain classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result.
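For reference, the plain OMP step that stOMP accelerates can be sketched in NumPy; the orthonormal toy dictionary below is an illustrative assumption, chosen so that recovery of the sparse code is exact:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Greedy orthogonal matching pursuit over dictionary columns."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(20, 15)))  # orthonormal toy dictionary
x = 3 * Q[:, 7] - 2 * Q[:, 11]                  # 2-sparse signal
code = omp(Q, x, n_nonzero=2)
print(sorted(np.nonzero(code)[0].tolist()))     # [7, 11]
```

stOMP differs by selecting several atoms per iteration (those above a threshold) rather than one, which is what speeds up convergence.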

  7. Automated diagnosis of coronary artery disease based on data mining and fuzzy modeling.

    PubMed

    Tsipouras, Markos G; Exarchos, Themis P; Fotiadis, Dimitrios I; Kotsia, Anna P; Vakalis, Konstantinos V; Naka, Katerina K; Michalis, Lampros K

    2008-07-01

    A fuzzy rule-based decision support system (DSS) is presented for the diagnosis of coronary artery disease (CAD). The system is automatically generated from an initial annotated dataset, using a four-stage methodology: 1) induction of a decision tree from the data; 2) extraction of a set of rules from the decision tree, in disjunctive normal form, and formulation of a crisp model; 3) transformation of the crisp set of rules into a fuzzy model; and 4) optimization of the parameters of the fuzzy model. The dataset used for the DSS generation and evaluation consists of 199 subjects, each one characterized by 19 features, including demographic and history data as well as laboratory examinations. Tenfold cross validation is employed, and the average sensitivity and specificity obtained are 62% and 54%, respectively, using the set of rules extracted from the decision tree (first and second stages), while the average sensitivity and specificity increase to 80% and 65%, respectively, when the fuzzification and optimization stages are used. The system offers several advantages: it is automatically generated, it provides CAD diagnosis based on easily and noninvasively acquired features, and it is able to provide interpretation for the decisions made.
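Stages 1 and 2 (decision-tree induction and rule extraction) can be sketched with scikit-learn; the synthetic data and tree depth below are illustrative stand-ins, not the study's CAD dataset of 199 subjects and 19 features:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in for the annotated CAD dataset (sizes are illustrative).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The induced tree is the stage-1 model; printing it exposes the crisp
# rule structure that stage 2 extracts and stage 3 would fuzzify.
rules = export_text(tree, feature_names=[f"f{i}" for i in range(5)])
print(rules)
```

Each root-to-leaf path in the printed tree is one crisp rule in disjunctive normal form; fuzzification then replaces each threshold test with a membership function whose parameters are optimized in stage 4.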

  8. A Modified Sparse Representation Method for Facial Expression Recognition

    PubMed Central

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we carry out research on a facial expression recognition method based on the modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of directly adopting the dictionary from samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimensionality reduction methods, and noise, on a self-built database as well as Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. Simulation results show that the coefficients of the MSRR method contain classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result. PMID:26880878

  9. Object-based detection of vehicles using combined optical and elevation data

    NASA Astrophysics Data System (ADS)

    Schilling, Hendrik; Bulatov, Dimitri; Middelmann, Wolfgang

    2018-02-01

    The detection of vehicles is an important and challenging topic that is relevant for many applications. In this work, we present a workflow that utilizes optical and elevation data to detect vehicles in remotely sensed urban data. This workflow consists of three consecutive stages: candidate identification, classification, and single vehicle extraction. Unlike most previous approaches, fusion of both data sources is strongly pursued at all stages. While the first stage utilizes the fact that most man-made objects are rectangular in shape, the second and third stages employ machine learning techniques combined with specific features. The stages are designed to handle multiple sensor inputs, which results in a significant improvement. A detailed evaluation shows the benefits of our workflow, which includes hand-tailored features; even in comparison with classification approaches based on Convolutional Neural Networks, which are state of the art in computer vision, we obtain comparable or superior performance (F1 scores of 0.94-0.96).

  10. Automatic staging of bladder cancer on CT urography

    NASA Astrophysics Data System (ADS)

    Garapati, Sankeerth S.; Hadjiiski, Lubomir M.; Cha, Kenny H.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Weizer, Alon; Alva, Ajjai; Paramagul, Chintana; Wei, Jun; Zhou, Chuan

    2016-03-01

    Correct staging of bladder cancer is crucial for deciding on neoadjuvant chemotherapy treatment and for minimizing the risk of under- or over-treatment. Subjectivity and variability of clinicians in utilizing available diagnostic information may lead to inaccuracy in staging bladder cancer. An objective decision support system that merges the information in a predictive model, based on statistical outcomes of previous cases and machine learning, may assist clinicians in making more accurate and consistent staging assessments. In this study, we developed a preliminary method to stage bladder cancer. With IRB approval, 42 bladder cancer cases with CTU scans were collected from patient files. The cases were classified into two classes based on pathological stage T2, which is the clinical decision threshold for neoadjuvant chemotherapy treatment (i.e., for stage >= T2). There were 21 cancers below stage T2 and 21 cancers at stage T2 or above. All 42 lesions were automatically segmented using our auto-initialized cascaded level sets (AI-CALS) method. Morphological features were extracted, then selected and merged by a linear discriminant analysis (LDA) classifier. A leave-one-case-out resampling scheme was used to train and test the classifier on the 42 lesions. The classification accuracy was quantified using the area under the ROC curve (Az). The average training Az was 0.97 and the test Az was 0.85. The classifier consistently selected the lesion volume, a gray-level feature and a contrast feature. This predictive model shows promise for assisting in assessing the bladder cancer stage.
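The leave-one-case-out evaluation of an LDA classifier scored with Az can be sketched as follows; the synthetic 42-case dataset mimics only the 21/21 split and is an illustrative stand-in:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# 42 synthetic "lesions" in two stage classes (21 per class by default).
X, y = make_classification(n_samples=42, n_features=6, random_state=0)

# Leave-one-case-out: each lesion is scored by a model trained on the rest.
scores = cross_val_predict(LinearDiscriminantAnalysis(), X, y,
                           cv=LeaveOneOut(), method="predict_proba")[:, 1]
az = roc_auc_score(y, scores)  # area under the ROC curve (Az)
print(0.0 <= az <= 1.0)  # True
```

The held-out scores from all 42 folds are pooled into a single ROC curve, matching the resampling scheme described above.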

  11. Bearing performance degradation assessment based on time-frequency code features and SOM network

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Han, Yan; Deng, Lei

    2017-04-01

    Bearing performance degradation assessment and prognostics are extremely important in supporting maintenance decisions and guaranteeing the system’s reliability. To achieve this goal, this paper proposes a novel feature extraction method for the degradation assessment and prognostics of bearings. Features of time-frequency codes (TFCs) are extracted from the time-frequency distribution using a hybrid procedure based on short-time Fourier transform (STFT) and non-negative matrix factorization (NMF) theory. An alternative way to design the health indicator is investigated by quantifying the similarity between feature vectors using a self-organizing map (SOM) network. On the basis of this idea, a new health indicator called time-frequency code quantification error (TFCQE) is proposed to assess the performance degradation of the bearing. This indicator is constructed based on the bearing real-time behavior and the SOM model that is previously trained with only the TFC vectors under the normal condition. Vibration signals collected from bearing run-to-failure tests are used to validate the developed method. The comparison results demonstrate the superiority of the proposed TFCQE indicator over many other traditional features in terms of feature quality metrics, incipient degradation identification and achieving accurate prediction. Highlights:
    • Time-frequency codes are extracted to reflect the signals’ characteristics.
    • The SOM network serves as a tool to quantify the similarity between feature vectors.
    • A new health indicator is proposed to demonstrate the whole stage of degradation development.
    • The method is useful for extracting the degradation features and detecting the incipient degradation.
    • The superiority of the proposed method is verified using experimental data.
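The STFT-plus-NMF step underlying the TFC extraction can be sketched like this; the test signal, window length, and component count are illustrative assumptions, not the paper's procedure in detail:

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import NMF

# Synthetic vibration-like signal: two tones plus noise.
fs = 1000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)
     + 0.1 * rng.normal(size=t.size))

_, _, Z = stft(x, fs=fs, nperseg=128)
V = np.abs(Z)  # non-negative time-frequency distribution

# NMF factorizes V ~= W @ H; the columns of W act as learned spectral codes.
model = NMF(n_components=4, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(V)
H = model.components_
print(W.shape[1], H.shape[0])  # 4 4
```

A SOM trained only on healthy-condition code vectors would then yield the TFCQE as the quantization error of new vectors against that map.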

  12. The effects of TIS and MI on the texture features in ultrasonic fatty liver images

    NASA Astrophysics Data System (ADS)

    Zhao, Yuan; Cheng, Xinyao; Ding, Mingyue

    2017-03-01

    Nonalcoholic fatty liver disease (NAFLD) is prevalent and now has a worldwide distribution. Although ultrasound imaging is the common method for diagnosing fatty liver, it is not able to detect NAFLD in its early stage and is limited by the diagnostic instruments and other factors. B-scan image feature extraction of fatty liver can assist doctors in analyzing the patient's situation and enhance the efficiency and accuracy of clinical diagnoses. However, some uncertain factors in ultrasonic diagnoses are often ignored during feature extraction. In this study, a nonalcoholic fatty liver rabbit model was established and its liver ultrasound images were collected under different thermal index of soft tissue (TIS) and mechanical index (MI) settings. Texture features were then calculated based on the gray level co-occurrence matrix (GLCM), and the impacts of TIS and MI on these features were analyzed and discussed. Furthermore, the receiver operating characteristic (ROC) curve was used to evaluate whether each feature was effective for given TIS and MI values. The results showed that TIS and MI do affect the features extracted from the healthy liver, while the texture features of fatty liver are relatively stable. In addition, TIS set to 0.3 and MI equal to 0.9 might be a better choice when using a computer aided diagnosis (CAD) method for fatty liver recognition.

  13. Face recognition system using multiple face model of hybrid Fourier feature under uncontrolled illumination variation.

    PubMed

    Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo

    2011-04-01

    The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction by complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the Face Recognition Grand Challenge (FRGC) experimental protocols; FRGC is a large publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and time elapse.

  14. Spectral feature extraction of EEG signals and pattern recognition during mental tasks of 2-D cursor movements for BCI using SVM and ANN.

    PubMed

    Bascil, M Serdar; Tesneli, Ahmet Y; Temurtas, Feyzullah

    2016-09-01

    Brain computer interface (BCI) is a new communication channel between man and machine. It identifies mental task patterns stored in the electroencephalogram (EEG): it extracts brain electrical activities recorded by EEG and transforms them into machine control commands. The main goal of BCI is to make assistive environmental devices, such as computers, available to paralyzed people and to make their lives easier. This study deals with feature extraction and mental task pattern recognition for 2-D cursor control from EEG, as an offline analysis approach. The hemispherical power density changes are computed and compared on alpha-beta frequency bands with only mental imagination of cursor movements. First of all, power spectral density (PSD) features of EEG signals are extracted, and the high dimensional data are reduced by principal component analysis (PCA) and independent component analysis (ICA), which are statistical algorithms. In the last stage, all features are classified with two types of support vector machine (SVM), linear and least squares (LS-SVM), and three different artificial neural network (ANN) structures, learning vector quantization (LVQ), multilayer neural network (MLNN) and probabilistic neural network (PNN), and mental task patterns are successfully identified via the k-fold cross validation technique.
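PSD feature extraction over the alpha and beta bands can be sketched with Welch's method; the sampling rate and the toy signal are illustrative assumptions, not the study's recordings:

```python
import numpy as np
from scipy.signal import welch

fs = 256                                  # assumed EEG sampling rate
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)  # 10 Hz (alpha) tone + noise

f, psd = welch(eeg, fs=fs, nperseg=256)

def band_power(f, psd, lo, hi):
    """Integrate the PSD over [lo, hi) Hz."""
    m = (f >= lo) & (f < hi)
    return float(psd[m].sum() * (f[1] - f[0]))

alpha = band_power(f, psd, 8, 13)
beta = band_power(f, psd, 13, 30)
print(alpha > beta)  # the 10 Hz component dominates -> True
```

Per-channel band powers like these form the high-dimensional feature vectors that PCA/ICA then reduce before classification.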

  15. Automated detection of lung nodules with three-dimensional convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Pérez, Gustavo; Arbeláez, Pablo

    2017-11-01

    Lung cancer is the cancer type with the highest mortality rate worldwide. It has been shown that early detection with computed tomography (CT) scans can reduce deaths caused by this disease. Manual detection of cancer nodules is costly and time-consuming. We present a general framework for the detection of nodules in lung CT images. Our method consists of pre-processing a patient's CT with filtering, and extracting the lungs from the entire volume using a previously calculated mask for each patient. From the extracted lungs, we perform a candidate generation stage using morphological operations, followed by the training of a three-dimensional convolutional neural network for feature representation and classification of the extracted candidates for false positive reduction. We perform experiments on the publicly available LIDC-IDRI dataset. Our candidate extraction approach is effective at producing precise candidates, with a recall of 99.6%. In addition, the false positive reduction stage successfully classifies candidates and increases precision by a factor of 7.
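A minimal 3D convolutional classifier for candidate cubes can be sketched in PyTorch; the layer sizes and the 32³ input cubes are illustrative assumptions, not the authors' architecture:

```python
import torch
from torch import nn

# Tiny 3D CNN for candidate classification (nodule vs. false positive).
net = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(16, 2),  # two output logits
)
logits = net(torch.randn(4, 1, 32, 32, 32))  # a batch of 4 candidate cubes
print(tuple(logits.shape))  # (4, 2)
```

Each morphologically generated candidate would be cropped into such a cube and scored, with low-scoring candidates discarded in the false positive reduction stage.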

  16. Recognition and classification of colon cells applying the ensemble of classifiers.

    PubMed

    Kruk, M; Osowski, S; Koktysz, R

    2009-02-01

    The paper presents the application of an ensemble of classifiers for the recognition of colon cells on the basis of microscope colon images. The task includes: segmentation of the individual cells from the image using morphological operations, preprocessing stages leading to the extraction of features, selection of the most important features, and a classification stage applying classifiers arranged in the form of an ensemble. The paper presents and discusses the results concerning the recognition of the four most important colon cell types: eosinophilic granulocyte, neutrophilic granulocyte, lymphocyte and plasmocyte. The proposed system is able to recognize the cells with accuracy comparable to that of a human expert (around 5% discrepancy between the two results).
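An ensemble arrangement of classifiers can be sketched with scikit-learn's VotingClassifier; the base learners and the synthetic four-class data below are illustrative stand-ins, not the paper's actual ensemble:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Four-class toy problem standing in for the four colon cell types.
X, y = make_classification(n_samples=200, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)

# Majority-vote ensemble over three heterogeneous base classifiers.
ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
    ("tree", DecisionTreeClassifier(random_state=0)),
]).fit(X, y)
print(ensemble.score(X, y) > 0.5)  # easily fit toy training data -> True
```

The selected morphological/texture/intensity features of each segmented cell would serve as `X`, with the ensemble's majority vote giving the cell type.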

  17. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. 
First, the increased importance of the second order statistics in analyzing high dimensional data is recognized, and by investigating the characteristics of such data, the reason why the second order statistics must be taken into account is suggested. Given this importance, there is a need to represent the second order statistics, and a method to visualize statistics using a color code is proposed. By representing statistics using color coding, one can easily extract and compare the first and second order statistics.

  18. Paroxysmal atrial fibrillation prediction method with shorter HRV sequences.

    PubMed

    Boon, K H; Khalil-Hani, M; Malarvili, M B; Sia, C W

    2016-10-01

    This paper proposes a method that predicts the onset of paroxysmal atrial fibrillation (PAF), using heart rate variability (HRV) segments that are shorter than those applied in existing methods, while maintaining good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We investigate the effect of HRV features extracted from different lengths of HRV segments prior to PAF onset with the proposed PAF prediction method. The pre-processing stage of the predictor includes QRS detection, HRV quantification and ectopic beat correction. Time-domain, frequency-domain, non-linear and bispectrum features are then extracted from the quantified HRV. In the feature selection, the HRV feature set and classifier parameters are optimized simultaneously using an optimization procedure based on a genetic algorithm (GA). Both the full feature set and a statistically significant feature subset are optimized by the GA. For the statistically significant feature subset, the Mann-Whitney U test is used to filter out features that do not pass the statistical test at the 20% significance level. The final stage of our predictor is the classifier, which is based on a support vector machine (SVM). A 10-fold cross-validation is applied in performance evaluation, and the proposed method achieves 79.3% prediction accuracy using 15-minute HRV segments. This accuracy is comparable to that achieved by existing methods that use 30-minute HRV segments, most of which achieve accuracy of around 80%. More importantly, our method significantly outperforms those that applied segments shorter than 30 minutes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
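The Mann-Whitney U pre-filtering step at the 20% significance level can be sketched as follows; the synthetic HRV feature matrix and the single informative feature are illustrative assumptions:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100)        # PAF vs. non-PAF segments (toy labels)
X = rng.normal(size=(100, 6))      # toy HRV feature matrix
X[:, 0] += 1.5 * y                 # one genuinely separating feature

# Keep features whose two-class distributions differ at the 20% level.
keep = [j for j in range(X.shape[1])
        if mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue < 0.20]
print(0 in keep)  # the informative feature always survives -> True
```

The surviving subset would then be passed to the GA, which jointly tunes the feature selection mask and the SVM parameters.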

  19. Radiomics Evaluation of Histological Heterogeneity Using Multiscale Textures Derived From 3D Wavelet Transformation of Multispectral Images.

    PubMed

    Chaddad, Ahmad; Daniel, Paul; Niazi, Tamim

    2018-01-01

    Colorectal cancer (CRC) is markedly heterogeneous and develops progressively toward malignancy through several stages which include stroma (ST), benign hyperplasia (BH), intraepithelial neoplasia (IN) or precursor cancerous lesion, and carcinoma (CA). Identification of the malignancy stage of CRC pathology tissues (PT) allows the most appropriate therapeutic intervention. This study investigates multiscale texture features extracted from CRC pathology sections using a 3D wavelet transform (3D-WT) filter. Multiscale features were extracted from digital whole slide images of 39 patients that were segmented in a pre-processing step using an active contour model. The capacity of multiscale texture to compare and classify between PTs was investigated using the ANOVA significance test and random forest classifier models, respectively. Twelve significant features derived from the multiscale texture (i.e., variance, entropy, and energy) were found to discriminate between CRC grades at a significance value of p < 0.01 after correction. Combining multiscale texture features led to a better predictive capacity compared to prediction models based on individual-scale features, with an average (±SD) classification accuracy of 93.33 (±3.52)%, sensitivity of 88.33 (±4.12)%, and specificity of 96.89 (±3.88)%. Entropy was found to be the best classifier feature across all the PT grades, with average area under the curve (AUC) values of 91.17, 94.21, 97.70, and 100% for ST, BH, IN, and CA, respectively. Our results suggest that multiscale texture features based on 3D-WT are sensitive enough to discriminate between CRC grades, with the entropy feature being the best predictor of pathology grade.
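A one-level 3D wavelet decomposition with per-sub-band texture descriptors can be sketched with PyWavelets; the random volume, wavelet choice, and descriptor set are illustrative assumptions:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
volume = rng.random((16, 16, 16))  # stand-in for a segmented tissue volume

# One level of 3D DWT yields 8 sub-bands keyed 'aaa' ... 'ddd'
# (approximation/detail along each axis).
coeffs = pywt.dwtn(volume, wavelet="haar")
print(sorted(coeffs))  # ['aaa', 'aad', 'ada', 'add', 'daa', 'dad', 'dda', 'ddd']

def entropy(c, bins=32):
    """Histogram entropy of one sub-band's coefficients."""
    p, _ = np.histogram(c, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

# Variance, energy, entropy per sub-band, echoing the descriptors above.
features = {k: (float(c.var()), float((c ** 2).sum()), entropy(c))
            for k, c in coeffs.items()}
```

The per-sub-band descriptors across scales would then feed the ANOVA screening and random forest classification described in the study.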

  20. Digital auscultation analysis for heart murmur detection.

    PubMed

    Delgado-Trejos, Edilson; Quiceno-Manrique, A F; Godino-Llorente, J I; Blanco-Velasco, M; Castellanos-Dominguez, G

    2009-02-01

    This work presents a comparison of different approaches for the detection of murmurs from phonocardiographic signals. Taking into account the variability of phonocardiographic signals induced by valve disorders, three families of features were analyzed: (a) time-varying & time-frequency features; (b) perceptual features; and (c) fractal features. With the aim of improving the performance of the system, its accuracy was tested using several combinations of the aforementioned families of parameters. In the second stage, the main components extracted from each family were combined with the goal of improving the accuracy of the system. The contribution of each extracted family of features was evaluated by means of a simple k-nearest neighbors classifier, showing that fractal features provide the best accuracy (97.17%), followed by time-varying & time-frequency (95.28%) and perceptual features (88.7%). However, an accuracy of around 94% can be reached just by using the two main features of the fractal family; therefore, considering the difficulties related to the automatic intrabeat segmentation needed for spectral and perceptual features, this scheme becomes an interesting alternative. The conclusion is that fractal-type features were the most robust family of parameters (in the sense of accuracy vs. computational load) for the automatic detection of murmurs. This work was carried out using a database that contains 164 phonocardiographic recordings (81 normal and 83 records with murmurs). The database was segmented to extract 360 representative individual beats (180 per class).

  1. Auditing SNOMED CT hierarchical relations based on lexical features of concepts in non-lattice subgraphs.

    PubMed

    Cui, Licong; Bodenreider, Olivier; Shi, Jay; Zhang, Guo-Qiang

    2018-02-01

    We introduce a structural-lexical approach for auditing SNOMED CT using a combination of non-lattice subgraphs of the underlying hierarchical relations and enriched lexical attributes of fully specified concept names. Our goal is to develop a scalable and effective approach that automatically identifies missing hierarchical IS-A relations. Our approach involves 3 stages. In stage 1, all non-lattice subgraphs of SNOMED CT's IS-A hierarchical relations are extracted. In stage 2, lexical attributes of fully-specified concept names in such non-lattice subgraphs are extracted. For each concept in a non-lattice subgraph, we enrich its set of attributes with attributes from its ancestor concepts within the non-lattice subgraph. In stage 3, subset inclusion relations between the lexical attribute sets of each pair of concepts in each non-lattice subgraph are compared to existing IS-A relations in SNOMED CT. For concept pairs within each non-lattice subgraph, if a subset relation is identified but an IS-A relation is not present in SNOMED CT IS-A transitive closure, then a missing IS-A relation is reported. The September 2017 release of SNOMED CT (US edition) was used in this investigation. A total of 14,380 non-lattice subgraphs were extracted, from which we suggested a total of 41,357 missing IS-A relations. For evaluation purposes, 200 non-lattice subgraphs were randomly selected from 996 smaller subgraphs (of size 4, 5, or 6) within the "Clinical Finding" and "Procedure" sub-hierarchies. Two domain experts confirmed 185 (among 223) suggested missing IS-A relations, a precision of 82.96%. Our results demonstrate that analyzing the lexical features of concepts in non-lattice subgraphs is an effective approach for auditing SNOMED CT. Copyright © 2017 Elsevier Inc. All rights reserved.
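The stage-3 subset-inclusion check can be sketched as follows; the concepts, attribute sets, and IS-A edges below are made-up illustrations, not SNOMED CT data:

```python
# Each concept maps to its enriched lexical attribute set; is_a holds the
# existing hierarchy edges. All names here are invented for illustration.
concepts = {
    "A": {"finding", "lung"},
    "B": {"finding", "lung", "chronic"},
    "C": {"finding"},
}
is_a = {("B", "A")}  # existing relation: B IS-A A

# Flag (child, parent) pairs where the parent's attributes are a strict
# subset of the child's but no IS-A relation is recorded.
missing = [(child, parent)
           for child in concepts for parent in concepts
           if child != parent
           and concepts[parent] < concepts[child]
           and (child, parent) not in is_a]
print(sorted(missing))  # [('A', 'C'), ('B', 'C')]
```

In the full method this comparison runs only within each non-lattice subgraph, and the check is made against the transitive closure of IS-A rather than the raw edge set.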

  2. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    PubMed

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in medical health care applications that interpret neuroimaging scans with machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify human brain magnetic resonance images (MRI) as normal or abnormal, in order to reduce human error in identifying diseases in brain MRIs. In this study, the fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. First, the fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensionality of the features. These reduced feature vectors also shrink memory storage consumption by 99.5%. Finally, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. To improve efficiency, LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the RBF kernel's hyper-parameters and applies k-fold stratified cross-validation to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieved a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, improving efficiency by 71%, 3%, and 4% in the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities for individual subjects; therefore, it can be used as a significant tool in clinical practice.
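    The DWT → PCA → SVM pipeline this record describes can be sketched as follows. This is a minimal illustration on synthetic data: a two-level Haar approximation filter stands in for the fast DWT, and scikit-learn's standard RBF-kernel `SVC` stands in for LS-SVM (which scikit-learn does not ship); image size, blob position, and component counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def haar_dwt_features(img, levels=2):
    """Keep only the low-pass (approximation) subband of a 2-D Haar DWT
    at each level, then flatten it into a feature vector."""
    a = img.astype(float)
    for _ in range(levels):
        a = 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])
    return a.ravel()

def make_image(abnormal):
    """Synthetic stand-in for a brain MR slice: 'abnormal' gets a bright blob."""
    img = rng.normal(0.0, 1.0, (32, 32))
    if abnormal:
        img[8:16, 8:16] += 3.0
    return img

labels = [0] * 40 + [1] * 40
X = np.array([haar_dwt_features(make_image(lab)) for lab in labels])
y = np.array(labels)

# PCA shrinks the wavelet features; an RBF-kernel SVC stands in for LS-SVM.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", gamma="scale"))
clf.fit(X[::2], y[::2])            # train on even-indexed samples
acc = clf.score(X[1::2], y[1::2])  # evaluate on the held-out odd-indexed half
```

    On this toy task the blob dominates the low-pass subband, so the pipeline separates the two classes easily; the point is the stage ordering (feature extraction → dimensionality reduction → kernel classifier), not the numbers.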

  3. Prosodic Encoding in Silent Reading.

    ERIC Educational Resources Information Center

    Wilkenfeld, Deborah

    In silent reading, short-term memory tasks, such as semantic and syntactic processing, require a stage of phonetic encoding between visual representation and the actual extraction of meaning, and this encoding includes prosodic as well as segmental features. To test for this suprasegmental coding, an experiment was conducted in which subjects were…

  4. Feasibility of feature-based indexing, clustering, and search of clinical trials: A case study of breast cancer trials from ClinicalTrials.gov

    PubMed Central

    Boland, Mary Regina; Miotto, Riccardo; Gao, Junfeng; Weng, Chunhua

    2013-01-01

    Background When standard therapies fail, clinical trials provide experimental treatment opportunities for patients with drug-resistant illnesses or terminal diseases. Clinical trials can also provide free treatment and education for individuals who otherwise may not have access to such care. To find relevant clinical trials, patients often search online; however, they often encounter a significant barrier in the large number of trials and the ineffective indexing methods for reducing the trial search space. Objectives This study explores the feasibility of feature-based indexing, clustering, and search of clinical trials and informs designs to automate these processes. Methods We decomposed 80 randomly selected stage III breast cancer clinical trials into a vector of eligibility features, which were organized into a hierarchy. We clustered trials based on their eligibility feature similarities. In a simulated search process, manually selected features were used to generate specific eligibility questions to filter trials iteratively. Results We extracted 1,437 distinct eligibility features and achieved an inter-rater agreement of 0.73 for feature extraction for 37 frequent features occurring in more than 20 trials. Using all 1,437 features, we stratified the 80 trials into six clusters containing trials recruiting similar patients by patient-characteristic features, five clusters by disease-characteristic features, and two clusters by mixed features. Most of the features were mapped to one or more Unified Medical Language System (UMLS) concepts, demonstrating the utility of named entity recognition prior to mapping with the UMLS for automatic feature extraction. Conclusions It is feasible to develop feature-based indexing and clustering methods for clinical trials to identify trials with similar target populations and to improve trial search efficiency. PMID:23666475

  5. Feasibility of feature-based indexing, clustering, and search of clinical trials. A case study of breast cancer trials from ClinicalTrials.gov.

    PubMed

    Boland, M R; Miotto, R; Gao, J; Weng, C

    2013-01-01

    When standard therapies fail, clinical trials provide experimental treatment opportunities for patients with drug-resistant illnesses or terminal diseases. Clinical trials can also provide free treatment and education for individuals who otherwise may not have access to such care. To find relevant clinical trials, patients often search online; however, they often encounter a significant barrier in the large number of trials and the ineffective indexing methods for reducing the trial search space. This study explores the feasibility of feature-based indexing, clustering, and search of clinical trials and informs designs to automate these processes. We decomposed 80 randomly selected stage III breast cancer clinical trials into a vector of eligibility features, which were organized into a hierarchy. We clustered trials based on their eligibility feature similarities. In a simulated search process, manually selected features were used to generate specific eligibility questions to filter trials iteratively. We extracted 1,437 distinct eligibility features and achieved an inter-rater agreement of 0.73 for feature extraction for 37 frequent features occurring in more than 20 trials. Using all 1,437 features, we stratified the 80 trials into six clusters containing trials recruiting similar patients by patient-characteristic features, five clusters by disease-characteristic features, and two clusters by mixed features. Most of the features were mapped to one or more Unified Medical Language System (UMLS) concepts, demonstrating the utility of named entity recognition prior to mapping with the UMLS for automatic feature extraction. It is feasible to develop feature-based indexing and clustering methods for clinical trials to identify trials with similar target populations and to improve trial search efficiency.

  6. An image-based automatic recognition method for the flowering stage of maize

    NASA Astrophysics Data System (ADS)

    Yu, Zhenghong; Zhou, Huabing; Li, Cuina

    2018-03-01

    In this paper, we propose an image-based approach for automatically recognizing the flowering stage of maize. A modified HOG/SVM detection framework is first adopted to detect the ears of maize. Then, low-rank matrix recovery is used to precisely extract the ears at the pixel level. Finally, a new feature called the color gradient histogram is proposed as an indicator to determine the flowering stage. Comparative experiments have been carried out to verify the validity of our method, and the results indicate that it can meet the demands of practical observation.
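    The abstract names the "color gradient histogram" feature without defining it. A plausible minimal sketch, under the assumption that it pools per-channel gradient-magnitude histograms into one descriptor (the bin count and per-channel normalization here are my assumptions, not the paper's):

```python
import numpy as np

def color_gradient_histogram(img, bins=16):
    """Hypothetical 'color gradient histogram': for each color channel,
    histogram the gradient magnitudes and normalize, then concatenate."""
    feats = []
    for c in range(img.shape[2]):
        gy, gx = np.gradient(img[:, :, c].astype(float))
        mag = np.hypot(gx, gy)
        h, _ = np.histogram(mag, bins=bins, range=(0.0, mag.max() + 1e-9))
        feats.append(h / h.sum())  # each channel's histogram sums to 1
    return np.concatenate(feats)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64, 3)).astype(float)  # stand-in RGB patch
f = color_gradient_histogram(img)  # 3 channels x 16 bins = 48-D descriptor
```

    A descriptor like this shifts toward high-magnitude bins as anthers emerge and local contrast increases, which is the kind of signal an indicator of flowering stage would track.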

  7. Compression of deep convolutional neural network for computer-aided diagnosis of masses in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny

    2018-02-01

    Deep-learning models are highly parameterized, causing difficulty in inference and transfer learning. We propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in digital breast tomosynthesis (DBT) while maintaining the classification accuracy. Two-stage transfer learning was used to adapt the ImageNet-trained DCNN to mammography and then to DBT. In the first stage, transfer learning from the ImageNet-trained DCNN was performed using mammography data. In the second stage, the mammography-trained DCNN was trained on the DBT data using feature extraction from the fully connected layer, recursive feature elimination, and random forest classification. The layered pathway evolution encapsulates the feature extraction through classification stages to compress the DCNN. A genetic algorithm was used in an iterative approach with tournament selection driven by count-preserving crossover and mutation to identify the necessary nodes in each convolution layer while eliminating the redundant ones. The DCNN was reduced by 99% in the number of parameters and 95% in mathematical operations in the convolutional layers. The lesion-based area under the receiver operating characteristic curve on an independent DBT test set was 0.88 ± 0.05 for the original network and 0.90 ± 0.04 for the compressed network; the difference did not reach statistical significance. We demonstrated a DCNN compression approach without additional fine-tuning or loss of performance for classification of masses in DBT. The approach can be extended to other DCNNs and transfer learning tasks. An ensemble of these smaller and focused DCNNs has the potential to be used in multi-target transfer learning.

  8. Classification of SD-OCT volumes for DME detection: an anomaly detection approach

    NASA Astrophysics Data System (ADS)

    Sankar, S.; Sidibé, D.; Cheung, Y.; Wong, T. Y.; Lamoureux, E.; Milea, D.; Meriaudeau, F.

    2016-03-01

    Diabetic Macular Edema (DME) is the leading cause of blindness amongst diabetic patients worldwide. It is characterized by an accumulation of water molecules in the macula leading to swelling. Early detection of the disease helps prevent further loss of vision, so automated detection of DME from Optical Coherence Tomography (OCT) volumes plays a key role. To this end, a pipeline for detecting DME in OCT volumes is proposed in this paper. The method is based on anomaly detection using a Gaussian Mixture Model (GMM). It starts by pre-processing the B-scans by resizing, flattening, and filtering them, and then extracting features from them. Both intensity and Local Binary Pattern (LBP) features are considered. The dimensionality of the extracted features is reduced using PCA. In the last stage, a GMM is fitted with features from normal volumes. During testing, features extracted from the test volume are evaluated against the fitted model for anomaly, and classification is made based on the number of B-scans detected as outliers. The proposed method is tested on two OCT datasets, achieving a sensitivity and specificity of 80% and 93% on the first dataset, and 100% and 80% on the second. Moreover, experiments show that the proposed method achieves better classification performance than other recently published works.
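    The core anomaly-detection idea (fit a GMM on normal B-scan features, flag a volume by its count of low-likelihood B-scans) can be sketched with scikit-learn. The feature dimensionality, component count, percentile threshold, and outlier count here are illustrative assumptions; the paper's LBP/intensity features and PCA step are replaced by synthetic 2-D descriptors.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in per-B-scan descriptors (the paper uses PCA-reduced LBP/intensity
# features; dimensions and thresholds here are illustrative).
normal_train = rng.normal(0.0, 1.0, (500, 2))

gmm = GaussianMixture(n_components=3, random_state=0).fit(normal_train)
thresh = np.percentile(gmm.score_samples(normal_train), 5)  # 5th-percentile cutoff

def volume_is_dme(bscan_feats, min_outliers=15):
    """Flag a volume as DME when enough of its B-scans score below the
    likelihood threshold learned from normal volumes."""
    outliers = int(np.sum(gmm.score_samples(bscan_feats) < thresh))
    return outliers >= min_outliers

normal_vol = rng.normal(0.0, 1.0, (60, 2))  # looks like the training data
dme_vol = rng.normal(4.0, 1.0, (60, 2))     # shifted cluster => low likelihood
```

    `score_samples` returns per-sample log-likelihoods, so "anomaly" reduces to a simple threshold comparison; only the threshold and count need tuning per dataset.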

  9. Simultaneous Local Binary Feature Learning and Encoding for Homogeneous and Heterogeneous Face Recognition.

    PubMed

    Lu, Jiwen; Erin Liong, Venice; Zhou, Jie

    2017-08-09

    In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.

  10. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier

    NASA Astrophysics Data System (ADS)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research proposes a fully automatic algorithm for classifying three-dimensional (3-D) optical coherence tomography (OCT) scans, distinguishing patients suffering from abnormal macula from normal candidates. The proposed method does not require any denoising, segmentation, or retinal alignment processes to assess the intraretinal layers, abnormalities, or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, consisting of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets were used for evaluation of the algorithm based on an unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured on a Topcon device. The second, publicly available set consists of 45 subjects with a distribution of 15 patients each in the age-related macular degeneration, DME, and normal classes, from a Heidelberg device. With the application of the algorithm to the overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset 1 as a two-class classification problem and 98.67% on dataset 2 as a three-class classification task.

  11. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier.

    PubMed

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research proposes a fully automatic algorithm for classifying three-dimensional (3-D) optical coherence tomography (OCT) scans, distinguishing patients suffering from abnormal macula from normal candidates. The proposed method does not require any denoising, segmentation, or retinal alignment processes to assess the intraretinal layers, abnormalities, or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, consisting of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets were used for evaluation of the algorithm based on an unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured on a Topcon device. The second, publicly available set consists of 45 subjects with a distribution of 15 patients each in the age-related macular degeneration, DME, and normal classes, from a Heidelberg device. With the application of the algorithm to the overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset 1 as a two-class classification problem and 98.67% on dataset 2 as a three-class classification task. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  12. Neuroanatomic organization of sound memory in humans.

    PubMed

    Kraut, Michael A; Pitcock, Jeffery A; Calhoun, Vince; Li, Juan; Freeman, Thomas; Hart, John

    2006-11-01

    The neural interface between sensory perception and memory is a central issue in neuroscience, particularly initial memory organization following perceptual analyses. We used functional magnetic resonance imaging to identify anatomic regions extracting initial auditory semantic memory information related to environmental sounds. Two distinct anatomic foci were detected in the right superior temporal gyrus when subjects identified sounds representing either animals or threatening items. Threatening animal stimuli elicited signal changes in both foci, suggesting a distributed neural representation. Our results demonstrate both category- and feature-specific responses to nonverbal sounds in early stages of extracting semantic memory information from these sounds. This organization allows these category-feature detection nodes to extract early semantic memory information for efficient processing of transient sound stimuli. The neural regions selective for threatening sounds are similar to those of nonhuman primates, demonstrating that semantic memory organization for basic biological/survival primitives is present across species.

  13. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    PubMed

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code, posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to Docker Hub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using 1 core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
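    The object-level parallelization that roughly halves QIFE's runtime on four cores can be sketched in a few lines: each segmented object is an independent work item mapped across a worker pool. QIFE itself is MATLAB; this Python sketch with toy per-object "features" is only an illustration of the parallelization pattern.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def compute_features(volume):
    """Toy 3-D features for one segmented object: voxel count plus mean
    and max intensity (real engines compute full radiomics panels)."""
    mask = volume > 0
    return {
        "voxels": int(mask.sum()),
        "mean": float(volume[mask].mean()),
        "max": float(volume[mask].max()),
    }

rng = np.random.default_rng(0)
tumors = [rng.random((8, 8, 8)) for _ in range(12)]  # stand-in segmented objects

# Object-level parallelization: objects are independent, so they map
# cleanly onto a worker pool with no shared state.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(compute_features, tumors))
```

    Because objects share nothing, speedup is limited mainly by per-object load imbalance and memory, matching the trade-off the abstract notes between parallelism and memory usage.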

  14. Textural features of pretreatment 18F-FDG PET/CT images: prognostic significance in patients with advanced T-stage oropharyngeal squamous cell carcinoma.

    PubMed

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Chang, Joseph Tung-Chieh; Huang, Chung-Guei; Tsan, Din-Li; Ng, Shu-Hang; Wang, Hung-Ming; Lin, Chien-Yu; Liao, Chun-Ta; Yen, Tzu-Chen

    2013-10-01

    Previous studies have shown that total lesion glycolysis (TLG) may serve as a prognostic indicator in oropharyngeal squamous cell carcinoma (OPSCC). We sought to investigate whether the textural features of pretreatment (18)F-FDG PET/CT images can provide any additional prognostic information over TLG and clinical staging in patients with advanced T-stage OPSCC. We retrospectively analyzed the pretreatment (18)F-FDG PET/CT images of 70 patients with advanced T-stage OPSCC who had completed concurrent chemoradiotherapy, bioradiotherapy, or radiotherapy with curative intent. All of the patients had data on human papillomavirus (HPV) infection and were followed up for at least 24 mo or until death. A standardized uptake value (SUV) of 2.5 was taken as a cutoff for tumor boundary. The textural features of pretreatment (18)F-FDG PET/CT images were extracted from histogram analysis (SUV variance and SUV entropy), normalized gray-level cooccurrence matrix (uniformity, entropy, dissimilarity, contrast, homogeneity, inverse different moment, and correlation), and neighborhood gray-tone difference matrix (coarseness, contrast, busyness, complexity, and strength). Receiver-operating-characteristic curves were used to identify the optimal cutoff values for the textural features and TLG. Thirteen patients were HPV-positive. Multivariate Cox regression analysis showed that age, tumor TLG, and uniformity were independently associated with progression-free survival (PFS) and disease-specific survival (DSS). TLG, uniformity, and HPV positivity were significantly associated with overall survival (OS). A prognostic scoring system based on TLG and uniformity was derived. Patients who presented with TLG > 121.9 g and uniformity ≤ 0.138 experienced significantly worse PFS, DSS, and OS rates than those without (P < 0.001, < 0.001, and 0.002, respectively). Patients with TLG > 121.9 g or uniformity ≤ 0.138 were further divided according to age, and different PFS and DSS were observed. 
Uniformity extracted from the normalized gray-level cooccurrence matrix represents an independent prognostic predictor in patients with advanced T-stage OPSCC. A scoring system was developed and may serve as a risk-stratification strategy for guiding therapy.
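    The prognostic "uniformity" feature above is the angular second moment of a normalized gray-level co-occurrence matrix (GLCM): the sum of squared co-occurrence probabilities, which is high for homogeneous uptake and low for heterogeneous uptake. A minimal sketch, assuming one pixel offset and a small quantization level count (the paper's SUV binning and offsets are not reproduced here):

```python
import numpy as np

def glcm_uniformity(img, levels=8, dx=1, dy=0):
    """Uniformity (angular second moment) of a normalized GLCM for a
    single (dx, dy) offset; `img` is assumed to be scaled to [0, 1)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantize
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1  # count pixel pairs
    p = glcm / glcm.sum()                          # normalize to probabilities
    return float(np.sum(p ** 2))                   # sum of squared entries

flat = np.full((16, 16), 0.5)                      # homogeneous "tumor"
noisy = np.random.default_rng(0).random((16, 16))  # heterogeneous "tumor"
```

    A perfectly homogeneous region concentrates all co-occurrence mass in one cell (uniformity 1.0), while heterogeneous uptake spreads it thin, which is why low uniformity flags the higher-risk tumors in this study.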

  15. Automatic Detection of Galaxy Type From Datasets of Galaxies Image Based on Image Retrieval Approach.

    PubMed

    Abd El Aziz, Mohamed; Selim, I M; Xiong, Shengwu

    2017-06-30

    This paper presents a new approach for the automatic detection of galaxy morphology from datasets based on an image-retrieval approach. Several classification methods have been proposed to detect galaxy types within an image. However, in some situations the aim is not only to determine the type of galaxy within the queried image, but also to find the images most similar to the query image. Therefore, this paper proposes an image-retrieval method to detect the type of galaxy within an image and return the most similar images. The proposed method consists of two stages: in the first stage, a set of features is extracted based on shape, color, and texture descriptors, and a binary sine cosine algorithm then selects the most relevant features. In the second stage, the similarity between the features of the queried galaxy image and the features of the other galaxy images is computed. Our experiments were performed using the EFIGI catalogue, which contains about 5,000 galaxy images of different types (edge-on spiral, spiral, elliptical, and irregular). We demonstrate that our proposed approach performs better than the particle swarm optimization (PSO) and genetic algorithm (GA) methods.

  16. Foundation and methodologies in computer-aided diagnosis systems for breast cancer detection.

    PubMed

    Jalalian, Afsaneh; Mashohor, Syamsiah; Mahmud, Rozi; Karasfi, Babak; Saripan, M Iqbal B; Ramli, Abdul Rahman B

    2017-01-01

    Breast cancer is the most prevalent cancer affecting women all over the world. Early detection and treatment of breast cancer could reduce the mortality rate. Some issues, such as technical factors related to imaging quality and human error, increase the misdiagnosis of breast cancer by radiologists. Computer-aided detection (CAD) systems have been developed to overcome these restrictions and have been studied for breast cancer detection in many imaging modalities in recent years. CAD systems improve radiologists' performance in finding and discriminating between normal and abnormal tissues. These procedures act only as a second reader, and the final decisions are still made by the radiologist. In this study, recent CAD systems for breast cancer detection on different modalities such as mammography, ultrasound, MRI, and biopsy histopathological images are introduced. The foundation of CAD systems generally consists of four stages: pre-processing, segmentation, feature extraction, and classification. The approaches applied to design the different stages of a CAD system are summarised, and the advantages and disadvantages of different segmentation, feature extraction, and classification techniques are listed. In addition, the impact of imbalanced datasets on classification outcomes and appropriate methods to address this issue are discussed, and performance evaluation metrics for the various stages of breast cancer detection CAD systems are reviewed.

  17. Lymphoma diagnosis in histopathology using a multi-stage visual learning approach

    NASA Astrophysics Data System (ADS)

    Codella, Noel; Moradi, Mehdi; Matasar, Matt; Syeda-Mahmood, Tanveer; Smith, John R.

    2016-03-01

    This work evaluates the performance of a multi-stage image enhancement, segmentation, and classification approach for lymphoma recognition in hematoxylin and eosin (H&E) stained histopathology slides of excised human lymph node tissue. In the first stage, the original histology slide undergoes various image enhancement and segmentation operations, creating an additional 5 images for every slide. These new images emphasize unique aspects of the original slide, including dominant staining, staining segmentations, non-cellular groupings, and cellular groupings. For the resulting 6 total images, a collection of visual features is extracted from 3 different spatial configurations. Visual features include the first fully connected layer (4096 dimensions) of the Caffe convolutional neural network trained on ImageNet data. In total, over 200 visual descriptors are extracted for each slide. Non-linear SVMs are trained over each of these descriptors, and their outputs are input to a forward stepwise ensemble selection that optimizes a late-fusion sum of logistically normalized model outputs using local hill climbing. The approach is evaluated on a public NIH dataset containing 374 images representing 3 lymphoma conditions: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). Results demonstrate a 38.4% reduction in residual error over the current state of the art on this dataset.

  18. Foundation and methodologies in computer-aided diagnosis systems for breast cancer detection

    PubMed Central

    Jalalian, Afsaneh; Mashohor, Syamsiah; Mahmud, Rozi; Karasfi, Babak; Saripan, M. Iqbal B.; Ramli, Abdul Rahman B.

    2017-01-01

    Breast cancer is the most prevalent cancer affecting women all over the world. Early detection and treatment of breast cancer could reduce the mortality rate. Some issues, such as technical factors related to imaging quality and human error, increase the misdiagnosis of breast cancer by radiologists. Computer-aided detection (CAD) systems have been developed to overcome these restrictions and have been studied for breast cancer detection in many imaging modalities in recent years. CAD systems improve radiologists' performance in finding and discriminating between normal and abnormal tissues. These procedures act only as a second reader, and the final decisions are still made by the radiologist. In this study, recent CAD systems for breast cancer detection on different modalities such as mammography, ultrasound, MRI, and biopsy histopathological images are introduced. The foundation of CAD systems generally consists of four stages: pre-processing, segmentation, feature extraction, and classification. The approaches applied to design the different stages of a CAD system are summarised, and the advantages and disadvantages of different segmentation, feature extraction, and classification techniques are listed. In addition, the impact of imbalanced datasets on classification outcomes and appropriate methods to address this issue are discussed, and performance evaluation metrics for the various stages of breast cancer detection CAD systems are reviewed. PMID:28435432

  19. Time frequency analysis for automated sleep stage identification in fullterm and preterm neonates.

    PubMed

    Fraiwan, Luay; Lweesy, Khaldon; Khasawneh, Natheer; Fraiwan, Mohammad; Wenz, Heinrich; Dickhaus, Hartmut

    2011-08-01

    This work presents a new methodology for automated sleep stage identification in neonates based on the time-frequency distribution of a single electroencephalogram (EEG) recording and artificial neural networks (ANN). The Wigner-Ville distribution (WVD), Hilbert-Huang spectrum (HHS), and continuous wavelet transform (CWT) time-frequency distributions were used to represent the EEG signal, from which features were extracted using time-frequency entropy. The classification of features was done using a feed-forward back-propagation ANN. The system was trained and tested using data taken from neonates at a post-conceptional age of 40 weeks, both preterm (14 recordings) and fullterm (15 recordings). The identification of sleep stages was successfully implemented, and the classification based on the WVD outperformed the approaches based on CWT and HHS. The accuracy and kappa coefficient were found to be 0.84 and 0.65, respectively, for the fullterm neonates' recordings and 0.74 and 0.50, respectively, for the preterm neonates' recordings.
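    The time-frequency entropy feature used in this record measures how spread out a signal's energy is over a time-frequency plane. A minimal sketch, with a plain STFT spectrogram standing in for the WVD/HHS/CWT distributions the paper compares (sampling rate, window length, and test signals are illustrative assumptions):

```python
import numpy as np
from scipy.signal import spectrogram

def tf_entropy(sig, fs=256.0):
    """Shannon entropy (bits) of the normalized spectrogram power
    distribution; low for concentrated energy, high for spread energy."""
    _, _, Sxx = spectrogram(sig, fs=fs, nperseg=128)
    p = Sxx / Sxx.sum()         # normalize the time-frequency plane to sum to 1
    p = p[p > 0]                # drop empty cells before taking the log
    return float(-np.sum(p * np.log2(p)))

fs = 256.0
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 10.0 * t)   # energy concentrated at 10 Hz -> low entropy
noise = rng.normal(0.0, 1.0, t.size)  # energy spread everywhere -> high entropy
```

    Rhythmic EEG epochs (concentrated energy) thus yield lower entropy than irregular ones, giving the ANN a compact stage-discriminating feature.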

  20. Atmosphere-based image classification through luminance and hue

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Zhang, Yujin

    2005-07-01

    In this paper a novel image classification system is proposed. Atmosphere serves an important role in generating the scene's topic or in conveying the message behind the scene's story, which belongs to the abstract attribute level among semantic levels. First, five atmosphere semantic categories are defined according to the rules of photo and film grammar, along with global luminance and hue features. Then hierarchical SVM classifiers are applied: in each classification stage, the corresponding features are extracted and a trained linear SVM is applied, yielding two classes. After three stages of classification, five atmosphere categories are obtained. Finally, text annotation of the atmosphere semantics and the corresponding features is defined in Extensible Markup Language (XML) under MPEG-7, which can be integrated into further multimedia applications (such as searching, indexing, and accessing multimedia content). Experiments were performed on Corel images and film frames. The classification results prove the effectiveness of the defined atmosphere semantic classes and the corresponding features.

  1. An alternative respiratory sounds classification system utilizing artificial neural networks.

    PubMed

    Oweis, Rami J; Abdulhay, Enas W; Khayal, Amer; Awad, Areen

    2015-01-01

    Computerized lung sound analysis involves recording lung sounds via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as the non-linearity and non-stationarity caused by air turbulence. Automatic analysis is necessary to avoid dependence on expert skills. This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. The ANN was superior to the ANFIS system and returned superior performance parameters: its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters were superior to those of many recent approaches. The proposed method is an efficient, fast tool for the intended purpose, as manifested in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function for feature extraction in such applications results in enhanced performance and avoids undesired computational complexity compared with other techniques.
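    Autocorrelation-based feature extraction, the core of this record, can be sketched as follows: the normalized autocorrelation at the first few lags is a compact descriptor whose recurring peaks expose the quasi-periodic structure of sounds such as wheezes. The lag count, sampling rate, and test tone are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def autocorr_features(sig, n_lags=20):
    """Normalized autocorrelation at the first `n_lags` non-negative lags,
    usable as a feature vector for a downstream classifier."""
    x = sig - sig.mean()
    full = np.correlate(x, x, mode="full")
    ac = full[full.size // 2:]   # keep the non-negative lags
    ac = ac / ac[0]              # normalize so lag 0 equals 1
    return ac[:n_lags]

fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
wheeze = np.sin(2 * np.pi * 50 * t)       # 50 Hz tone: period of 20 samples
f = autocorr_features(wheeze, n_lags=100)  # strong peak expected at lag 20
```

    The autocorrelation is cheap to compute and insensitive to phase, which is consistent with the abstract's claim that it avoids the computational complexity of heavier time-frequency techniques.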

  2. Thermal-to-visible face recognition using partial least squares.

    PubMed

    Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson

    2015-03-01

    Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.

  3. Machine learning approach for automated screening of malaria parasite using light microscopic images.

    PubMed

    Das, Dev Kumar; Ghosh, Madhumala; Pal, Mallika; Maiti, Asok K; Chakraborty, Chandan

    2013-02-01

    The aim of this paper is to address the development of computer-assisted malaria parasite characterization and classification using a machine learning approach based on light microscopic images of peripheral blood smears. In doing this, microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection and finally classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using marker-controlled watershed transformation, and subsequently a total of ninety-six features describing the shape, size and texture of erythrocytes are extracted with respect to parasitemia-infected versus non-infected cells. Ninety-four features are found to be statistically significant in discriminating six classes. Here a feature selection-cum-classification scheme has been devised by combining the F-statistic with statistical learning techniques, i.e., Bayesian learning and the support vector machine (SVM), in order to provide higher classification accuracy using the best set of discriminating features. Results show that the Bayesian approach provides the highest accuracy, i.e., 84%, for malaria classification by selecting the 19 most significant features, while the SVM provides its highest accuracy, i.e., 83.5%, with the 9 most significant features. Finally, the performance of these two classifiers under the feature selection framework has been compared toward malaria parasite classification. Copyright © 2012 Elsevier Ltd. All rights reserved.
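    The F-statistic-based feature ranking that anchors the selection-cum-classification scheme can be sketched as follows (synthetic data with three classes stands in for the six malaria classes; the feature count and mean shift are assumptions):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Synthetic erythrocyte features: 3 classes, 30 cells each, 8 features.
# Only feature 0 carries class information (class-dependent mean shift).
n_per, n_feat = 30, 8
X, y = [], []
for c in range(3):
    block = rng.normal(0, 1, size=(n_per, n_feat))
    block[:, 0] += 5.0 * c            # informative feature
    X.append(block)
    y += [c] * n_per
X = np.vstack(X)
y = np.array(y)

# Per-feature F-statistic across the classes, then rank descending.
f_scores = np.array([f_oneway(*(X[y == c, j] for c in range(3))).statistic
                     for j in range(n_feat)])
ranking = np.argsort(f_scores)[::-1]
```

The top-ranked features would then be handed to the Bayesian or SVM classifier.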

  4. Real-time traffic sign detection and recognition

    NASA Astrophysics Data System (ADS)

    Herbschleb, Ernst; de With, Peter H. N.

    2009-01-01

    The continuous growth of imaging databases increasingly requires analysis tools for extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800×2,400 pixels. Because of the size of the database, both high reliability and high throughput are required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is critical to both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput is 35 Hz for line-of-sight images of 800×600 pixels and 4 Hz for panorama images. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.

  5. Multi range spectral feature fitting for hyperspectral imagery in extracting oilseed rape planting area

    NASA Astrophysics Data System (ADS)

    Pan, Zhuokun; Huang, Jingfeng; Wang, Fumin

    2013-12-01

    Spectral feature fitting (SFF) is a commonly used strategy in hyperspectral imagery analysis for discriminating ground targets. Compared to other image analysis techniques, however, SFF does not secure higher accuracy in extracting image information in all circumstances. Multi-range spectral feature fitting (MRSFF), available in the ENVI software, allows users to focus on the spectral features of interest to yield better performance; thus the spectral wavelength ranges and their corresponding weights must be determined. The purpose of this article is to demonstrate the performance of MRSFF in extracting oilseed rape planting area. A practical method for defining the weights, the variance coefficient weight method, was proposed as the criterion. Oilseed rape field canopy spectra from the whole growth stage were collected prior to investigating their phenological variation; oilseed rape endmember spectra were extracted from the Hyperion image as identifying samples for analyzing the oilseed rape fields. Wavelength range divisions were determined by the difference between field-measured spectra and image spectra, and image spectral variance coefficient weights for each wavelength range were calculated with respect to field-measured spectra from the closest date. By using MRSFF, wavelength ranges were classified to characterize the target's spectral features without compromising the entirety of the spectral profile. The analysis was substantially successful in extracting oilseed rape planting areas (RMSE ≤ 0.06), and the RMSE histogram indicated a superior result compared to conventional SFF. Accuracy assessment was based on the mapping result compared with spectral angle mapping (SAM) and the normalized difference vegetation index (NDVI). MRSFF yielded a robust, convincing result and, therefore, may further the use of hyperspectral imagery in precision agriculture.

  6. Wind turbine extraction from high spatial resolution remote sensing images based on saliency detection

    NASA Astrophysics Data System (ADS)

    Chen, Jingbo; Yue, Anzhi; Wang, Chengyi; Huang, Qingqing; Chen, Jiansheng; Meng, Yu; He, Dongxu

    2018-01-01

    The wind turbine is a device that converts the wind's kinetic energy into electrical power. Accurate and automatic extraction of wind turbines is instructive for government departments planning wind power plant projects. A hybrid and practical framework based on saliency detection for wind turbine extraction, using Google Earth imagery at a spatial resolution of 1 m, is proposed. It can be viewed as a two-phase procedure: coarse detection and fine extraction. In the first stage, we introduced a frequency-tuned saliency detection approach for initially detecting the area of interest of the wind turbines. This method exploits color and luminance features, is simple to implement, and is computationally efficient. Taking into account the complexity of remote sensing images, in the second stage we proposed a fast method for fine-tuning results in the frequency domain and then extracted wind turbines from these salient objects by removing the irrelevant salient areas according to the special properties of wind turbines. Experiments demonstrated that our approach consistently obtains higher precision and better recall rates. Our method was also compared with other techniques from the literature, which proved that it is more applicable and robust.
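    The frequency-tuned saliency step of the first stage can be sketched on a single-channel image (the published method operates on color and luminance channels; the grayscale simplification, Gaussian width, and adaptive threshold below are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ft_saliency(img, sigma=3.0):
    """Frequency-tuned saliency: distance of each (blurred) pixel
    from the global mean intensity."""
    blurred = gaussian_filter(img.astype(float), sigma=sigma)
    return np.abs(blurred - blurred.mean())

# Toy scene: one bright "turbine" blob on a dark background.
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0

sal = ft_saliency(img)
mask = sal > 2.0 * sal.mean()   # simple adaptive threshold on the saliency map
```

The binary mask marks the candidate regions that the second, fine-extraction stage would then filter by turbine-specific properties.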

  7. Classification of Alzheimer's disease and prediction of mild cognitive impairment-to-Alzheimer's conversion from structural magnetic resonance imaging using feature ranking and a genetic algorithm.

    PubMed

    Beheshti, Iman; Demirel, Hasan; Matsuda, Hiroshi

    2017-04-01

    We developed a novel computer-aided diagnosis (CAD) system that uses feature ranking and a genetic algorithm to analyze structural magnetic resonance imaging data; using this system, we can predict conversion of mild cognitive impairment (MCI) to Alzheimer's disease (AD) between one and three years before clinical diagnosis. The CAD system was developed in four stages. First, we used a voxel-based morphometry technique to investigate global and local gray matter (GM) atrophy in an AD group compared with healthy controls (HCs). Regions with significant GM volume reduction were segmented as volumes of interest (VOIs). Second, these VOIs were used to extract voxel values from the respective atrophy regions in AD, HC, stable MCI (sMCI) and progressive MCI (pMCI) patient groups. The voxel values were then assembled into a feature vector. Third, at the feature-selection stage, all features were ranked according to their respective t-test scores, and a genetic algorithm was designed to find the optimal feature subset. The Fisher criterion was used as part of the objective function in the genetic algorithm. Finally, the classification was carried out using a support vector machine (SVM) with 10-fold cross validation. We evaluated the proposed automatic CAD system by applying it to baseline values from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset (160 AD, 162 HC, 65 sMCI and 71 pMCI subjects). The experimental results indicated that the proposed system is capable of distinguishing between sMCI and pMCI patients, and would be appropriate for practical use in a clinical setting. Copyright © 2017 Elsevier Ltd. All rights reserved.
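    A compact two-class sketch of the feature-selection stage: features are scored by a Fisher-style criterion, and a small genetic algorithm searches binary feature masks (the population size, size penalty, and two-class simplification are assumptions; the paper ranks by t-test scores over multiple groups and validates with an SVM):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic voxel features: two groups (e.g. sMCI vs pMCI); features 0 and 1 informative.
n, d = 60, 10
X0 = rng.normal(0, 1, (n, d))
X1 = rng.normal(0, 1, (n, d))
X1[:, :2] += 2.5

def fisher_scores(A, B):
    """Per-feature Fisher criterion between two groups."""
    return (A.mean(0) - B.mean(0)) ** 2 / (A.var(0) + B.var(0) + 1e-12)

scores = fisher_scores(X0, X1)

def fitness(mask, penalty=1.0):
    # Fisher criterion of the selected subset, minus a subset-size penalty.
    if not mask.any():
        return -np.inf
    return scores[mask].sum() - penalty * mask.sum()

# Tiny elitist GA over binary feature masks: crossover plus bit-flip mutation.
pop = rng.random((20, d)) < 0.5
for _ in range(40):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[::-1][:10]]       # keep the 10 fittest masks
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, d)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        children.append(child ^ (rng.random(d) < 0.05))  # mutation
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
```

Elitism guarantees the returned mask is at least as fit as the naive choice of keeping every feature.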

  8. TU-CD-BRB-10: 18F-FDG PET Image-Derived Tumor Features Highlight Altered Pathways Identified by Transcriptomic Analysis in Head and Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tixier, F; INSERM UMR1101 LaTIM, Brest; Cheze-Le-Rest, C

    2015-06-15

    Purpose: Several quantitative features can be extracted from 18F-FDG PET images, such as standardized uptake values (SUVs), metabolic tumor volume (MTV), shape characterization (SC) or intra-tumor radiotracer heterogeneity quantification (HQ). Some of these features calculated from baseline 18F-FDG PET images have shown a prognostic and predictive clinical value. It has been hypothesized that these features highlight underlying tumor patho-physiological processes at smaller scales. The objective of this study was to investigate the ability of recovering alterations of signaling pathways from FDG PET image-derived features. Methods: 52 patients were prospectively recruited from two medical centers (Brest and Poitiers). All patients underwent an FDG PET scan for staging and biopsies of both healthy and primary tumor tissues. Biopsies went through a transcriptomic analysis performed in four spates on 4×44k chips (Agilent™). Primary tumors were delineated in the PET images using the Fuzzy Locally Adaptive Bayesian algorithm and characterized using 10 features including SUVs, SC and HQ. A module network algorithm followed by functional annotation was exploited in order to link PET features with signaling pathway alterations. Results: Several PET-derived features were found to discriminate differentially expressed genes between tumor and healthy tissue (fold-change >2, p<0.01) into 30 co-regulated groups (p<0.05). Functional annotations applied to these groups of genes highlighted associations with well-known pathways involved in cancer processes, such as cell proliferation and apoptosis, as well as with more specific ones such as unsaturated fatty acids. Conclusion: Quantitative features extracted from baseline 18F-FDG PET images, usually exploited only for diagnosis and staging, were identified in this work as being related to specific altered pathways and may show promise as tools for personalizing treatment decisions.

  9. SU-F-R-24: Identifying Prognostic Imaging Biomarkers in Early Stage Lung Cancer Using Radiomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, X; Wu, J; Cui, Y

    2016-06-15

    Purpose: Patients diagnosed with early stage lung cancer have favorable outcomes when treated with surgery or stereotactic radiotherapy. However, a significant proportion (∼20%) of patients will develop metastatic disease and eventually die of the disease. The purpose of this work is to identify quantitative imaging biomarkers from CT for predicting overall survival in early stage lung cancer. Methods: In this institutional review board-approved, HIPAA-compliant retrospective study, we analyzed the diagnostic CT scans of 110 patients with early stage lung cancer. Data from 70 patients were used for training/discovery purposes, while those of the remaining 40 patients were used for independent validation. We extracted 191 radiomic features, including statistical, histogram, morphological, and texture features. A Cox proportional hazards regression model, coupled with the least absolute shrinkage and selection operator (LASSO), was used to predict overall survival based on the radiomic features. Results: The optimal prognostic model included three image features from the Laws' and wavelet texture families. In the discovery cohort, this model achieved a concordance index of CI=0.67, and it separated the low-risk from high-risk groups in predicting overall survival (hazard ratio=2.72, log-rank p=0.007). In the independent validation cohort, this radiomic signature achieved a CI=0.62, and significantly stratified the low-risk and high-risk groups in terms of overall survival (hazard ratio=2.20, log-rank p=0.042). Conclusion: We identified CT imaging characteristics associated with overall survival in early stage lung cancer. If prospectively validated, this could potentially help identify high-risk patients who might benefit from adjuvant systemic therapy.
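    The concordance index used to evaluate the prognostic model can be computed directly; the sketch below implements Harrell's C-index on a toy cohort (the data are invented for illustration):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C-index: fraction of comparable patient pairs in which
    the higher predicted risk corresponds to the earlier event."""
    time, event, risk = map(np.asarray, (time, event, risk))
    comparable = concordant = 0.0
    n = len(time)
    for i in range(n):
        if not event[i]:                 # censored cases cannot anchor a pair
            continue
        for j in range(n):
            if time[i] < time[j]:        # i failed before j was last observed
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: survival times, event indicators (1 = death), model risk scores.
t = [2.0, 4.0, 6.0, 8.0, 10.0]
e = [1, 1, 0, 1, 1]
risk_good = [5.0, 4.0, 3.0, 2.0, 1.0]   # perfectly anti-ordered with time
ci = concordance_index(t, e, risk_good)
```

A CI of 1.0 means perfect ranking, 0.5 is chance level; the 0.67 and 0.62 reported above sit between the two.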

  10. Radiomics analysis of DWI data to identify the rectal cancer patients qualified for local excision after neoadjuvant chemoradiotherapy

    NASA Astrophysics Data System (ADS)

    Tang, Zhenchao; Liu, Zhenyu; Zhang, Xiaoyan; Shi, Yanjie; Wang, Shou; Fang, Mengjie; Sun, Yingshi; Dong, Enqing; Tian, Jie

    2018-02-01

    Locally advanced rectal cancer (LARC) patients are routinely treated first with neoadjuvant chemoradiotherapy (CRT) and receive total excision afterwards. However, LARC patients may regress to the T1N0M0/T0N0M0 stage after CRT, which would qualify them for local excision instead. Unfortunately, an accurate pathological TNM stage can only be obtained by pathological examination after surgery. We aimed to conduct a radiomics analysis of diffusion weighted imaging (DWI) data to identify the patients in the T1N0M0/T0N0M0 stages before surgery, in the hope of providing clinical surgery decision support. 223 routinely treated LARC patients in Beijing Cancer Hospital were enrolled in the current study. DWI data and clinical characteristics were collected after CRT. According to the pathological TNM stage, the patients of T1N0M0 and T0N0M0 stages were labelled as 1 and the other patients were labelled as 0. The first 123 patients in chronological order were used as the training set, and the remaining patients as the validation set. 563 image features extracted from the DWI data, together with the clinical characteristics, were used as features. A two-sample t-test was conducted to pre-select the top 50% discriminating features. A least absolute shrinkage and selection operator (Lasso)-logistic regression model was used to further select features and construct the classification model. Based on the 14 selected image features, an area under the receiver operating characteristic (ROC) curve (AUC) of 0.8781 and a classification accuracy (ACC) of 0.8432 were achieved in the training set. In the validation set, an AUC of 0.8707 and an ACC of 0.84 were observed.
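    The two-stage feature pipeline (t-test pre-selection of the top 50%, then Lasso-style logistic regression) can be sketched as follows on synthetic data (the feature counts, effect sizes, and regularization strength are assumptions):

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic DWI-derived features: 120 patients, 20 features, label 1 = T0/T1N0M0.
n, d = 120, 20
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, d))
X[y == 1, :3] += 2.0                      # three informative features

# Stage 1: two-sample t-test, keep the top 50% most discriminating features.
t_abs = np.abs(ttest_ind(X[y == 1], X[y == 0], axis=0).statistic)
keep = np.argsort(t_abs)[::-1][: d // 2]

# Stage 2: L1-penalized ("Lasso") logistic regression on the retained features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X[:, keep], y)
acc = clf.score(X[:, keep], y)
```

The L1 penalty drives most remaining coefficients to zero, which is how a 563-feature pool can collapse to a 14-feature signature.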

  11. Distorted Character Recognition Via An Associative Neural Network

    NASA Astrophysics Data System (ADS)

    Messner, Richard A.; Szu, Harold H.

    1987-03-01

    The purpose of this paper is two-fold. First, it is intended to provide some preliminary results of a character recognition scheme which has foundations in ongoing neural network architecture modeling, and secondly, to apply some of the neural network results in a real application area where thirty years of effort has had little effect on providing the machine an ability to recognize distorted objects within the same object class. It is the author's belief that the time is ripe to start applying in earnest the results of over twenty years of effort in neural modeling to some of the more difficult problems which seem so hard to solve by conventional means. The character recognition scheme proposed utilizes a preprocessing stage which performs a 2-dimensional Walsh transform of an input Cartesian image field, then sequency-filters this spectrum into three feature bands. Various features are then extracted and organized into three sets of feature vectors. These vector patterns are then stored and recalled associatively. Two possible associative neural memory models are proposed for further investigation. The first is an outer-product linear matrix associative memory with a threshold function controlling the strength of the output pattern (similar to Kohonen's cross-correlation approach [1]). The second approach is based upon a modified version of Grossberg's neural architecture [2], which provides better self-organizing properties due to its adaptive nature. Preliminary results of the sequency filtering and feature extraction preprocessing stage and discussion about the use of the proposed neural architectures are included.
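    The first candidate model, an outer-product associative memory, can be sketched directly: patterns are stored as a sum of outer products and recalled through a sign threshold (the bipolar encoding, the Hadamard-derived orthogonal patterns, and single-step recall are illustrative assumptions):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n ±1 Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Three mutually orthogonal bipolar "feature vector" patterns of length 32.
patterns = hadamard(32)[1:4]

# Outer-product (Hebbian) storage with zeroed self-connections.
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

def recall(cue):
    """One-step associative recall: threshold the weighted sum of inputs."""
    return np.sign(W @ cue)

# Corrupt a stored pattern in 2 of 32 positions, then recall it.
noisy = patterns[0].copy()
noisy[[3, 17]] *= -1
recovered = recall(noisy)
```

Because the stored patterns are orthogonal, the crosstalk terms stay small and the corrupted cue snaps back to the stored vector in a single update.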

  12. Distinguishing prostate cancer from benign confounders via a cascaded classifier on multi-parametric MRI

    NASA Astrophysics Data System (ADS)

    Litjens, G. J. S.; Elliott, R.; Shih, N.; Feldman, M.; Barentsz, J. O.; Hulsbergen-van de Kaa, C. A.; Kovacs, I.; Huisman, H. J.; Madabhushi, A.

    2014-03-01

    Learning how to separate benign confounders from prostate cancer is important because the imaging characteristics of these confounders are poorly understood. Furthermore, the typical representations of the MRI parameters might not be enough to allow discrimination. The diagnostic uncertainty this causes leads to a lower diagnostic accuracy. In this paper a new cascaded classifier is introduced to separate prostate cancer and benign confounders on MRI, in conjunction with specific computer-extracted features to distinguish each of the benign classes (benign prostatic hyperplasia (BPH), inflammation, atrophy or prostatic intra-epithelial neoplasia (PIN)). In this study we tried to (1) calculate different mathematical representations of the MRI parameters which more clearly express subtle differences between different classes, (2) learn which of the MRI image features will allow to distinguish specific benign confounders from prostate cancer, and (3) find the combination of computer-extracted MRI features that best discriminates cancer from the confounding classes using a cascaded classifier. One of the most important requirements for identifying MRI signatures for adenocarcinoma, BPH, atrophy, inflammation, and PIN is accurate mapping of the location and spatial extent of the confounder and cancer categories from ex vivo histopathology to MRI. Towards this end we employed an annotated prostatectomy data set of 31 patients, all of whom underwent a multi-parametric 3 Tesla MRI prior to radical prostatectomy. The prostatectomy slides were carefully co-registered to the corresponding MRI slices using an elastic registration technique. We extracted texture features from the T2-weighted imaging, pharmacokinetic features from the dynamic contrast enhanced imaging and diffusion features from the diffusion-weighted imaging for each of the confounder classes and prostate cancer. These features were selected because they form the mainstay of clinical diagnosis.
Relevant features for each of the classes were selected using maximum relevance minimum redundancy feature selection, allowing us to perform classifier independent feature selection. The selected features were then incorporated in a cascading classifier, which can focus on easier sub-tasks at each stage, leaving the more difficult classification tasks for later stages. Results show that distinct features are relevant for each of the benign classes, for example the fraction of extra-vascular, extra-cellular space in a voxel is a clear discriminator for inflammation. Furthermore, the cascaded classifier outperforms both multi-class and one-shot classifiers in overall accuracy for discriminating confounders from cancer: 0.76 versus 0.71 and 0.62.

  13. Hidden discriminative features extraction for supervised high-order time series modeling.

    PubMed

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features, ii) it results in effective, interpretable features that lead to improved classification and visualization, and iii) it reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue problem at each alternation step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) dataset that is modeled as channel×frequency bin×time frame and a microarray dataset that is modeled as gene×sample×time) were used for the evaluation of TDFE. The experimental results corroborate the advantages of the proposed method with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
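    The orthogonal Tucker decomposition underlying TDFE can be illustrated with a plain higher-order SVD: one orthonormal factor per mode obtained from the mode unfoldings, plus a core tensor (this sketch omits the discriminative scatter terms and shows only the decomposition machinery; the shapes are invented):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    Tm = np.moveaxis(T, mode, 0)
    return np.moveaxis(np.tensordot(M, Tm, axes=([1], [0])), 0, mode)

def hosvd(T):
    """Higher-order SVD: one orthonormal factor per mode plus a core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
               for m in range(T.ndim)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)
    return core, factors

# Toy channel x frequency-bin x time-frame tensor.
rng = np.random.default_rng(4)
T = rng.normal(size=(4, 5, 6))
core, factors = hosvd(T)

# Multiplying the core back by the factors recovers the original tensor.
recon = core
for m, U in enumerate(factors):
    recon = mode_dot(recon, U, m)
```

Truncating each factor to fewer columns yields the reduced-dimensionality subspaces that a supervised variant like TDFE then shapes with class-scatter objectives.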

  14. Ambulatory REACT: real-time seizure detection with a DSP microprocessor.

    PubMed

    McEvoy, Robert P; Faul, Stephen; Marnane, William P

    2010-01-01

    REACT (Real-Time EEG Analysis for event deteCTion) is a Support Vector Machine based technology which, in recent years, has been successfully applied to the problem of automated seizure detection in both adults and neonates. This paper describes the implementation of REACT on a commercial DSP microprocessor; the Analog Devices Blackfin®. The primary aim of this work is to develop a prototype system for use in ambulatory or in-ward automated EEG analysis. Furthermore, the complexity of the various stages of the REACT algorithm on the Blackfin processor is analysed; in particular the EEG feature extraction stages. This hardware profile is used to select a reduced, platform-aware feature set, in order to evaluate the seizure classification accuracy of a lower-complexity, lower-power REACT system.

  15. Decoding of finger trajectory from ECoG using deep learning.

    PubMed

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    Conventional decoding pipeline for brain-machine interfaces (BMIs) consists of chained different stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short term memory (LSTM) to address these needs. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and a RNN that was able to process sequential data and recognize temporal dynamics in the neural data. CNN was used as the feature extractor and LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest state. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layer and used LSTM layer to capture the state transitions. 
The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.

  16. Decoding of finger trajectory from ECoG using deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    Objective. Conventional decoding pipeline for brain-machine interfaces (BMIs) consists of chained different stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short term memory (LSTM) to address these needs. Approach. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and a RNN that was able to process sequential data and recognize temporal dynamics in the neural data. CNN was used as the feature extractor and LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. Main results. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest state. Significance. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. 
It integrated the feature extraction pipeline into the convolution and pooling layer and used LSTM layer to capture the state transitions. The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.

  17. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in the field of EEG signal-based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages involved are: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model to select the optimal feature subset based on the Kullback-Leibler divergence measure, and to automatically select the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing, feature extraction by an autoregressive model and log-variance, Kullback-Leibler divergence based optimal feature and time segment selection, and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that the proposed method yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
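    The Kullback-Leibler divergence-based feature scoring can be sketched in closed form if each class-conditional feature distribution is modeled as a univariate Gaussian (that modeling choice and the synthetic data are assumptions; the paper's full model also selects time segments):

```python
import numpy as np

def gaussian_kl(m0, v0, m1, v1):
    """KL divergence between univariate Gaussians N(m0, v0) and N(m1, v1)."""
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

def divergence_rank(X, y):
    """Rank features by symmetrized KL divergence between the two
    class-conditional Gaussian fits."""
    A, B = X[y == 0], X[y == 1]
    m0, v0 = A.mean(0), A.var(0) + 1e-12
    m1, v1 = B.mean(0), B.var(0) + 1e-12
    d = gaussian_kl(m0, v0, m1, v1) + gaussian_kl(m1, v1, m0, v0)
    return np.argsort(d)[::-1], d

rng = np.random.default_rng(5)
# Synthetic log-variance features from two motor-imagery classes;
# only feature 0 differs between the classes.
X = rng.normal(0, 1, (200, 6))
y = np.repeat([0, 1], 100)
X[y == 1, 0] += 3.0
order, div = divergence_rank(X, y)
```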

  18. Automated diagnosis of rolling bearings using MRA and neural networks

    NASA Astrophysics Data System (ADS)

    Castejón, C.; Lara, O.; García-Prada, J. C.

    2010-01-01

    Any industry needs an efficient predictive maintenance plan in order to optimize the management of resources and improve the economy of the plant by reducing unnecessary costs and increasing the level of safety. A great percentage of breakdowns in productive processes are caused by bearings. They begin to deteriorate from the early stages of their functional life, also called the incipient level. This manuscript develops an automated diagnosis of rolling bearings based on the analysis and classification of vibration signatures. The novelty of this work is the application of the proposed methodology to data collected from a quasi-real industrial machine, where rolling bearings support the radial and axial loads they are designed for. Multiresolution analysis (MRA) is used in a first stage in order to extract the most interesting features from the signals. These features are then used in a second stage as inputs of a supervised neural network (NN) for classification purposes. Experimental results carried out on a real system show the soundness of the method, which detects four bearing conditions (normal, inner race fault, outer race fault and ball fault) at a very incipient stage.
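    The first-stage MRA feature extraction can be sketched with a hand-rolled Haar decomposition: each level splits the signal into approximation and detail bands, and band energies serve as features for the second-stage network (the level count, energy features, and toy signal are assumptions):

```python
import numpy as np

def haar_mra_energies(x, levels=3):
    """Energies of the detail bands of a multi-level Haar decomposition,
    plus the final approximation band, as a compact feature vector."""
    x = np.asarray(x, dtype=float)
    feats = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half-band
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half-band
        feats.append(np.sum(detail ** 2))
        x = approx
    feats.append(np.sum(x ** 2))
    return np.array(feats), x

# Toy vibration signal: slow carrier plus a high-frequency "fault" component.
t = np.arange(1024)
sig = np.sin(2 * np.pi * t / 128) + 0.5 * np.sin(2 * np.pi * t / 4)
feats, approx = haar_mra_energies(sig, levels=3)
```

Because each Haar level is an orthonormal transform, the band energies partition the total signal energy exactly, so localized fault components show up in specific bands.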

  19. A novel approach for fire recognition using hybrid features and manifold learning-based classifier

    NASA Astrophysics Data System (ADS)

    Zhu, Rong; Hu, Xueying; Tang, Jiajun; Hu, Sheng

    2018-03-01

    Although image/video based fire recognition has received growing attention, an efficient and robust fire detection strategy is rarely explored. In this paper, we propose a novel approach to automatically identify the flame or smoke regions in an image. It is composed of three stages: (1) block processing is applied to divide an image into several non-overlapping image blocks, and these image blocks are identified as suspicious fire regions or not by using two color models and a color histogram-based similarity matching method in the HSV color space; (2) considering that, compared to other information, the flame and smoke regions have significant visual characteristics, two kinds of image features are extracted for fire recognition, where local features are obtained based on the Scale Invariant Feature Transform (SIFT) descriptor and the Bags of Keypoints (BOK) technique, and texture features are extracted based on the Gray Level Co-occurrence Matrices (GLCM) and the Wavelet-based Analysis (WA) methods; and (3) a manifold learning-based classifier is constructed based on two image manifolds, which is designed via an improved Globular Neighborhood Locally Linear Embedding (GNLLE) algorithm, and the extracted hybrid features are used as input feature vectors to train the classifier, which is used to decide between fire images and non-fire images. Experiments and comparative analyses with four approaches are conducted on the collected image sets. The results show that the proposed approach is superior to the other ones in detecting fire, achieving a high recognition accuracy and a low error rate.
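    The GLCM texture features used in stage (2) can be sketched from first principles: a co-occurrence table over gray-level pairs at a fixed offset, from which scalar statistics such as contrast are derived (the offset, level count, and toy patch are assumptions):

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to
    a joint probability table."""
    di, dj = offset
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(max(0, -di), min(h, h - di)):
        for j in range(max(0, -dj), min(w, w - dj)):
            P[img[i, j], img[i + di, j + dj]] += 1
    return P / P.sum()

def contrast(P):
    """GLCM contrast: expected squared gray-level difference of co-occurring pairs."""
    i, j = np.indices(P.shape)
    return np.sum(P * (i - j) ** 2)

# Toy 4-level texture patch versus a flat patch.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
P = glcm(patch, levels=4)
flat = glcm(np.zeros((4, 4), dtype=int), levels=4)
```

A textureless region concentrates all co-occurrence mass on the diagonal, so its contrast is zero; flame and smoke textures spread mass off-diagonal.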

  20. CCD-Based Skinning Injury Recognition on Potato Tubers (Solanum tuberosum L.): A Comparison between Visible and Biospeckle Imaging

    PubMed Central

    Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin

    2016-01-01

    Skinning injury on potato tubers is a superficial wound generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. The identification of skinning injury was realized by exploiting features extracted from varied ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, the calculation of BA using varied numbers of speckle patterns was compared. Finally, the extracted features were fed into LS-SVM (Least Square Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capture and processing can be sped up in biospeckle imaging, with the number of captured frames reduced from 512 to 125. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging at different stages: visible imaging is suited to recognizing stale skinning injury, while fresh injury can be discriminated by biospeckle imaging. PMID:27763555

  1. CCD-Based Skinning Injury Recognition on Potato Tubers (Solanum tuberosum L.): A Comparison between Visible and Biospeckle Imaging.

    PubMed

    Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin

    2016-10-18

    Skinning injury on potato tubers is a superficial wound generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. The identification of skinning injury was realized by exploiting features extracted from varied ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, the calculation of BA using varied numbers of speckle patterns was compared. Finally, the extracted features were fed into LS-SVM (Least Square Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capture and processing can be sped up in biospeckle imaging, with the number of captured frames reduced from 512 to 125. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging at different stages: visible imaging is suited to recognizing stale skinning injury, while fresh injury can be discriminated by biospeckle imaging.

  2. A classification of marked hijaiyah letters' pronunciation using hidden Markov model

    NASA Astrophysics Data System (ADS)

    Wisesty, Untari N.; Mubarok, M. Syahrul; Adiwijaya

    2017-08-01

    Hijaiyah letters are the 28 letters that form the words in the Qur'an; they symbolize the consonant sounds, while the vowel sounds are symbolized by harakat (marks). A speech recognition system processes a sound signal into data that can be recognized by a computer. Building such a system requires several stages, i.e., feature extraction and classification. In this research, LPC and MFCC feature extraction, K-means vector quantization, and Hidden Markov Model classification are used. The data consist of the 28 letters and 6 harakat, for a total of 168 classes. After several tests, it can be concluded that the system recognizes the pronunciation patterns of marked hijaiyah letters very well on the training data, with a highest accuracy of 96.1% using LPC features and 94% using MFCC. On the test data, however, the accuracy drops to 41%.
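    The vector-quantization step that turns LPC/MFCC frames into the discrete symbols an HMM consumes can be sketched as a plain k-means codebook. The codebook size, iteration count, and Euclidean distance below are assumptions for illustration; the paper's actual configuration is not stated in the abstract.

```python
import numpy as np

def train_codebook(frames, k=8, iters=20, seed=0):
    """K-means codebook over feature frames (e.g. LPC/MFCC vectors)."""
    rng = np.random.default_rng(seed)
    frames = np.asarray(frames, dtype=float)
    codebook = frames[rng.choice(len(frames), size=k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest codeword
        d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                codebook[c] = frames[labels == c].mean(axis=0)
    return codebook

def quantize(frames, codebook):
    """Map each frame to a discrete symbol index for the HMM."""
    d = np.linalg.norm(np.asarray(frames)[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)
```

The resulting symbol sequence per utterance is what a discrete HMM would be trained on, one model per class.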

  3. Applications of artificial intelligence to digital photogrammetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kretsch, J.L.

    1988-01-01

    The aim of this research was to explore the application of expert systems to digital photogrammetry, specifically to photogrammetric triangulation, feature extraction, and photogrammetric problem solving. In 1987, prototype expert systems were developed for performing system startup, interior orientation, and relative orientation in the mensuration stage. The system explored means of performing diagnostics during the process. In the area of feature extraction, the relationship of metric uncertainty to symbolic uncertainty was the topic of research. Error propagation through the Dempster-Shafer formalism for representing evidence was performed in order to find the variance in the calculated belief values due to errors in the measurements used to gather the initial evidence needed to begin labeling observed image features with features in an object model. In photogrammetric problem solving, an expert system is under continuous development which seeks to solve photogrammetric problems using mathematical reasoning. The key to the approach is the representation of knowledge directly in the form of equations, rather than as if-then rules; each variable in the equations is then treated as a goal to be solved.

  4. A real time ECG signal processing application for arrhythmia detection on portable devices

    NASA Astrophysics Data System (ADS)

    Georganis, A.; Doulgeraki, N.; Asvestas, P.

    2017-11-01

    Arrhythmia describes disorders of the normal heart rate which, depending on the case, can even be fatal for a patient with a severe history of heart disease. The purpose of this work is to develop an application for heart signal visualization, processing and analysis on Android portable devices, e.g. mobile phones, tablets, etc. The application retrieves the signal initially from a file; at a later stage the signal is processed and analysed within the device so that it can be classified according to the features of the arrhythmia. The processing and analysis stage includes several algorithms, among them the moving average and the Pan-Tompkins algorithm, as well as wavelets, in order to extract features and characteristics. At the final stage, the application is tested by simulation on real-time records, using the TCP network protocol for communication between the mobile device and a simulated signal source. Classification of the processed ECG beats is performed by neural networks.
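    The Pan-Tompkins stages mentioned above can be sketched in a simplified form (derivative, squaring, moving-window integration, thresholding). The window length and fixed threshold ratio below are illustrative assumptions; the real algorithm uses bandpass filtering and adaptive thresholds that the abstract does not detail.

```python
import numpy as np

def detect_qrs(ecg, fs=360, win_ms=150, thresh_ratio=0.5):
    """Simplified Pan-Tompkins pipeline: derivative -> squaring ->
    moving-window integration -> threshold on the resulting envelope."""
    ecg = np.asarray(ecg, dtype=float)
    deriv = np.diff(ecg, prepend=ecg[0])        # emphasise steep QRS slopes
    squared = deriv ** 2                        # make all peaks positive
    win = max(1, int(fs * win_ms / 1000))
    mwi = np.convolve(squared, np.ones(win) / win, mode="same")
    above = mwi > thresh_ratio * mwi.max()
    # rising edges of the thresholded envelope mark beat onsets
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Synthetic ECG-like train: one sharp spike per second for 5 s
fs = 360
sig = np.zeros(5 * fs)
sig[fs // 2::fs] = 1.0
beats = detect_qrs(sig, fs)
```

Counting the detected onsets per minute gives the heart rate that an arrhythmia classifier would inspect.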

  5. Boosting brain connectome classification accuracy in Alzheimer's disease using higher-order singular value decomposition

    PubMed Central

    Zhan, Liang; Liu, Yashu; Wang, Yalin; Zhou, Jiayu; Jahanshad, Neda; Ye, Jieping; Thompson, Paul M.

    2015-01-01

    Alzheimer's disease (AD) is a progressive brain disease. Accurate detection of AD and its prodromal stage, mild cognitive impairment (MCI), is crucial. There is also growing interest in identifying brain imaging biomarkers that help to automatically differentiate stages of Alzheimer's disease. Here, we focused on brain structural networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher-order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying different stages of Alzheimer's disease. PMID:26257601
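    The higher-order SVD step can be sketched as mode unfoldings followed by per-mode truncated SVDs. The tensor shape (subjects x regions x regions) and ranks below are hypothetical; the paper's actual data dimensions and rank choices are not given in the abstract.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: one factor matrix per mode + core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        # contract each mode with the transposed factor matrix
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Stack of 6 subjects' 8x8 connectivity matrices -> compressed features
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 8, 8))
core, factors = hosvd(X, ranks=(6, 4, 4))
features = core.reshape(6, -1)   # one low-dimensional vector per subject
```

The flattened core slices would then feed the sparse logistic regression classifier.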

  6. Adaptive fuzzy leader clustering of complex data sets in pattern recognition

    NASA Technical Reports Server (NTRS)

    Newton, Scott C.; Pemmaraju, Surya; Mitra, Sunanda

    1992-01-01

    A modular, unsupervised neural network architecture for clustering and classification of complex data sets is presented. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The initial classification is performed in two stages: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the fuzzy C-means equations for the centroids and the membership values. The AFLC algorithm is applied to the Anderson Iris data and to laser-luminescent fingerprint image data. It is concluded that the AFLC algorithm successfully classifies features extracted from real data, discrete or continuous.
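    The fuzzy C-means update equations referenced above have a standard closed form, sketched here in NumPy (one iteration; the fuzzifier m=2 and the toy data are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def fcm_update(X, centroids, m=2.0):
    """One fuzzy C-means iteration: membership update, then centroid update."""
    # distances of every point to every centroid
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)                          # avoid division by zero
    # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    u = 1.0 / ratio.sum(axis=2)
    # c_k = sum_i u_ik^m x_i / sum_i u_ik^m
    w = u ** m
    centroids = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, centroids

X = np.array([[0., 0.], [0.2, 0.], [5., 5.], [5.2, 5.]])
u, new_centroids = fcm_update(X, np.array([[0., 0.], [5., 5.]]))
```

In AFLC these updates run incrementally as new samples arrive, rather than in batch as shown here.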

  7. A Knowledge-Based Approach to Automatic Detection of Equipment Alarm Sounds in a Neonatal Intensive Care Unit Environment.

    PubMed

    Raboshchuk, Ganna; Nadeu, Climent; Jancovic, Peter; Lilja, Alex Peiro; Kokuer, Munevver; Munoz Mahamud, Blanca; Riverola De Veciana, Ana

    2018-01-01

    A large number of alarm sounds triggered by biomedical equipment occur frequently in the noisy environment of a neonatal intensive care unit (NICU) and play a key role in providing healthcare. In this paper, our work on the development of an automatic system for the detection of acoustic alarms in that difficult environment is presented. Such an automatic detection system is needed to investigate how a preterm infant reacts to the auditory stimuli of the NICU environment, and for improved real-time patient monitoring. The approach presented in this paper consists of using the available knowledge about each alarm class in the design of the detection system. The information about the frequency structure is used in the feature extraction stage, and the time structure knowledge is incorporated at the post-processing stage. Several alternative methods are compared for feature extraction, modeling, and post-processing. The detection performance is evaluated on real data recorded in the NICU of the hospital, using both frame-level and period-level metrics. The experimental results show that including both spectral and temporal information improves the baseline detection performance by more than 60%.

  8. A Knowledge-Based Approach to Automatic Detection of Equipment Alarm Sounds in a Neonatal Intensive Care Unit Environment

    PubMed Central

    Nadeu, Climent; Jančovič, Peter; Lilja, Alex Peiró; Köküer, Münevver; Muñoz Mahamud, Blanca; Riverola De Veciana, Ana

    2018-01-01

    A large number of alarm sounds triggered by biomedical equipment occur frequently in the noisy environment of a neonatal intensive care unit (NICU) and play a key role in providing healthcare. In this paper, our work on the development of an automatic system for the detection of acoustic alarms in that difficult environment is presented. Such an automatic detection system is needed to investigate how a preterm infant reacts to the auditory stimuli of the NICU environment, and for improved real-time patient monitoring. The approach presented in this paper consists of using the available knowledge about each alarm class in the design of the detection system. The information about the frequency structure is used in the feature extraction stage, and the time structure knowledge is incorporated at the post-processing stage. Several alternative methods are compared for feature extraction, modeling, and post-processing. The detection performance is evaluated on real data recorded in the NICU of the hospital, using both frame-level and period-level metrics. The experimental results show that including both spectral and temporal information improves the baseline detection performance by more than 60%. PMID:29404227

  9. Classification by diagnosing all absorption features (CDAF) for the most abundant minerals in airborne hyperspectral images

    NASA Astrophysics Data System (ADS)

    Mobasheri, Mohammad Reza; Ghamary-Asl, Mohsen

    2011-12-01

    Hyperspectral imaging is a powerful tool that can spectrally identify and spatially map materials based on their specific absorption characteristics in the electromagnetic spectrum. A robust method called Tetracorder has shown its effectiveness at material identification and mapping, using a set of algorithms within an expert-system decision-making framework. In this study, using some stages of Tetracorder, a technique called classification by diagnosing all absorption features (CDAF) is introduced. This technique assigns a class to the most abundant mineral in each pixel with high accuracy. It is based on deriving information from the reflectance spectra of the image: the spectral absorption features of the minerals are extracted from their respective laboratory-measured reflectance spectra and compared with those extracted from the image pixels. The CDAF technique has been executed on an AVIRIS image, where the results show an overall accuracy of better than 96%.
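    A basic absorption-feature measurement of the kind Tetracorder-style matching relies on is continuum-removed band depth. The sketch below uses a straight-line continuum between two shoulder wavelengths; the wavelengths and the synthetic spectrum are hypothetical, and the actual CDAF diagnostics are more elaborate.

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, center, right):
    """Depth of an absorption feature relative to a straight-line continuum
    drawn between the two shoulder wavelengths `left` and `right`."""
    wl = np.asarray(wavelengths, float)
    r = np.asarray(reflectance, float)
    r_l = np.interp(left, wl, r)
    r_r = np.interp(right, wl, r)
    # continuum reflectance at the band center (linear interpolation)
    t = (center - left) / (right - left)
    r_c = (1 - t) * r_l + t * r_r
    r_b = np.interp(center, wl, r)
    return 1.0 - r_b / r_c          # 0 = no absorption, -> 1 = deep feature

# Hypothetical spectrum with an absorption dip at 2.2 um
wl = np.linspace(2.0, 2.4, 81)
refl = 0.8 - 0.3 * np.exp(-((wl - 2.2) / 0.02) ** 2)
depth = band_depth(wl, refl, left=2.1, center=2.2, right=2.3)
```

Comparing such depths (and band positions/shapes) between library and pixel spectra is the essence of matching each pixel to its most abundant mineral.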

  10. Clinical state assessment in bipolar patients by means of HRV features obtained with a sensorized T-shirt.

    PubMed

    Mariani, Sara; Migliorini, Matteo; Tacchino, Giulia; Gentili, Claudio; Bertschy, Gilles; Werner, Sandra; Bianchi, Anna M

    2012-01-01

    The aim of this study is to identify parameters extracted from the Heart Rate Variability (HRV) signal that correlate with the clinical state of patients affected by bipolar disorder. Twenty-five ECG and activity recordings from 12 patients were obtained by means of a sensorized T-shirt, and the clinical state of the subjects was assessed by a psychiatrist. Features in the time and frequency domains were extracted from each signal. HRV features were also used to automatically compute the sleep profile of each subject by means of an Artificial Neural Network trained on a control group of healthy subjects. From the hypnograms, sleep-specific parameters were computed. All parameters were compared with those computed on the control group in order to highlight significant differences during different stages of the pathology. The analysis was performed by grouping the subjects first on the basis of the depression-mania level and then on the basis of the anxiety level.
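    The standard time-domain HRV features referred to above can be computed directly from RR intervals. This sketch shows the common set (mean RR, SDNN, RMSSD, pNN50); the abstract does not enumerate which features the authors actually used.

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Standard time-domain HRV features from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                       # mean RR interval
        "sdnn": rr.std(ddof=1),                     # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),       # short-term variability
        "pnn50": np.mean(np.abs(diff) > 50) * 100,  # % successive diffs > 50 ms
    }

f = hrv_time_features([800] * 10)   # perfectly regular rhythm
```

Frequency-domain features (LF/HF power) would be added from a spectral estimate of the same RR series.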

  11. Texture Classification by Texton: Statistical versus Binary

    PubMed Central

    Guo, Zhenhua; Zhang, Zhongcheng; Li, Xiu; Li, Qin; You, Jane

    2014-01-01

    Using statistical textons for texture classification has shown great success recently. The maximal response 8 (Statistical_MR8), image patch (Statistical_Joint) and locally invariant fractal (Statistical_Fractal) are typical statistical texton algorithms and state-of-the-art texture classification methods. However, these methods have two limitations. First, they need a training stage to build a texton library, so the recognition accuracy is highly dependent on the training samples; second, during feature extraction, each local feature is assigned to a texton by searching for the nearest texton in the whole library, which is time-consuming when the library is large and the feature dimension is high. To address these two issues, this paper proposes three binary texton counterpart methods: Binary_MR8, Binary_Joint, and Binary_Fractal. These methods do not require any training step but encode local features directly into a binary representation. The experimental results on the CUReT, UIUC and KTH-TIPS databases show that binary textons give sound results with fast feature extraction, especially when the images are not large and their quality is not poor. PMID:24520346
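    The key idea, encoding a local feature into a binary code with no texton library, is the same one behind the classic local binary pattern. The 3x3 LBP below is a generic illustration of training-free binary encoding, not a reproduction of the paper's Binary_MR8/Binary_Joint/Binary_Fractal encodings.

```python
import numpy as np

def lbp_codes(img):
    """3x3 local binary pattern: threshold the 8 neighbours at the center
    value and pack the bits into a code in [0, 255] -- no training needed."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (di, dj) in enumerate(offsets):
        nb = img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img):
    """Texture descriptor: normalised histogram of the 256 binary codes."""
    h = np.bincount(lbp_codes(img).ravel(), minlength=256)
    return h / h.sum()
```

The histogram is the image's texture descriptor; no nearest-texton search is needed, which is exactly the speed advantage the paper exploits.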

  12. Using Saliency-Weighted Disparity Statistics for Objective Visual Comfort Assessment of Stereoscopic Images

    NASA Astrophysics Data System (ADS)

    Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing

    2016-06-01

    Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in the 3D quality-of-experience research field. Although subjective assessment by human observers is known as the most reliable way to evaluate experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of the human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we construct an adaptive 3D visual saliency detection model to derive the saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined into a single feature vector representing the stereoscopic image in terms of visual comfort. In the second stage, this feature vector is mapped to a single visual comfort score by applying a random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
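    Saliency-weighted disparity statistics of the kind computed in the first stage can be sketched as weighted moments of the disparity map. The particular statistics below (weighted mean, std, range, crossed-disparity share) are plausible examples, not necessarily the paper's exact feature set.

```python
import numpy as np

def weighted_disparity_stats(disparity, saliency):
    """Saliency-weighted statistics of a disparity map: salient regions
    contribute more, mirroring where viewers actually look."""
    d = np.asarray(disparity, float).ravel()
    w = np.asarray(saliency, float).ravel()
    w = w / w.sum()
    mean = np.sum(w * d)
    var = np.sum(w * (d - mean) ** 2)
    return {
        "w_mean": mean,
        "w_std": np.sqrt(var),
        "w_range": d[w > 0].max() - d[w > 0].min(),
        "p_uncross": np.sum(w[d < 0]),   # weighted share of crossed disparity
    }

disp = np.array([[-1., 1.], [3., 5.]])
sal = np.ones((2, 2))                    # uniform saliency -> plain statistics
stats = weighted_disparity_stats(disp, sal)
```

Concatenating such statistics forms the feature vector passed to the random forest in the second stage.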

  13. Quantification of CT images for the classification of high- and low-risk pancreatic cysts

    NASA Astrophysics Data System (ADS)

    Gazit, Lior; Chakraborty, Jayasree; Attiyeh, Marc; Langdon-Embry, Liana; Allen, Peter J.; Do, Richard K. G.; Simpson, Amber L.

    2017-03-01

    Pancreatic cancer is the most lethal cancer, with an overall 5-year survival rate of 7% [1] due to the late stage at diagnosis and the ineffectiveness of current therapeutic strategies. Given the poor prognosis, early detection at a pre-cancerous stage is the best tool for preventing this disease. Intraductal papillary mucinous neoplasms (IPMN), cystic tumors of the pancreas, represent the only radiographically identifiable precursor lesion of pancreatic cancer and are known to evolve stepwise from low-to-high-grade dysplasia before progressing into an invasive carcinoma. Observation is usually recommended for low-risk (low- and intermediate-grade dysplasia) patients, while high-risk (high-grade dysplasia and invasive carcinoma) patients undergo resection; hence, patient selection is critically important in the management of pancreatic cysts [2]. Radiologists use standard criteria such as main pancreatic duct size, cyst size, or presence of a solid enhancing component in the cyst to optimally select patients for surgery [3]. However, these findings are subject to a radiologist's interpretation and have been shown to be inconsistent with regard to the presence of a mural nodule or solid component [4]. We propose objective classification of risk groups based on quantitative imaging features extracted from CT scans. We apply new features that represent the solid component (i.e. areas of high intensity) within the cyst and extract standard texture features. An adaptive boost classifier [5] achieves the best performance with area under the receiver operating characteristic curve (AUC) of 0.73 and accuracy of 77.3% for texture features. The random forest classifier achieves the best performance with AUC of 0.71 and accuracy of 70.8% with the solid component features.

  14. Tropical Timber Identification using Backpropagation Neural Network

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Andayani, U.; Fatihah, N.; Hakim, L.; Fahmi, F.

    2017-01-01

    Each type of wood has different characteristics, and identifying the type of wood properly is important, especially for industries that need to know the timber type specifically. However, identification requires expertise, and only a limited number of experts are available. In addition, manual identification even by experts is rather inefficient, because it requires a lot of time and is subject to human error. To overcome these problems, a digital image-based method to identify the type of timber automatically is needed. In this study, a backpropagation neural network is used as the artificial intelligence component. Several stages were developed: microscope image acquisition, pre-processing, feature extraction using the gray level co-occurrence matrix, and normalization of the extracted features using decimal scaling. The results showed that the proposed method was able to identify the timber with an accuracy of 94%.
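    The decimal-scaling normalization named above has a simple closed form: divide each feature column by the smallest power of ten that brings its magnitude below one. A minimal NumPy sketch:

```python
import numpy as np

def decimal_scaling(features):
    """Decimal-scaling normalisation: divide each feature column by the
    smallest power of ten that makes every magnitude in it less than 1."""
    X = np.asarray(features, dtype=float)
    max_abs = np.abs(X).max(axis=0)
    # j = floor(log10(max|x|)) + 1, never negative (don't scale up)
    j = np.floor(np.log10(np.fmax(max_abs, 1e-12))) + 1
    j = np.fmax(j, 0.0)
    return X / (10.0 ** j)
```

This keeps all GLCM-derived features on a comparable scale before they enter the backpropagation network.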

  15. Extraction and Classification of Human Gait Features

    NASA Astrophysics Data System (ADS)

    Ng, Hu; Tan, Wooi-Haw; Tong, Hau-Lee; Abdullah, Junaidi; Komiya, Ryoichi

    In this paper, a new approach is proposed for extracting human gait features from a walking human based on silhouette images. The approach consists of six stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying a morphological skeleton to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottom of the right leg and the left leg from the body segment skeletons. The joint angles and step size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results demonstrate that the proposed system is feasible and achieves satisfactory results.
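    The width/height measurement stage reduces to the bounding extent of the foreground pixels in the binary silhouette mask, sketched here (a minimal version; the paper's measurement after noise cleaning may differ in detail):

```python
import numpy as np

def silhouette_extent(mask):
    """Width and height of a binary human-silhouette mask: the bounding
    extent of the foreground (True) pixels."""
    mask = np.asarray(mask, dtype=bool)
    rows = np.flatnonzero(mask.any(axis=1))   # rows containing foreground
    cols = np.flatnonzero(mask.any(axis=0))   # columns containing foreground
    height = rows[-1] - rows[0] + 1
    width = cols[-1] - cols[0] + 1
    return width, height
```

Tracked over a gait cycle, the width oscillates with the stride while the height stays roughly constant, which is what makes these two measures useful gait features.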

  16. Chemical-induced disease relation extraction with various linguistic features.

    PubMed

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. Manually mining these relations from the biomedical literature is costly and time-consuming, and such a procedure is difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task of the BioCreative-V challenge. We built a machine learning based system that utilizes simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH) controlled vocabulary were also employed during both the training and testing stages to obtain more accurate classification models and better extraction performance. We reduced relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing; two classification models at the two levels were then trained on the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire the final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations. Database URL: https://github.com/JHnlp/BC5CIDTask. © The Author(s) 2016. Published by Oxford University Press.

  17. Bearing performance degradation assessment based on a combination of empirical mode decomposition and k-medoids clustering

    NASA Astrophysics Data System (ADS)

    Rai, Akhand; Upadhyay, S. H.

    2017-09-01

    The bearing is the most critical component in rotating machinery, since it is the most susceptible to failure. Monitoring degradation in bearings is therefore of great concern for averting sudden machinery breakdown. In this study, a novel method for bearing performance degradation assessment (PDA) based on a combination of empirical mode decomposition (EMD) and k-medoids clustering is proposed. Fault features are extracted from the bearing signals using the EMD process. The extracted features are then subjected to k-medoids clustering to obtain the normal-state and failure-state cluster centres. A confidence value (CV) curve based on the dissimilarity of the test data object to the normal state is obtained and employed as the degradation indicator for assessing the health of bearings. The proposed approach is applied to vibration signals collected in run-to-failure tests of bearings to assess its effectiveness in bearing PDA. To validate the superiority of the suggested approach, it is compared with the commonly used time-domain features RMS and kurtosis, the well-known fault diagnosis method of envelope analysis (EA), and existing PDA classifiers, i.e. self-organizing maps (SOM) and fuzzy c-means (FCM). The results demonstrate that the recommended method outperforms the time-domain features and the SOM- and FCM-based PDA in detecting early-stage degradation more precisely. Moreover, EA can be used as an accompanying method to confirm the early-stage defects detected by the proposed bearing PDA approach. The study shows the potential of k-medoids clustering as an effective tool for PDA of bearings.
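    The two-cluster k-medoids step and a dissimilarity-based confidence value can be sketched as below. The deterministic farthest-pair initialisation and the particular CV formula (distance ratio to the two medoids) are assumptions for illustration; the paper's exact CV definition may differ.

```python
import numpy as np

def two_state_medoids(X, iters=20):
    """K-medoids with k=2 (normal vs failure state), using a deterministic
    farthest-pair initialisation and PAM-style medoid updates."""
    X = np.asarray(X, float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise
    medoid_idx = np.array(np.unravel_index(D.argmax(), D.shape))
    for _ in range(iters):
        labels = D[:, medoid_idx].argmin(axis=1)
        for c in (0, 1):
            members = np.flatnonzero(labels == c)
            if members.size:
                # medoid = member minimising total distance within its cluster
                costs = D[np.ix_(members, members)].sum(axis=0)
                medoid_idx[c] = members[costs.argmin()]
    return X[medoid_idx[0]], X[medoid_idx[1]], labels

def confidence_value(x, normal_medoid, failure_medoid):
    """Degradation indicator in [0, 1]: near 1 while close to the normal-state
    medoid, falling towards 0 as the object drifts to the failure state."""
    dn = np.linalg.norm(x - normal_medoid)
    df = np.linalg.norm(x - failure_medoid)
    return df / (dn + df)

# Run-to-failure toy data: early (healthy) samples, then failed samples
X = np.vstack([np.zeros((5, 2)), 10.0 * np.ones((5, 2))])
m0, m1, labels = two_state_medoids(X)
# the cluster holding the earliest samples is taken as the normal state
normal, failure = (m0, m1) if labels[0] == 0 else (m1, m0)
cv_healthy = confidence_value(X[0], normal, failure)
cv_failed = confidence_value(X[-1], normal, failure)
```

Plotting the CV over the run-to-failure sequence yields the degradation curve used as the health indicator.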

  18. Assessment of Homomorphic Analysis for Human Activity Recognition from Acceleration Signals.

    PubMed

    Vanrell, Sebastian Rodrigo; Milone, Diego Humberto; Rufiner, Hugo Leonardo

    2017-07-03

    Unobtrusive activity monitoring can provide valuable information for medical and sports applications. In recent years, human activity recognition has moved to wearable sensors to deal with unconstrained scenarios. Accelerometers are the preferred sensors due to their simplicity and availability. Previous studies have examined several classic techniques for extracting features from acceleration signals, including time-domain, time-frequency, frequency-domain, and other heuristic features. Spectral and temporal features are the preferred ones and they are generally computed from acceleration components, leaving the acceleration magnitude potential unexplored. In this study, based on homomorphic analysis, a new type of feature extraction stage is proposed in order to exploit discriminative activity information present in acceleration signals. Homomorphic analysis can isolate the information about whole body dynamics and translate it into a compact representation, called cepstral coefficients. Experiments have explored several configurations of the proposed features, including size of representation, signals to be used, and fusion with other features. Cepstral features computed from acceleration magnitude obtained one of the highest recognition rates. In addition, a beneficial contribution was found when time-domain and moving pace information was included in the feature vector. Overall, the proposed system achieved a recognition rate of 91.21% on the publicly available SCUT-NAA dataset. To the best of our knowledge, this is the highest recognition rate on this dataset.
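    Cepstral coefficients of the acceleration magnitude can be sketched via the real cepstrum: the inverse FFT of the log-magnitude spectrum. The coefficient count and mean removal below are illustrative choices; the paper's exact windowing and configuration are not given in the abstract.

```python
import numpy as np

def cepstral_features(acc_xyz, n_coeffs=12):
    """Cepstral coefficients of the acceleration magnitude: the real
    cepstrum compacts whole-body movement dynamics into a few numbers."""
    acc = np.asarray(acc_xyz, dtype=float)
    magnitude = np.linalg.norm(acc, axis=1)        # orientation-independent
    spectrum = np.abs(np.fft.rfft(magnitude - magnitude.mean()))
    log_spec = np.log(spectrum + 1e-12)            # avoid log(0)
    cepstrum = np.fft.irfft(log_spec)
    return cepstrum[:n_coeffs]                     # keep the low quefrencies

rng = np.random.default_rng(2)
acc = rng.standard_normal((256, 3))                # stand-in for a sensor window
fv = cepstral_features(acc)
```

Using the magnitude rather than the individual axes is what makes the representation insensitive to sensor orientation.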

  19. Computer extracted texture features on T2w MRI to predict biochemical recurrence following radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Ginsburg, Shoshana B.; Rusu, Mirabela; Kurhanewicz, John; Madabhushi, Anant

    2014-03-01

    In this study we explore the ability of a novel machine learning approach, in conjunction with computer-extracted features describing prostate cancer morphology on pre-treatment MRI, to predict whether a patient will develop biochemical recurrence within ten years of radiation therapy. Biochemical recurrence, which is characterized by a rise in serum prostate-specific antigen (PSA) of at least 2 ng/mL above the nadir PSA, is associated with increased risk of metastasis and prostate cancer-related mortality. Currently, risk of biochemical recurrence is predicted by the Kattan nomogram, which incorporates several clinical factors to predict the probability of recurrence-free survival following radiation therapy (but has limited prediction accuracy). Semantic attributes on T2w MRI, such as the presence of extracapsular extension and seminal vesicle invasion and surrogate measurements of tumor size, have also been shown to be predictive of biochemical recurrence risk. While the correlation between biochemical recurrence and factors like tumor stage, Gleason grade, and extracapsular spread is well-documented, it is less clear how to predict biochemical recurrence in the absence of extracapsular spread and for small tumors fully contained in the capsule. Computer-extracted texture features, which quantitatively describe tumor micro-architecture and morphology on MRI, have been shown to provide clues about a tumor's aggressiveness. However, while computer-extracted features have been employed for predicting cancer presence and grade, they have not been evaluated in the context of predicting risk of biochemical recurrence. This work seeks to evaluate the role of computer-extracted texture features in predicting risk of biochemical recurrence on a cohort of sixteen patients who underwent pre-treatment 1.5 Tesla (T) T2w MRI. We extract a combination of first-order statistical, gradient, co-occurrence, and Gabor wavelet features from T2w MRI. To identify which of these T2w MRI texture features are potential independent prognostic markers of PSA failure, we implement a partial least squares (PLS) method to embed the data in a low-dimensional space and then use the variable importance in projections (VIP) method to quantify the contributions of individual features to classification on the PLS embedding. In spite of the poor resolution of the 1.5 T MRI data, we are able to identify three Gabor wavelet features that, in conjunction with a logistic regression classifier, yield an area under the receiver operating characteristic curve of 0.83 for predicting the probability of biochemical recurrence following radiation therapy. In comparison to both the Kattan nomogram and semantic MRI attributes, the ability of these three computer-extracted features to predict biochemical recurrence risk is demonstrated.

  20. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    NASA Astrophysics Data System (ADS)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-02-01

    The medical field has seen phenomenal improvement over the previous years. The invention of computers, with appropriate increases in processing and internet speed, has changed the face of medical technology. However, there is still scope for improvement of the technologies in use today. One of many such technologies of medical aid is the detection of afflictions of the eye. Although a repertoire of research has been accomplished in this field, most of it fails to address how to take the detection forward to a stage where it will be beneficial to society at large. An automated system that can predict the current medical condition of a patient after taking the fundus image of his eye is yet to see the light of day. Such a system is explored in this paper by summarizing a number of techniques for fundus image feature extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval to develop an automation tool. This knowledge would bring about worthy changes in the domain of exudate extraction of the eye, which is essential in cases where patients may not have access to the best technologies. This paper attempts a comprehensive summary of techniques for Content Based Image Retrieval (CBIR) and fundus image feature extraction, a few choice methods of both, and an exploration that aims to find ways to combine the two so that the result is beneficial to all.

  2. Automatic exudate detection by fusing multiple active contours and regionwise classification.

    PubMed

    Harangi, Balazs; Hajdu, Andras

    2014-11-01

    In this paper, we propose a method for the automatic detection of exudates in digital fundus images. Our approach can be divided into three stages: candidate extraction, precise contour segmentation and the labeling of candidates as true or false exudates. For candidate detection, we borrow a grayscale morphology-based method to identify possible regions containing these bright lesions. Then, to extract the precise boundary of the candidates, we introduce a complex active contour-based method. Namely, to increase the accuracy of segmentation, we extract additional possible contours by taking advantage of the diverse behavior of different pre-processing methods. After selecting an appropriate combination of the extracted contours, a region-wise classifier is applied to remove the false exudate candidates. For this task, we consider several region-based features, and extract an appropriate feature subset to train a Naïve-Bayes classifier optimized further by an adaptive boosting technique. The method was tested on publicly available databases, both to measure the accuracy of exudate region segmentation and to recognize the presence of exudates at image level. In this quantitative evaluation, the proposed approach outperformed several state-of-the-art exudate detection algorithms. Copyright © 2014 Elsevier Ltd. All rights reserved.
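
    The final region-wise labeling step above uses a Naive Bayes classifier. A minimal Gaussian Naive Bayes in NumPy (a generic sketch; the paper's region features and the adaptive-boosting refinement are not reproduced here) looks like:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes for labeling candidate regions as
    true/false lesions. Illustrative sketch; features are hypothetical."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # log P(c) + sum_j log N(x_j | mu_cj, var_cj), argmax over classes
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=-1)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]
```

    In the paper's setting each row of X would hold the region-based features of one exudate candidate.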

  3. Diabetic retinopathy grading by digital curvelet transform.

    PubMed

    Hajeb Mohammad Alipour, Shirin; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2012-01-01

    One of the major complications of diabetes is diabetic retinopathy. As manual analysis and diagnosis of large amounts of images are time consuming, automatic detection and grading of diabetic retinopathy are desired. In this paper, we use fundus fluorescein angiography and color fundus images simultaneously, extract 6 features employing the curvelet transform, and feed them to a support vector machine in order to determine diabetic retinopathy severity stages. These features are the area of blood vessels; the area and regularity of the foveal avascular zone, and the number of microaneurysms therein; the total number of microaneurysms; and the area of exudates. In order to extract exudates and vessels, we respectively modify curvelet coefficients of color fundus images and angiograms. The end points of extracted vessels in a predefined region of interest based on the optic disk are connected together to segment the foveal avascular zone region. To extract microaneurysms from the angiogram, extracted vessels are first subtracted from the original image; after removing the detected background by morphological operators and enhancing bright small pixels, microaneurysms are detected. Seventy patients were involved in this study to classify diabetic retinopathy into 3 groups, that is, (1) no diabetic retinopathy, (2) mild/moderate nonproliferative diabetic retinopathy, (3) severe nonproliferative/proliferative diabetic retinopathy, and our simulations show that the proposed system has sensitivity and specificity of 100% for grading.

  4. Automatic detection of multi-level acetowhite regions in RGB color images of the uterine cervix

    NASA Astrophysics Data System (ADS)

    Lange, Holger

    2005-04-01

    Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method used to detect cancer precursors and cancer of the uterine cervix, whereby a physician (colposcopist) visually inspects the metaplastic epithelium on the cervix for certain distinctly abnormal morphologic features. A contrast agent, a 3-5% acetic acid solution, is used, causing abnormal and metaplastic epithelia to turn white. The colposcopist considers diagnostic features such as the acetowhite, blood vessel structure, and lesion margin to derive a clinical diagnosis. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD, a complex image analysis system that at its core assesses the same visual features as used by colposcopists. The acetowhite feature has been identified as one of the most important individual predictors of lesion severity. Here, we present the details and preliminary results of a multi-level acetowhite region detection algorithm for RGB color images of the cervix, including the detection of the anatomic features: cervix, os and columnar region, which are used for the acetowhite region detection. The RGB images are assumed to be glare free, either obtained by cross-polarized image acquisition or glare removal pre-processing. The basic approach of the algorithm is to extract a feature image from the RGB image that provides a good acetowhite to cervix background ratio, to segment the feature image using novel pixel grouping and multi-stage region-growing algorithms that provide region segmentations with different levels of detail, to extract the acetowhite regions from the region segmentations using a novel region selection algorithm, and then finally to extract the multi-levels from the acetowhite regions using multiple thresholds. The performance of the algorithm is demonstrated using human subject data.

  5. Research of seafloor topographic analyses for a staged mineral exploration

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Kadoshima, K.; Koizumi, Y.; Yamakawa, T.; Asakawa, E.; Sumi, T.; Kose, M.

    2016-12-01

    J-MARES (Research and Development Partnership for Next Generation Technology of Marine Resources Survey, JAPAN) has been designing a low-cost and high-efficiency exploration system for seafloor hydrothermal massive sulfide (SMS) deposits under the "Cross-ministerial Strategic Innovation Promotion Program (SIP)" granted by the Cabinet Office, Government of Japan, since 2014. We proposed a multi-stage approach that proceeds from a regional-scale survey stage, through a semi-detail scale, to a detail-scale survey stage, focusing on a prospective area through seafloor topographic analyses. We applied this method to an area of more than 100 km x 100 km around the Okinawa Trough, including some well-known mineralized deposits. In the regional-scale survey, we assume survey areas of more than 100 km x 100 km, so the spatial resolution of the topography data is coarser than 100 m. The 500 m resolution data, interpolated to 250 m resolution, were used for extracting depressions and for performing principal component analysis (PCA) on the wavelengths obtained from frequency analysis. As a result, we successfully extracted areas whose topographic features are quite similar to those of well-known mineralized deposits. In the semi-detail survey stage, we use topography data obtained by bathymetric survey with a multi-narrow-beam echo-sounder. The 30 m resolution data were used for extracting depressions, relatively large mounds, hills, and lineaments such as faults, and also for performing frequency analysis. As a result, the wavelengths forming the principal components of the target area were extracted by PCA of the wavelengths obtained from frequency analysis. A color image was then composited from the second principal component (PC2) to the fourth principal component (PC4), in which the continuity of specific wavelengths was observed, consistent with the extracted lineaments. In addition, well-known mineralized deposits fell into the same clusters under clustering of PC2 to PC4. We applied the results described above to a new area and successfully extracted a quite similar area in the vicinity of one of the well-known mineralized deposits, so we are going to verify the extracted areas using geophysical methods, such as vertical cable seismic and time-domain EM surveys, developed in this SIP project.
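
    The PCA step underlying both survey stages (principal components of wavelength features, with PC2-PC4 composited into a color image) can be sketched as follows; the (pixels x wavelength-features) input layout is an assumption for illustration:

```python
import numpy as np

def principal_components(F, n_comp=4):
    """PCA via SVD on standardized wavelength features.

    Sketch of the PCA step; F is assumed to be a (pixels x
    wavelength-features) matrix from the frequency analysis."""
    Z = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-12)  # standardize columns
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U[:, :n_comp] * s[:n_comp]                 # PC scores per pixel
    explained = s**2 / (s**2).sum()                     # variance ratios
    return scores, explained
```

    To mimic the composite described above, the PC2-PC4 score columns could each be min-max scaled to [0, 255] and stacked as RGB channels.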

  6. Can home-monitoring of sleep predict depressive episodes in bipolar patients?

    PubMed

    Migliorini, M; Mariani, S; Bertschy, G; Kosel, M; Bianchi, A M

    2015-08-01

    The aim of this study is the evaluation of autonomic regulation during depressive stages in bipolar patients, in order to test new quantitative and objective measures for detecting such events. A sensorized T-shirt was used to record the ECG signal and body movements during the night, from which HRV data and sleep macrostructure were estimated and analyzed. 9 of the 20 extracted features proved significant (p < 0.05) in discriminating between depressive and non-depressive states. These features represent HRV dynamics in both the linear and non-linear domains, together with parameters linked to sleep modulation.
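
    The record does not itemize the HRV features, but commonly used linear and non-linear HRV descriptors can be computed from an RR-interval series like this (a generic sketch, not the study's exact feature set):

```python
import numpy as np

def hrv_features(rr_ms):
    """Common linear (SDNN, RMSSD) and non-linear (Poincare SD1/SD2) HRV
    descriptors of the kind extracted from nightly ECG recordings.
    rr_ms: successive RR intervals in milliseconds (hypothetical input)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                              # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))                # short-term variability
    sd1 = np.sqrt(np.var(diff, ddof=1) / 2.0)          # Poincare plot width
    sd2 = np.sqrt(2.0 * np.var(rr, ddof=1) - sd1 ** 2) # Poincare plot length
    return {"SDNN": sdnn, "RMSSD": rmssd, "SD1": sd1, "SD2": sd2}
```

    In practice such descriptors would be computed per night and compared between depressive and non-depressive periods.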

  7. Categorization for Faces and Tools—Two Classes of Objects Shaped by Different Experience—Differs in Processing Timing, Brain Areas Involved, and Repetition Effects

    PubMed Central

    Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A.

    2018-01-01

    The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se, or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140–170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210–220 ms window and was located to the intraparietal sulcus of the left hemisphere. 
Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions. Furthermore, we hypothesized and tested whether activity within face and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following the stimuli repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with visual processing of faces and tools during the categorization stage differ in processing timing, brain areas involved, and in their dynamics underlying stimuli learning. PMID:29379426

  9. An integrated multi-sensor fusion-based deep feature learning approach for rotating machinery diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Hu, Youmin; Wang, Yan; Wu, Bo; Fan, Jikai; Hu, Zhongxu

    2018-05-01

    The diagnosis of complicated fault severity problems in rotating machinery systems is an important issue that affects the productivity and quality of manufacturing processes and industrial applications. However, it usually suffers from several deficiencies. (1) A considerable degree of prior knowledge and expertise is required not only to extract and select specific features from raw sensor signals, but also to choose a suitable fusion method for the sensor information. (2) Traditional artificial neural networks with shallow architectures are usually adopted, and they have a limited ability to learn complex and variable operating conditions. In multi-sensor-based diagnosis applications in particular, massive high-dimensional and high-volume raw sensor signals need to be processed. In this paper, an integrated multi-sensor fusion-based deep feature learning (IMSFDFL) approach is developed to identify the fault severity in rotating machinery processes. First, traditional statistics and energy spectrum features are extracted from multiple sensors with multiple channels and combined. Then, a fused feature vector is constructed from all of the acquisition channels. Further, deep feature learning with stacked auto-encoders is used to obtain the deep features. Finally, the traditional softmax model is applied to identify the fault severity. The effectiveness of the proposed IMSFDFL approach is primarily verified by a one-stage gearbox experimental platform that uses several accelerometers under different operating conditions. This approach can identify fault severity more effectively than the traditional approaches.
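
    The deep-feature stage relies on stacked auto-encoders, each layer of which is a single auto-encoder. A toy tied-weight sigmoid auto-encoder trained by batch gradient descent in NumPy (illustrative only, not the paper's architecture or hyperparameters) can be sketched as:

```python
import numpy as np

def train_autoencoder(X, n_hidden=4, epochs=300, lr=0.5, seed=0):
    """Train one tied-weight sigmoid auto-encoder layer by batch gradient
    descent on mean squared reconstruction error. Toy sketch of the building
    block that stacked auto-encoders repeat; expects X scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(d)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        H = sig(X @ W + b)                      # encode
        R = sig(H @ W.T + c)                    # decode (tied weights)
        E = R - X
        losses.append(float(np.mean(E ** 2)))
        dZ2 = E * R * (1.0 - R)                 # backprop through decoder sigmoid
        dZ1 = (dZ2 @ W) * H * (1.0 - H)         # backprop through encoder sigmoid
        W -= lr * (X.T @ dZ1 + dZ2.T @ H) / n   # tied-weight gradient
        b -= lr * dZ1.sum(axis=0) / n
        c -= lr * dZ2.sum(axis=0) / n
    return W, b, c, losses
```

    The learned code sig(X @ W + b) would then serve as the deep features fed to the next layer of the stack or to the softmax classifier.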

  10. Neonatal Jaundice Detection System.

    PubMed

    Aydın, Mustafa; Hardalaç, Fırat; Ural, Berkan; Karap, Serhat

    2016-07-01

    Neonatal jaundice is a common condition that occurs in newborn infants in the first week of life. Techniques currently used for detection require blood samples and other clinical testing with special equipment. The aim of this study is to create a non-invasive system that monitors and detects jaundice periodically and helps doctors with early diagnosis. In this work, first, a patient group of jaundiced babies and a control group of healthy babies were prepared; between 24 and 48 h after birth, 40 jaundiced and 40 healthy newborns were chosen. Second, advanced image processing techniques were applied to images taken with a standard smartphone and a color calibration card. Segmentation, pixel similarity and white balancing were used as the image processing techniques, and RGB values and important pixel information were obtained. Third, during the feature extraction stage, using colormap transformations and feature calculation, color change values were compared in the RGB plane against a specially designed 8-color calibration card. Finally, in the bilirubin level estimation stage, kNN and SVR machine learning regressions were applied to the dataset obtained from feature extraction. At the end of the process, with the control group as the baseline for comparison, jaundice was successfully detected in the 40 jaundiced infants with a success rate of 85%. The obtained bilirubin estimates are consistent with the bilirubin results obtained from the standard blood test, with a compliance rate of 85%.
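
    The bilirubin-estimation stage uses kNN regression, which in its simplest form averages the target values of the k closest training samples. A minimal sketch (the paper's features, distance metric, and value of k are not specified here):

```python
import numpy as np

def knn_regress(X_train, y_train, x_query, k=3):
    """k-nearest-neighbour regression: predict the mean bilirubin level of
    the k training samples closest in feature space. Generic sketch."""
    d = np.linalg.norm(X_train - x_query, axis=1)   # Euclidean distances
    nearest = np.argsort(d)[:k]                     # indices of k neighbours
    return y_train[nearest].mean()                  # average their targets
```

    Here X_train would hold the color-change feature vectors and y_train the blood-test bilirubin levels.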

  11. Optimization of Adaboost Algorithm for Sonar Target Detection in a Multi-Stage ATR System

    NASA Technical Reports Server (NTRS)

    Lin, Tsung Han (Hank)

    2011-01-01

    JPL has developed a multi-stage Automated Target Recognition (ATR) system to locate objects in images. First, input images are preprocessed and sent to a Grayscale Optical Correlator (GOC) filter to identify possible regions-of-interest (ROIs). Second, feature extraction operations are performed using Texton filters and Principal Component Analysis (PCA). Finally, the features are fed to a classifier, to identify ROIs that contain the targets. Previous work used the Feed-forward Back-propagation Neural Network for classification. In this project we investigate a version of Adaboost as a classifier for comparison. The version we used is known as GentleBoost. We used the boosted decision tree as the weak classifier. We have tested our ATR system against real-world sonar images using the Adaboost approach. Results indicate an improvement in performance over a single Neural Network design.
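
    GentleBoost fits a regression-style weak learner to the ±1 labels at each round by weighted least squares and reweights samples multiplicatively. A sketch with regression stumps as the weak learner (the project's weak classifier was a boosted decision tree, so this is a simplified stand-in):

```python
import numpy as np

def gentleboost(X, y, rounds=10):
    """GentleBoost with weighted-least-squares regression stumps.
    y must be in {-1, +1}. Textbook sketch, not the project's code."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(rounds):
        best = None
        for j in range(d):
            for t in np.unique(X[:, j]):
                left = X[:, j] <= t
                wl, wr = w[left].sum(), w[~left].sum()
                if wl == 0 or wr == 0:
                    continue
                a = (w[left] * y[left]).sum() / wl     # weighted mean, left
                b = (w[~left] * y[~left]).sum() / wr   # weighted mean, right
                f = np.where(left, a, b)
                err = (w * (y - f) ** 2).sum()         # weighted squared error
                if best is None or err < best[0]:
                    best = (err, j, t, a, b)
        _, j, t, a, b = best
        f = np.where(X[:, j] <= t, a, b)
        w = w * np.exp(-y * f)                         # GentleBoost reweighting
        w /= w.sum()
        stumps.append((j, t, a, b))
    return stumps

def gb_predict(stumps, X):
    F = np.zeros(len(X))
    for j, t, a, b in stumps:
        F += np.where(X[:, j] <= t, a, b)
    return np.sign(F)
```

    Each round adds the stump's real-valued output to the ensemble score F, whose sign gives the ROI label.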

  12. Extraction of features from medical images using a modular neural network approach that relies on learning by sample

    NASA Astrophysics Data System (ADS)

    Brahmi, Djamel; Serruys, Camille; Cassoux, Nathalie; Giron, Alain; Triller, Raoul; Lehoang, Phuc; Fertil, Bernard

    2000-06-01

    Medical images provide experienced physicians with meaningful visual stimuli but their features are frequently hard to decipher. The development of a computational model to mimic physicians' expertise is a demanding task, especially if a significant and sophisticated preprocessing of images is required. Learning from well-expertised images may be a more convenient approach, inasmuch a large and representative bunch of samples is available. A four-stage approach has been designed, which combines image sub-sampling with unsupervised image coding, supervised classification and image reconstruction in order to directly extract medical expertise from raw images. The system has been applied (1) to the detection of some features related to the diagnosis of black tumors of skin (a classification issue) and (2) to the detection of virus-infected and healthy areas in retina angiography in order to locate precisely the border between them and characterize the evolution of infection. For reasonably balanced training sets, we are able to obtained about 90% correct classification of features (black tumors). Boundaries generated by our system mimic reproducibility of hand-outlines drawn by experts (segmentation of virus-infected area).

  13. Vessel Classification in Cosmo-Skymed SAR Data Using Hierarchical Feature Selection

    NASA Astrophysics Data System (ADS)

    Makedonas, A.; Theoharatos, C.; Tsagaris, V.; Anastasopoulos, V.; Costicoglou, S.

    2015-04-01

    SAR based ship detection and classification are important elements of maritime monitoring applications. Recently, high-resolution SAR data have opened new possibilities to researchers for achieving improved classification results. In this work, a hierarchical vessel classification procedure is presented based on a robust feature extraction and selection scheme that utilizes scale, shape and texture features in a hierarchical way. Initially, different types of feature extraction algorithms are implemented in order to form the utilized feature pool, able to represent the structure, material, orientation and other vessel type characteristics. A two-stage hierarchical feature selection algorithm is then utilized to discriminate civilian vessels into three distinct types in COSMO-SkyMed SAR images: cargos, small ships and tankers. In our analysis, scale and shape features are utilized in order to discriminate smaller types of vessels present in the available SAR data, or shape-specific vessels. Then, the most informative texture and intensity features are incorporated in order to distinguish the civilian types with high accuracy. A feature selection procedure that utilizes heuristic measures based on features' statistical characteristics, followed by an exhaustive search over feature sets formed from the most qualified features, is carried out to determine the most appropriate combination of features for the final classification. In our analysis, five COSMO-SkyMed SAR images with 2.2 m x 2.2 m resolution were used to analyze the detailed characteristics of these types of ships. A total of 111 ships with available AIS data were used in the classification process. The experimental results show that this method has good performance in ship classification, with an overall accuracy reaching 83%. Further investigation of additional features and proper feature selection is currently in progress.
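
    The record does not name the heuristic measure used in the first selection stage; one common choice based on features' statistical characteristics is the Fisher score (between-class over within-class scatter), sketched here as an illustrative stand-in:

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher-score feature ranking: between-class scatter divided by
    within-class scatter, per feature. Illustrative stand-in for the
    paper's unnamed heuristic measure."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2  # between-class
        den += len(Xc) * Xc.var(axis=0)                     # within-class
    return num / (den + 1e-12)
```

    The top-ranked features would then feed the exhaustive search over candidate feature subsets.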

  14. Cardiac arrhythmia beat classification using DOST and PSO tuned SVM.

    PubMed

    Raj, Sandeep; Ray, Kailash Chandra; Shankar, Om

    2016-11-01

    The increase in the number of deaths due to cardiovascular diseases (CVDs) has gained significant attention from the study of electrocardiogram (ECG) signals. These ECG signals are studied by experienced cardiologists for accurate and proper diagnosis, but this becomes difficult and time-consuming for long-term recordings. Various signal processing techniques have been studied to analyze the ECG signal, but they bear limitations due to its non-stationary behavior. Hence, this study aims to improve the classification accuracy rate and provide an automated diagnostic solution for the detection of cardiac arrhythmias. The proposed methodology consists of four stages, i.e. filtering, R-peak detection, feature extraction and classification. In this study, a wavelet-based approach is used to filter the raw ECG signal, whereas the Pan-Tompkins algorithm is used for detecting the R-peaks inside the ECG signal. In the feature extraction stage, the discrete orthogonal Stockwell transform (DOST) approach is presented for an efficient time-frequency representation (i.e. morphological descriptors) of a time-domain signal; it retains absolute phase information to distinguish the various non-stationary behaviors of ECG signals. Moreover, these morphological descriptors are further reduced to a lower-dimensional space using principal component analysis and combined with the dynamic features (i.e. based on the RR-interval of the ECG signals) of the input signal. This combination of two different kinds of descriptors represents the feature set of an input signal, which is classified into the subsequent categories by employing PSO-tuned support vector machines (SVM). 
    The proposed methodology is validated on the baseline MIT-BIH arrhythmia database and evaluated under two assessment schemes, yielding an improved overall accuracy, relative to state-of-the-art diagnosis, of 99.18% for sixteen classes in the category-based scheme and 89.10% for five classes (mapped according to the AAMI standard) in the patient-based assessment scheme. The results reported are further compared to existing methodologies in the literature. The proposed feature representation of cardiac signals based on symmetrical features, along with the PSO-based optimization technique for the SVM classifier, reported improved classification accuracy in both assessment schemes evaluated on the benchmark MIT-BIH arrhythmia database and hence can be utilized for automated computer-aided diagnosis of cardiac arrhythmia beats. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
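
    The R-peak detection stage uses the Pan-Tompkins algorithm (band-pass filtering, differentiation, squaring, and moving-window integration). The sketch below keeps only the final thresholding idea on a clean synthetic signal and is far simpler than the real pipeline:

```python
import numpy as np

def detect_r_peaks(ecg, fs, min_rr_s=0.3):
    """Very simplified R-peak detector: local maxima above an adaptive
    amplitude threshold, with a refractory period. Not the full
    Pan-Tompkins pipeline used in the study."""
    thr = ecg.mean() + 1.5 * ecg.std()   # adaptive amplitude threshold
    min_gap = int(min_rr_s * fs)         # refractory period in samples
    peaks = []
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thr and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return np.array(peaks)
```

    The detected peak indices would then define the RR-intervals that supply the dynamic features described above.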

  15. Gross feature recognition of Anatomical Images based on Atlas grid (GAIA): Incorporating the local discrepancy between an atlas and a target image to capture the features of anatomic brain MRI.

    PubMed

    Qin, Yuan-Yuan; Hsu, Johnny T; Yoshida, Shoko; Faria, Andreia V; Oishi, Kumiko; Unschuld, Paul G; Redgrave, Graham W; Ying, Sarah H; Ross, Christopher A; van Zijl, Peter C M; Hillis, Argye E; Albert, Marilyn S; Lyketsos, Constantine G; Miller, Michael I; Mori, Susumu; Oishi, Kenichi

    2013-01-01

    We aimed to develop a new method to convert T1-weighted brain MRIs to feature vectors, which could be used for content-based image retrieval (CBIR). To overcome the wide range of anatomical variability in clinical cases and the inconsistency of imaging protocols, we introduced the Gross feature recognition of Anatomical Images based on Atlas grid (GAIA), in which the local intensity alteration, caused by pathological (e.g., ischemia) or physiological (development and aging) intensity changes, as well as by atlas-image misregistration, is used to capture the anatomical features of target images. As a proof-of-concept, the GAIA was applied for pattern recognition of the neuroanatomical features of multiple stages of Alzheimer's disease, Huntington's disease, spinocerebellar ataxia type 6, and four subtypes of primary progressive aphasia. For each of these diseases, feature vectors based on a training dataset were applied to a test dataset to evaluate the accuracy of pattern recognition. The feature vectors extracted from the training dataset agreed well with the known pathological hallmarks of the selected neurodegenerative diseases. Overall, discriminant scores of the test images accurately categorized them into the correct disease categories; images without typical disease-related anatomical features were misclassified. The proposed method is promising for image feature extraction based on disease-related anatomical features, and should enable users to submit a patient image and search past clinical cases with similar anatomical phenotypes.

  16. [Advances in studies on multi-stage countercurrent extraction technology in traditional Chinese medicine].

    PubMed

    Xie, Zhi-Peng; Liu, Xue-Song; Chen, Yong; Cai, Ming; Qu, Hai-Bin; Cheng, Yi-Yu

    2007-05-01

    Multi-stage countercurrent extraction technology, integrating solvent extraction and repercolation with dynamic and countercurrent extraction, is a novel extraction technology for traditional Chinese medicine. This solvent-saving, energy-saving and high-extraction-efficiency technology maximally drives active compounds to diffuse from the herbal materials into the solvent, stage by stage, by creating concentration differences between the herbal materials and the solvents. This paper reviews the basic principle, the influencing factors, and the research progress and trends of the equipment and applications of multi-stage countercurrent extraction.

  17. Dimensionality-varied deep convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Qu, Haicheng; Liang, Xuejian; Liang, Shichao; Liu, Wanjun

    2018-01-01

    Many methods of hyperspectral image classification have been proposed recently, and the convolutional neural network (CNN) achieves outstanding performance. However, spectral-spatial classification with a CNN requires an excessively large model, tremendous computation, and a complex network, and a CNN is generally unable to use the noisy bands caused by water-vapor absorption. A dimensionality-varied CNN (DV-CNN) is proposed to address these issues. There are four stages in DV-CNN, and the dimensionalities of the spectral-spatial feature maps vary with the stages. DV-CNN can reduce the computation and simplify the structure of the network. All feature maps are processed by more kernels in higher stages to extract more precise features. DV-CNN also improves the classification accuracy and enhances the robustness to water-vapor absorption bands. The experiments are performed on the Indian Pines and Pavia University scenes. The classification performance of DV-CNN is compared with state-of-the-art methods, which include variations of CNN, traditional methods, and other deep learning methods. An experiment analyzing the performance of DV-CNN itself is also carried out. The experimental results demonstrate that DV-CNN outperforms state-of-the-art methods for spectral-spatial classification and is also robust to water-vapor absorption bands. Moreover, reasonable parameter selection is effective to improve classification accuracy.

  18. Comprehensive Computational Pathological Image Analysis Predicts Lung Cancer Prognosis.

    PubMed

    Luo, Xin; Zang, Xiao; Yang, Lin; Huang, Junzhou; Liang, Faming; Rodriguez-Canales, Jaime; Wistuba, Ignacio I; Gazdar, Adi; Xie, Yang; Xiao, Guanghua

    2017-03-01

    Pathological examination of histopathological slides is a routine clinical procedure for lung cancer diagnosis and prognosis. Although the classification of lung cancer has been updated to become more specific, only a small subset of the total morphological features are taken into consideration. The vast majority of the detailed morphological features of tumor tissues, particularly tumor cells' surrounding microenvironment, are not fully analyzed. The heterogeneity of tumor cells and close interactions between tumor cells and their microenvironments are closely related to tumor development and progression. The goal of this study is to develop morphological feature-based prediction models for the prognosis of patients with lung cancer. We developed objective and quantitative computational approaches to analyze the morphological features of pathological images for patients with NSCLC. Tissue pathological images were analyzed for 523 patients with adenocarcinoma (ADC) and 511 patients with squamous cell carcinoma (SCC) from The Cancer Genome Atlas lung cancer cohorts. The features extracted from the pathological images were used to develop statistical models that predict patients' survival outcomes in ADC and SCC, respectively. We extracted 943 morphological features from pathological images of hematoxylin and eosin-stained tissue and identified morphological features that are significantly associated with prognosis in ADC and SCC, respectively. Statistical models based on these extracted features stratified NSCLC patients into high-risk and low-risk groups. The models were developed from training sets and validated in independent testing sets: a predicted high-risk group versus a predicted low-risk group (for patients with ADC: hazard ratio = 2.34, 95% confidence interval: 1.12-4.91, p = 0.024; for patients with SCC: hazard ratio = 2.22, 95% confidence interval: 1.15-4.27, p = 0.017) after adjustment for age, sex, smoking status, and pathologic tumor stage. 
The results suggest that the quantitative morphological features of tumor pathological images predict prognosis in patients with lung cancer. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.

  19. FEX: A Knowledge-Based System For Planimetric Feature Extraction

    NASA Astrophysics Data System (ADS)

    Zelek, John S.

    1988-10-01

    Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.

  20. Area Determination of Diabetic Foot Ulcer Images Using a Cascaded Two-Stage SVM-Based Classification.

    PubMed

    Wang, Lei; Pedersen, Peder C; Agu, Emmanuel; Strong, Diane M; Tulu, Bengisu

    2017-09-01

    The standard chronic wound assessment method based on visual examination is potentially inaccurate and also represents a significant clinical workload. Hence, computer-based systems providing quantitative wound assessment may be valuable for accurately monitoring wound healing status, with the wound area being the measure best suited to automated analysis. Here, we present a novel approach, using support vector machines (SVM), to determine the wound boundaries on foot ulcer images captured with an image capture box, which provides controlled lighting and range. After superpixel segmentation, a cascaded two-stage classifier operates as follows: in the first stage, a set of k binary SVM classifiers is trained on and applied to different subsets of the entire training image dataset, and incorrectly classified instances are collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from superpixels that are used as input for each stage of classifier training. Specifically, color and bag-of-words representations of local dense scale-invariant feature transform (SIFT) features are the descriptors for ruling out irrelevant regions, and color and wavelet-based features are the descriptors for distinguishing healthy tissue from wound regions. Finally, the detected wound boundary is refined by applying the conditional random field method. We have implemented the wound classification on a Nexus 5 smartphone platform, except for training, which was done offline. Results are compared with other classifiers and show that our approach provides high global performance rates (average sensitivity = 73.3%, specificity = 94.6%) and is sufficiently efficient for smartphone-based image analysis.
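
    The cascade described above can be sketched with a toy stand-in: nearest-centroid classifiers replace the SVMs, and the synthetic two-blob data, function names, and subset sizes are hypothetical illustrations rather than the paper's setup.

```python
import numpy as np

class CentroidClassifier:
    """Toy binary classifier standing in for an SVM."""
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self

    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)

def train_cascade(X, y, k=3, seed=0):
    rng = np.random.default_rng(seed)
    stage1, hard_X, hard_y = [], [], []
    for _ in range(k):
        # Stage 1: train each classifier on a random subset of the data.
        idx = rng.choice(len(X), size=len(X) // 2, replace=False)
        clf = CentroidClassifier().fit(X[idx], y[idx])
        wrong = clf.predict(X[idx]) != y[idx]
        hard_X.append(X[idx][wrong])
        hard_y.append(y[idx][wrong])
        stage1.append(clf)
    hard_X = np.concatenate(hard_X)
    hard_y = np.concatenate(hard_y)
    # Stage 2: one classifier trained only on the misclassified instances.
    stage2 = (CentroidClassifier().fit(hard_X, hard_y)
              if len(np.unique(hard_y)) == 2 else None)
    return stage1, stage2

def predict_cascade(stage1, stage2, X):
    votes = np.stack([clf.predict(X) for clf in stage1])
    pred = (votes.mean(axis=0) > 0.5).astype(int)
    if stage2 is not None:
        # Re-score ambiguous samples (non-unanimous votes) with stage 2.
        ambiguous = votes.min(axis=0) != votes.max(axis=0)
        pred[ambiguous] = stage2.predict(X[ambiguous])
    return pred
```

    On well-separated data the stage-2 model may never be trained (no errors to collect), in which case the majority vote of stage 1 decides alone.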

  1. Classification of tumor based on magnetic resonance (MR) brain images using wavelet energy feature and neuro-fuzzy model

    NASA Astrophysics Data System (ADS)

    Damayanti, A.; Werdiningsih, I.

    2018-03-01

    The brain is the organ that coordinates all the activities that occur in our bodies, and small abnormalities in the brain will affect bodily activity. A brain tumor is a mass formed as a result of abnormal and uncontrolled cell growth in the brain. MRI is a non-invasive medical test that is useful to doctors in diagnosing and treating medical conditions. Accurate classification of brain tumors supports the right decisions and the correct treatment during the treatment process. In this study, classification is performed to determine the type of brain disease, namely Alzheimer's, glioma, carcinoma, or normal, using wavelet energy coefficients and ANFIS. The stages in classifying MR brain images are feature extraction, feature reduction, and classification. The result of feature extraction is an approximation vector from each wavelet decomposition level. Feature reduction reduces the features using the energy coefficients of the approximation vectors; with 100 energy coefficients per feature, the reduced feature vector is 1 x 52 pixels. This vector is the input to classification using ANFIS with Fuzzy C-Means and FLVQ clustering and LM back-propagation. The recognition success rate of MR brain images using ANFIS-FLVQ, ANFIS, and LM back-propagation was 100%.
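
    As an illustration of the wavelet-energy features described above, the following is a minimal sketch using a hand-rolled one-level Haar decomposition repeated over several levels; the wavelet family, level count, and image size are assumptions, not taken from the paper.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar transform: approximation + 3 detail bands."""
    a = (img[::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, ::2] + a[:, 1::2]) / 2.0
    LH = (a[:, ::2] - a[:, 1::2]) / 2.0
    HL = (d[:, ::2] + d[:, 1::2]) / 2.0
    HH = (d[:, ::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def wavelet_energy_features(img, levels=3):
    """Energy of each subband per level -- a compact feature vector."""
    feats = []
    approx = img.astype(float)
    for _ in range(levels):
        approx, details = haar_dwt2(approx)
        for band in details:
            feats.append(np.sum(band ** 2))
    feats.append(np.sum(approx ** 2))  # energy of the final approximation
    return np.array(feats)
```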

  2. Automatic epileptic seizure detection in EEGs using MF-DFA, SVM based on cloud computing.

    PubMed

    Zhang, Zhongnan; Wen, Tingxi; Huang, Wei; Wang, Meihong; Li, Chunfeng

    2017-01-01

    Epilepsy is a chronic disease involving transient brain dysfunction that results from the sudden abnormal discharge of neurons in the brain. Since electroencephalography (EEG) is a harmless and noninvasive detection method, it plays an important role in the detection of neurological diseases. However, analyzing EEG to detect neurological diseases is often difficult because brain electrical signals are random, non-stationary and nonlinear. To overcome this difficulty, this study develops a new computer-aided scheme for automatic epileptic seizure detection in EEGs based on multi-fractal detrended fluctuation analysis (MF-DFA) and support vector machines (SVM). The new scheme first extracts features from the EEG by MF-DFA. The scheme then applies a genetic algorithm (GA) to select the SVM parameters and classifies the training data according to the selected features using SVM. Finally, the trained SVM classifier is used to detect neurological diseases. The algorithm utilizes MLlib from the SPARK library and runs on a cloud platform. Applied to a public dataset, the results show that the new feature extraction method and scheme can detect seizures with fewer features, with classification accuracy reaching 99%. MF-DFA is a promising approach to extracting features for EEG analysis because of its simple procedure and few parameters. The features obtained by MF-DFA represent the samples as well as traditional wavelet transforms and Lyapunov exponents do. Given enough execution time, GA can find useful parameters for the SVM. The results illustrate that the classification model achieves comparable accuracy, which means that it is effective for epileptic seizure detection.
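
    The MF-DFA feature extraction described above can be sketched compactly: build the profile, detrend it segment-wise with a polynomial fit, form the q-th order fluctuation function, and read off the generalized Hurst exponents h(q) as log-log slopes. The scale and q grids below are illustrative choices, not the paper's.

```python
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Multifractal DFA: returns the generalized Hurst exponents h(q)."""
    y = np.cumsum(x - np.mean(x))                  # profile of the signal
    logF = np.empty((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Detrend each segment with a polynomial fit; keep residual variance.
        var = np.empty(n_seg)
        for v in range(n_seg):
            coef = np.polyfit(t, segs[v], order)
            var[v] = np.mean((segs[v] - np.polyval(coef, t)) ** 2)
        for i, q in enumerate(qs):
            if q == 0:
                logF[i, j] = 0.5 * np.mean(np.log(var))
            else:
                logF[i, j] = np.log(np.mean(var ** (q / 2.0))) / q
    # h(q) is the slope of log F_q(s) against log s.
    return np.array([np.polyfit(np.log(scales), logF[i], 1)[0]
                     for i in range(len(qs))])
```

    For uncorrelated white noise, h(2) should come out close to 0.5; long-range correlated signals such as EEG typically yield larger, q-dependent exponents.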

  3. Concept of turbines for ultrasupercritical, supercritical, and subcritical steam conditions

    NASA Astrophysics Data System (ADS)

    Mikhailov, V. E.; Khomenok, L. A.; Pichugin, I. I.; Kovalev, I. A.; Bozhko, V. V.; Vladimirskii, O. A.; Zaitsev, I. V.; Kachuriner, Yu. Ya.; Nosovitskii, I. A.; Orlik, V. G.

    2017-11-01

    The article describes the design features of condensing turbines for ultrasupercritical initial steam conditions (USSC) and large-capacity cogeneration turbines for super- and subcritical steam conditions having increased steam extractions for district heating purposes. For improving the efficiency and reliability indicators of USSC turbines, it is proposed to use forced cooling of the head high-temperature thermally stressed parts of the high- and intermediate-pressure rotors, reaction-type blades of the high-pressure cylinder (HPC) and at least the first stages of the intermediate-pressure cylinder (IPC), the double-wall HPC casing with narrow flanges of its horizontal joints, a rigid HPC rotor, an extended system of regenerative steam extractions without using extractions from the HPC flow path, and the low-pressure cylinder's inner casing moving in accordance with the IPC thermal expansions. For cogeneration turbines, it is proposed to shift the upper district heating extraction (or its significant part) to the feedwater pump turbine, which will make it possible to improve the turbine plant efficiency and arrange both district heating extractions in the IPC. In addition, in the case of using a disengaging coupling or precision conical bolts in the coupling, this solution will make it possible to disconnect the LPC in shifting the turbine to operate in the cogeneration mode. The article points out the need to intensify turbine development efforts with the use of modern methods for improving their efficiency and reliability involving, in particular, the use of relatively short 3D blades, last stages fitted with longer rotor blades, evaporation techniques for removing moisture in the last-stage diaphragm, and LPC rotor blades with radial grooves on their leading edges.

  4. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    PubMed

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Model-based morphological segmentation and labeling of coronary angiograms.

    PubMed

    Haris, K; Efstratiadis, S N; Maglaveras, N; Pappas, C; Gourassas, J; Louridas, G

    1999-10-01

    A method for extraction and labeling of the coronary arterial tree (CAT) using minimal user supervision in single-view angiograms is proposed. The CAT structural description (skeleton and borders) is produced, along with quantitative information for the artery dimensions and assignment of coded labels, based on a given coronary artery model represented by a graph. The stages of the method are: 1) CAT tracking and detection; 2) artery skeleton and border estimation; 3) feature graph creation; and 4) artery labeling by graph matching. The approximate CAT centerline and borders are extracted by recursive tracking based on circular template analysis. The accurate skeleton and borders of each CAT segment are computed, based on morphological homotopy modification and watershed transform. The approximate centerline and borders are used for constructing the artery segment enclosing area (ASEA), where the defined skeleton and border curves are considered as markers. Using the marked ASEA, an artery gradient image is constructed where all the ASEA pixels (except the skeleton ones) are assigned the gradient magnitude of the original image. The artery gradient image markers are imposed as its unique regional minima by the homotopy modification method, the watershed transform is used for extracting the artery segment borders, and the feature graph is updated. Finally, given the created feature graph and the known model graph, a graph matching algorithm assigns the appropriate labels to the extracted CAT using weighted maximal cliques on the association graph corresponding to the two given graphs. Experimental results using clinical digitized coronary angiograms are presented.

  6. Efficient Spatio-Temporal Local Binary Patterns for Spontaneous Facial Micro-Expression Recognition

    PubMed Central

    Wang, Yandan; See, John; Phan, Raphael C.-W.; Oh, Yee-Hui

    2015-01-01

    Micro-expression recognition is still at a preliminary stage, owing much to the numerous difficulties faced in the development of datasets. Since micro-expressions are an important affective clue for clinical diagnosis and deceit analysis, much effort has gone into the creation of these datasets for research purposes. There are currently two publicly available spontaneous micro-expression datasets—SMIC and CASME II, both with baseline results released using the widely used dynamic texture descriptor LBP-TOP for feature extraction. Although LBP-TOP is popular and widely used, it is still not compact enough. In this paper, we draw further inspiration from the concept of LBP-TOP, which considers three orthogonal planes, by proposing two efficient approaches for feature extraction. The compact, robust form described by the proposed LBP-Six Intersection Points (SIP) and the super-compact LBP-Three Mean Orthogonal Planes (MOP) not only preserve the essential patterns, but also reduce the redundancy that affects the discriminability of the encoded features. Through a comprehensive set of experiments, we demonstrate the strengths of our approaches in terms of recognition accuracy and efficiency. PMID:25993498
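
    The building block behind LBP-TOP and its compact variants is a plain 2-D LBP histogram computed per plane. The sketch below applies it to the three mean orthogonal planes of a video volume, a rough simplification in the spirit of LBP-MOP; the exact sampling and encoding schemes of LBP-SIP and LBP-MOP differ.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern codes for the interior pixels."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=int)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(int) << bit)   # one bit per neighbour
    return codes

def lbp_histogram(img, bins=256):
    h = np.bincount(lbp_codes(img).ravel(), minlength=bins).astype(float)
    return h / h.sum()

def lbp_mop_like(volume):
    """Descriptor from the three mean orthogonal planes of a (T, H, W) volume."""
    xy = volume.mean(axis=0)   # spatial plane averaged over time
    xt = volume.mean(axis=1)   # time-column plane averaged over rows
    yt = volume.mean(axis=2)   # time-row plane averaged over columns
    return np.concatenate([lbp_histogram(p) for p in (xy, xt, yt)])
```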

  7. Detection of obstacles on runway using Ego-Motion compensation and tracking of significant features

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar (Principal Investigator); Camps, Octavia (Principal Investigator); Gandhi, Tarak; Devadiga, Sadashiva

    1996-01-01

    This report describes a method for obstacle detection on a runway for autonomous navigation and landing of an aircraft. Detection is done in the presence of extraneous features such as tire marks. Suitable features are extracted from the image, and warping using approximately known camera and plane parameters is performed in order to compensate for ego-motion as far as possible. Residual disparity after warping is estimated using an optical flow algorithm. Features are tracked from frame to frame so as to obtain more reliable estimates of their motion. Corrections are made to the motion parameters using the residual disparities and a robust method, and features having large residual disparities are signaled as obstacles. A sensitivity analysis of the procedure is also presented. Nelson's optical flow constraint is proposed to separate moving obstacles from stationary ones. A Bayesian framework is used at every stage so that the confidence in the estimates can be determined.

  8. Combining heterogenous features for 3D hand-held object recognition

    NASA Astrophysics Data System (ADS)

    Lv, Xiong; Wang, Shuang; Li, Xiangyang; Jiang, Shuqiang

    2014-10-01

    Object recognition has wide applications in human-machine interaction and multimedia retrieval. However, due to visual polysemy and concept polymorphism, obtaining reliable recognition results from 2D images is still a great challenge. Recently, with the emergence and easy availability of RGB-D equipment such as Kinect, this challenge can be relieved because the depth channel brings more information. A special and important case of object recognition is hand-held object recognition, as the hand is a direct and natural medium for both human-human and human-machine interaction. In this paper, we study the problem of 3D object recognition by combining heterogeneous features with different modalities and extraction techniques. Although hand-crafted features preserve low-level information such as shape and color, they are weaker at representing high-level semantic information than automatically learned features, especially deep features. Deep features have shown great advantages in large-scale dataset recognition but are not always as robust to rotation or scale variance as hand-crafted features. In this paper, we propose a method to combine hand-crafted point cloud features and deep learned features from the RGB and depth channels. First, hand-held object segmentation is implemented using depth cues and human skeleton information. Second, we combine the extracted heterogeneous 3D features at different stages using linear concatenation and multiple kernel learning (MKL). A trained model is then used to recognize 3D hand-held objects. Experimental results validate the effectiveness and generalization ability of the proposed method.
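
    The MKL fusion step can be illustrated with a fixed-weight convex combination of one RBF kernel per feature modality; real MKL learns the weights jointly with the classifier, and the feature matrices and parameters below are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel between two sets of feature vectors."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def combined_kernel(feats_a, feats_b, gammas, weights):
    """Convex combination of one kernel per feature modality.
    MKL would learn `weights`; here they are fixed for illustration."""
    return sum(w * rbf_kernel(a, b, g)
               for a, b, g, w in zip(feats_a, feats_b, gammas, weights))
```

    The combined matrix stays a valid kernel (a non-negative sum of kernels), so it can be fed directly to any kernel classifier.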

  9. A novel feature extraction approach for microarray data based on multi-algorithm fusion

    PubMed Central

    Jiang, Zhu; Xu, Rong

    2015-01-01

    Feature extraction is one of the most important and effective methods for dimension reduction in data mining, especially with the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes for building a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms are employed in microarray gene expression data analysis: ranking-based feature extraction and set-based feature extraction. In ranking-based feature extraction, features are evaluated on an individual basis, generally without considering the inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set, taking dependencies between features into account. Just as with learning methods, feature extraction has a problem with generalization ability, namely robustness; however, this issue is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested on gene expression datasets including Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than that of existing solutions. PMID:25780277
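
    Ranking-based feature extraction, one of the two categories discussed above, can be illustrated with a Fisher-score ranking; the score and the synthetic data are assumed examples of the category, not the paper's specific algorithm.

```python
import numpy as np

def fisher_scores(X, y):
    """Score each feature individually: between-class over within-class variance."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def rank_features(X, y, top_k=10):
    """Ranking-based selection: keep the top_k individually scored features."""
    return np.argsort(fisher_scores(X, y))[::-1][:top_k]
```

    A set-based method would instead score whole candidate subsets, so redundant but individually strong genes are not all selected together.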

  10. A novel feature extraction approach for microarray data based on multi-algorithm fusion.

    PubMed

    Jiang, Zhu; Xu, Rong

    2015-01-01

    Feature extraction is one of the most important and effective methods for dimension reduction in data mining, especially with the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes for building a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms are employed in microarray gene expression data analysis: ranking-based feature extraction and set-based feature extraction. In ranking-based feature extraction, features are evaluated on an individual basis, generally without considering the inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set, taking dependencies between features into account. Just as with learning methods, feature extraction has a problem with generalization ability, namely robustness; however, this issue is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested on gene expression datasets including Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than that of existing solutions.

  11. deepNF: Deep network fusion for protein function prediction.

    PubMed

    Gligorijevic, Vladimir; Barot, Meet; Bonneau, Richard

    2018-06-01

    The prevalence of high-throughput experimental methods has resulted in an abundance of large-scale molecular and functional interaction networks. The connectivity of these networks provides a rich source of information for inferring functional annotations for genes and proteins. An important challenge has been to develop methods for combining these heterogeneous networks to extract useful protein feature representations for function prediction. Most of the existing approaches for network integration use shallow models that encounter difficulty in capturing complex and highly-nonlinear network structures. Thus, we propose deepNF, a network fusion method based on Multimodal Deep Autoencoders to extract high-level features of proteins from multiple heterogeneous interaction networks. We apply this method to combine STRING networks to construct a common low-dimensional representation containing high-level protein features. We use separate layers for different network types in the early stages of the multimodal autoencoder, later connecting all the layers into a single bottleneck layer from which we extract features to predict protein function. We compare the cross-validation and temporal holdout predictive performance of our method with state-of-the-art methods, including the recently proposed method Mashup. Our results show that our method outperforms previous methods for both human and yeast STRING networks. We also show substantial improvement in the performance of our method in predicting GO terms of varying type and specificity. deepNF is freely available at: https://github.com/VGligorijevic/deepNF. vgligorijevic@flatironinstitute.org, rb133@nyu.edu. Supplementary data are available at Bioinformatics online.
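
    As a rough, linear stand-in for the multimodal-autoencoder fusion described above (separate per-network layers feeding a shared bottleneck), one can embed each network separately, concatenate, and compress. The eigenvector embedding and PCA bottleneck below replace deepNF's learned nonlinear layers, and all names and sizes are illustrative.

```python
import numpy as np

def fuse_networks(adjacency_mats, dim=16):
    """Linear sketch of multimodal fusion: per-network embeddings,
    concatenation, then a shared low-dimensional bottleneck."""
    per_net = []
    for A in adjacency_mats:
        # Per-network "early layer": top eigenvectors of the symmetrised adjacency.
        vals, vecs = np.linalg.eigh((A + A.T) / 2.0)
        per_net.append(vecs[:, -dim:])
    X = np.concatenate(per_net, axis=1)          # fuse the modalities
    X = X - X.mean(axis=0)
    # Shared bottleneck: PCA of the concatenated embeddings.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:dim].T                        # one feature row per protein
```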

  12. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    PubMed

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology for achieving compact and low-power implementations of these computationally intensive tasks in portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. The results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
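
    The LMS adaptive linear combiner mentioned above has a simple software analogue, sketched here in floating point (the paper's analog on-chip circuits differ); the data shapes and learning rate are illustrative.

```python
import numpy as np

def lms_train(X, d, mu=0.01, epochs=20, seed=0):
    """Least-mean-squares adaptive linear combiner: w <- w + mu * e * x."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.1, X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x          # instantaneous error
            w += mu * e * x             # stochastic gradient step
    return w
```

    On-chip, the same update rule lets the learning loop absorb certain gain and offset mismatches automatically, which is the compensation effect the paper exploits.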

  13. Comparative Approach of MRI-Based Brain Tumor Segmentation and Classification Using Genetic Algorithm.

    PubMed

    Bahadure, Nilesh Bhaskarrao; Ray, Arun Kumar; Thethi, Har Pal

    2018-01-17

    The detection of a brain tumor and its classification from modern imaging modalities is a primary concern, but it is time-consuming and tedious work for radiologists and clinical supervisors. The accuracy with which radiologists detect and classify tumor stages depends only on their experience, so computer-aided technology is very important for improving diagnostic accuracy. In this study, to improve the performance of tumor detection, we investigated a comparative approach across different segmentation techniques and selected the best one by comparing their segmentation scores. Further, to improve classification accuracy, a genetic algorithm is employed for the automatic classification of the tumor stage. The classification decision is supported by extracting relevant features and calculating the area. The experimental results of the proposed technique are evaluated and validated for performance and quality on magnetic resonance brain images, based on segmentation score, accuracy, sensitivity, specificity, and the Dice similarity index coefficient. The experimental results achieved 92.03% accuracy, 91.42% specificity, 92.36% sensitivity, and an average segmentation score between 0.82 and 0.93, demonstrating the effectiveness of the proposed technique in identifying normal and abnormal tissues in brain MR images. The experiments also obtained an average Dice similarity index coefficient of 93.79%, which indicates better overlap between the automatically extracted tumor regions and the tumor regions manually extracted by radiologists.
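
    The genetic-algorithm component can be sketched generically: a real-valued GA with tournament selection, blend crossover, and mutation, here maximizing a toy fitness function rather than the paper's classification objective. All names and hyper-parameters are illustrative.

```python
import numpy as np

def genetic_search(fitness, n_params, pop=30, gens=60, seed=0):
    """Plain real-valued GA: elitism, tournament selection, blend crossover."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-1.0, 1.0, (pop, n_params))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        new = [P[np.argmax(f)].copy()]            # elitism: keep the best
        while len(new) < pop:
            a = rng.integers(0, pop, 2)
            b = rng.integers(0, pop, 2)
            pa = P[a[np.argmax(f[a])]]            # tournament winners
            pb = P[b[np.argmax(f[b])]]
            w = rng.random(n_params)
            child = w * pa + (1 - w) * pb         # blend crossover
            # Mutate ~20% of the genes with small Gaussian noise.
            child += rng.normal(0, 0.1, n_params) * (rng.random(n_params) < 0.2)
            new.append(child)
        P = np.array(new)
    f = np.array([fitness(ind) for ind in P])
    return P[np.argmax(f)]

# Toy usage: maximize -||p - 0.5||^2, whose optimum is p = (0.5, 0.5, 0.5).
best = genetic_search(lambda p: -np.sum((p - 0.5) ** 2), n_params=3)
```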

  14. Boosting Classification Accuracy of Diffusion MRI Derived Brain Networks for the Subtypes of Mild Cognitive Impairment Using Higher Order Singular Value Decomposition

    PubMed Central

    Zhan, L.; Liu, Y.; Zhou, J.; Ye, J.; Thompson, P.M.

    2015-01-01

    Mild cognitive impairment (MCI) is an intermediate stage between normal aging and Alzheimer's disease (AD), and around 10-15% of people with MCI develop AD each year. More recently, MCI has been further subdivided into early and late stages, and there is interest in identifying sensitive brain imaging biomarkers that help to differentiate stages of MCI. Here, we focused on anatomical brain networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying early versus late MCI. PMID:26413202
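
    Higher-order SVD itself is compact to state: take an SVD of each mode unfolding to get per-mode factor matrices, then project the tensor onto them to obtain the core. The sketch below is a generic truncated HOSVD, not the paper's full classification framework.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along one mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: per-mode factor matrices + core tensor."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = T
    for m, u in enumerate(U):
        # Multiply the tensor by u.T along mode m.
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, U

def reconstruct(core, U):
    T = core
    for m, u in enumerate(U):
        T = np.moveaxis(np.tensordot(u, np.moveaxis(T, m, 0), axes=1), 0, m)
    return T
```

    With full ranks the reconstruction is exact; truncating the ranks yields the compressed representation from which low-dimensional features can be taken.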

  15. SU-F-R-52: A Comparison of the Performance of Radiomic Features From Free Breathing and 4DCT Scans in Predicting Disease Recurrence in Lung Cancer SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huynh, E; Coroller, T; Narayan, V

    Purpose: There is a clinical need to identify patients who are at highest risk of recurrence after being treated with stereotactic body radiation therapy (SBRT). Radiomics offers a non-invasive approach by extracting quantitative features from medical images that characterize tumor phenotype and are predictive of outcome. Lung cancer patients treated with SBRT routinely undergo free breathing (FB image) and 4DCT (average intensity projection (AIP) image) scans for treatment planning to account for organ motion. The aim of the current study is to evaluate and compare the prognostic performance of radiomic features extracted from FB and AIP images in lung cancer patients treated with SBRT, to identify which image type would generate an optimal predictive model for recurrence. Methods: FB and AIP images of 113 Stage I-II NSCLC patients treated with SBRT were analysed. The prognostic performance of radiomic features for distant metastasis (DM) was evaluated by their concordance index (CI). Radiomic features were compared with conventional imaging metrics (e.g. diameter). All p-values were corrected for multiple testing using the false discovery rate. Results: All patients received SBRT and 20.4% of patients developed DM. From each image type (FB or AIP), nineteen radiomic features were selected based on stability and variance. The two image types had five radiomic features in common and fourteen that differed. One FB (CI=0.70) and five AIP (CI range=0.65–0.68) radiomic features were significantly prognostic for DM (p<0.05). None of the conventional features derived from FB images (CI range=0.60–0.61) were significant, but all AIP conventional features were (CI range=0.64–0.66). Conclusion: Features extracted from different types of CT scans have varying prognostic performances. AIP images contain more prognostic radiomic features for DM than FB images. These methods can provide personalized medicine approaches at low cost, as FB and AIP data are readily available within a large number of radiation oncology departments. R.M. had a consulting interest with Amgen (ended in 2015).
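The concordance index used above to score each radiomic feature can be illustrated with a short pure-Python sketch (Harrell's C for right-censored time-to-event data; the function and variable names are illustrative, not taken from the study):

```python
def concordance_index(times, events, scores):
    """Harrell's concordance index: the fraction of comparable patient pairs
    whose feature scores are ordered consistently with their outcomes.
    times:  time to event or censoring
    events: 1 if the event (e.g. distant metastasis) was observed, else 0
    scores: feature value, higher assumed to mean higher risk (earlier event)
    """
    concordant, tied, comparable = 0.0, 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable only if patient i had the event
            # strictly before patient j's last observed time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable
```

A CI of 0.5 corresponds to random ordering and 1.0 to perfect ordering, so the reported FB feature with CI = 0.70 orders comparable patient pairs correctly about 70% of the time.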

  16. Texture analysis based on the Hermite transform for image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus

    2012-06-01

    Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas including medical image analysis, industrial inspection, segmentation of remotely sensed imagery, multimedia indexing and retrieval. In order to extract visual texture features, a texture image analysis technique based on the Hermite transform is presented. Psychovisual evidence suggests that Gaussian derivatives fit the receptive field profiles of mammalian visual systems. The Hermite transform describes locally basic texture features in terms of Gaussian derivatives. Multiresolution combined with several analysis orders provides detection of patterns that characterize every texture class. The analysis of the local maximum energy direction and steering of the transformation coefficients increase the method's robustness against texture orientation. This method presents an advantage over classical filter bank design because in the latter a fixed number of orientations for the analysis has to be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve the inter-class separability, reduce the dimensionality of the feature vectors and lower the computational cost during the classification stage. We exhaustively evaluated the correct classification rate on randomly selected training and testing texture subsets using several kinds of commonly used texture features. A comparison between different distance measurements is also presented. Results of unsupervised real texture segmentation using this approach, and comparison with previous approaches, showed the benefits of our proposal.
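The Gaussian-derivative analysis functions underlying the Hermite transform can be sampled directly, since the n-th derivative of a Gaussian is a Hermite polynomial times the Gaussian window. A minimal 1D sketch (kernel normalization omitted; names are illustrative, not from the paper):

```python
import math

def hermite_poly(n, x):
    # physicists' Hermite polynomials via the recurrence
    # H_n(x) = 2x H_{n-1}(x) - 2(n-1) H_{n-2}(x)
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(2, n + 1):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * (k - 1) * h0
    return h1

def gaussian_derivative_kernel(order, sigma, radius):
    # sampled n-th derivative of a Gaussian, up to sign/normalization:
    # d^n/dx^n exp(-x^2/(2s^2)) ~ H_n(x/(s*sqrt(2))) * exp(-x^2/(2s^2))
    kernel = []
    for i in range(-radius, radius + 1):
        u = i / (sigma * math.sqrt(2.0))
        kernel.append(((-1.0) ** order) * hermite_poly(order, u) * math.exp(-u * u))
    return kernel
```

Even orders give symmetric (blob-like) filters and odd orders antisymmetric (edge-like) filters, which is what makes steering over orientations possible.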

  17. SOLVENT EXTRACTION PROCESS FOR THE SEPARATION OF URANIUM AND THORIUM FROM PROTACTINIUM AND FISSION PRODUCTS

    DOEpatents

    Rainey, R.H.; Moore, J.G.

    1962-08-14

    A liquid-liquid extraction process was developed for recovering thorium and uranium values from a neutron-irradiated thorium composition. They are separated in a solvent extraction system comprising a first end extraction stage for introducing an aqueous feed containing thorium and uranium into the system, a plurality of intermediate extraction stages, and a second end extraction stage for introducing an aqueous-immiscible selective organic solvent for thorium and uranium into countercurrent contact with the aqueous feed. A nitrate ion-deficient aqueous feed solution containing thorium and uranium was introduced into the first end extraction stage in countercurrent contact with the organic solvent entering the system from the second end extraction stage, while introducing an aqueous solution of salting nitric acid into any one of the intermediate extraction stages of the system. The resultant thorium- and uranium-laden organic solvent was removed at a point preceding the first end extraction stage of the system. (AEC)

  18. Seafloor Topographic Analysis in Staged Ocean Resource Exploration

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Okawa, M.; Osawa, K.; Kadoshima, K.; Asakawa, E.; Sumi, T.

    2017-12-01

    J-MARES (Research and Development Partnership for Next Generation Technology of Marine Resources Survey, JAPAN) has been designing a low-expense, high-efficiency exploration system for seafloor hydrothermal massive sulfide deposits under the "Cross-ministerial Strategic Innovation Promotion Program (SIP)" granted by the Cabinet Office, Government of Japan, since 2014. We designed a method to focus the mineral-deposit prospective area in multiple stages (regional survey, semi-detailed survey and detailed survey) by extracting topographic features of some well-known seafloor massive sulfide deposits through seafloor topographic analysis of bathymetric survey data. We applied this procedure to an area of interest of more than 100 km x 100 km over the Okinawa Trough, including some known seafloor massive sulfide deposits. In addition, we created a three-dimensional model of the seafloor topography by the SfM (Structure from Motion) technique, using multiple images of chimneys distributed around a well-known seafloor massive sulfide deposit, taken with a Hi-Vision camera mounted on an ROV during the detailed survey. Topographic features of the chimneys were extracted by measuring the resulting three-dimensional model. As a result, it was possible to estimate the shape of seafloor sulfides such as chimneys to be mined from the three-dimensional model created from camera images taken with the ROV. In this presentation, we will discuss focusing the mineral-deposit prospective area in multiple stages by seafloor topographic analysis within the exploration system for seafloor massive sulfide deposits, and also discuss the three-dimensional model of seafloor topography created from seafloor images taken with the ROV.

  19. Protein extraction from human anagen head hairs 1-millimeter or less in total length.

    PubMed

    Carlson, Traci L; Moini, Mehdi; Eckenrode, Brian A; Allred, Brent M; Donfack, Joseph

    2018-04-01

    A simple method for extracting protein from human anagen (i.e., actively growing hair stage) head hairs was developed in this study for cases of limited sample availability and/or studies of specific micro-features within a hair. The distinct feature segments of the hair from one donor were divided lengthwise (i.e., each of ∼200-400 μm) and then pooled for three individual hairs to form a total of eight composite hair samples (i.e., each of ∼1 mm or less in total length). The proteins were extracted, digested using trypsin, and characterized via nano-flow liquid chromatography tandem-mass spectrometry (nLC-MS/MS). A total of 63 proteins were identified from all eight protein samples analyzed, of which 60% were keratin and keratin-associated proteins. The major hair keratins identified are consistent with previous studies using fluorescence in situ hybridization and nLC-MS/MS, while requiring over 400-8000-fold less sample. The protein extraction method from micro-sized human head hairs described in this study will enable proteomic analysis of biological evidence in cases of limited sample availability and will complement hair research, for example, research seeking to develop alternative non-DNA-based techniques for comparing questioned to known hairs and to understand the biochemistry of hair decomposition.

  20. TU-F-CAMPUS-J-02: Evaluation of Textural Feature Extraction for Radiotherapy Response Assessment of Early Stage Breast Cancer Patients Using Diffusion Weighted MRI and Dynamic Contrast Enhanced MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Y; Wang, C; Horton, J

    Purpose: To investigate the feasibility of using classic textural feature extraction in radiotherapy response assessment, we studied a unique cohort of early stage breast cancer patients with paired pre- and post-radiation Diffusion Weighted MRI (DWI-MRI) and Dynamic Contrast Enhanced MRI (DCE-MRI). Methods: 15 female patients from our prospective phase I trial evaluating preoperative radiotherapy were included in this retrospective study. Each patient received a single-fraction radiation treatment, and DWI and DCE scans were conducted before and after the radiotherapy. DWI scans were acquired using a spin-echo EPI sequence with diffusion weighting factors of b = 0 and b = 500 s/mm{sup 2}, and the apparent diffusion coefficient (ADC) maps were calculated. DCE-MRI scans were acquired using a T{sub 1}-weighted 3D SPGR sequence with a temporal resolution of about 1 minute. The contrast agent (CA) was intravenously injected with a 0.1 mmol/kg bodyweight dose at 2 ml/s. Two parameters, the volume transfer constant (K{sup trans}) and k{sub ep}, were analyzed using the two-compartment Tofts kinetic model. For DCE parametric maps and ADC maps, 33 textural features were generated from the clinical target volume (CTV) in a 3D fashion using the classic gray level co-occurrence matrix (GLCOM) and gray level run length matrix (GLRLM). The Wilcoxon signed-rank test was used to determine the significance of each texture feature's change after the radiotherapy. The significance level was set to 0.05 with Bonferroni correction. Results: For ADC maps calculated from DWI-MRI, 24 out of 33 CTV features changed significantly after the radiotherapy. For DCE-MRI pharmacokinetic parameters, all 33 CTV features of K{sup trans} and all 33 features of k{sub ep} changed significantly. Conclusion: Initial results indicate that the significantly changed classic texture features are sensitive to radiation-induced changes and can be used for assessment of radiotherapy response in breast cancer.
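The gray level co-occurrence matrix behind the GLCOM features above can be sketched in a few lines of pure Python for a single 2D offset. The study computed 33 features in 3D; this hedged example shows only the normalized matrix and two standard Haralick-style features, with illustrative names:

```python
def glcm_features(image, levels, dx=1, dy=0):
    """Normalized gray level co-occurrence matrix for one pixel offset
    (dx, dy), plus two classic texture features: contrast and energy.
    image is a 2D list of integer gray levels in [0, levels)."""
    rows, cols = len(image), len(image[0])
    glcm = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                glcm[image[y][x]][image[y2][x2]] += 1   # count the pair
                total += 1
    for i in range(levels):                              # normalize to a
        for j in range(levels):                          # joint distribution
            glcm[i][j] /= total
    contrast = sum(glcm[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(p * p for row in glcm for p in row)
    return contrast, energy
```

Contrast grows with local gray-level differences, while energy is maximal for homogeneous textures; response assessment then reduces to testing how such features change pre- vs post-treatment.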

  1. Self-evaluated automatic classifier as a decision-support tool for sleep/wake staging.

    PubMed

    Charbonnier, S; Zoubek, L; Lesecq, S; Chapotot, F

    2011-06-01

    An automatic sleep/wake stage classifier that deals with the presence of artifacts and provides a confidence index with each decision is proposed. The decision system is composed of two stages: the first stage checks each 20-s epoch of polysomnographic signals (EEG, EOG and EMG) for the presence of artifacts and selects the artifact-free signals. The second stage classifies the epoch using one classifier selected out of four, using feature inputs extracted from the artifact-free signals only. A confidence index is associated with each decision made, depending on the classifier used and on the class assigned, so that the user's confidence in the automatic decision is increased. The two-stage system was tested on a large database of 46 night recordings. It reached an overall accuracy of 85.5%, with improved ability to discern NREM I stage from REM sleep. It was shown that only 7% of the database was classified with a low confidence index, and thus should be re-evaluated by a physiologist expert, which makes the system an efficient decision-support tool. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Two-Stage Separation of V(IV) and Al(III) by Crystallization and Solvent Extraction from Aluminum-Rich Sulfuric Acid Leaching Solution of Stone Coal

    NASA Astrophysics Data System (ADS)

    Shi, Qihua; Zhang, Yimin; Liu, Tao; Huang, Jing; Liu, Hong

    2017-10-01

    To improve the separation of V(IV) and Al(III) from aluminum-rich sulfuric acid leaching solution of stone coal, a two-stage separation by crystallization and solvent extraction has been developed. A co-extraction coefficient (k) was put forward to comprehensively evaluate the extent of co-extraction in different solutions. In the crystallization stage, 68.2% of the aluminum can be removed from the solution. In the solvent extraction stage, vanadium was selectively extracted from the crystallization mother solution using di-2-ethylhexyl phosphoric acid/tri-n-butyl phosphate, followed by efficient stripping with H2SO4. A V2O5 product with a purity of 98.39% and only 0.10% Al was obtained after oxidation, precipitation, and calcination. Compared with vanadium extraction from solution without crystallization, the number of counter-current extraction stages for vanadium can be decreased from 6 to 3, and the co-extraction coefficient (k) decreased from 2.51 to 0.58 with the two-stage separation. It is suggested that aluminum removal by crystallization can evidently weaken the influence of aluminum co-extraction on vanadium extraction and improve the selectivity of solvent extraction for vanadium.

  3. Detection of breast cancer in automated 3D breast ultrasound

    NASA Astrophysics Data System (ADS)

    Tan, Tao; Platel, Bram; Mus, Roel; Karssemeijer, Nico

    2012-03-01

    Automated 3D breast ultrasound (ABUS) is a novel imaging modality in which motorized scans of the breasts are made with a wide transducer through a membrane under modest compression. The technology has gained high interest and may become widely used in screening of dense breasts, where the sensitivity of mammography is poor. ABUS has a high sensitivity for detecting solid breast lesions. However, reading ABUS images is time-consuming, and subtle abnormalities may be missed. Therefore, we are developing a computer-aided detection (CAD) system to help reduce reading time and errors. In the multi-stage system we propose, segmentations of the breast and nipple are performed, providing landmarks for the detection algorithm. Subsequently, voxel features characterizing coronal spiculation patterns, blobness, contrast, and locations with respect to landmarks are extracted. Using an ensemble of classifiers, a likelihood map indicating potential malignancies is computed. Local maxima in the likelihood map are determined using a local maxima detector and form a set of candidate lesions in each view. These candidates are further processed in a second detection stage, which includes region segmentation, feature extraction and a final classification. Region segmentation is performed using a 3D spiral-scanning dynamic programming method. Region features include descriptors of shape, acoustic behavior and texture. Performance was determined using a 78-patient dataset with 93 images, including 50 malignant lesions, with 10-fold cross-validation. Using FROC analysis, we found that the system obtains a lesion sensitivity of 60% and 70% at 2 and 4 false positives per image, respectively.

  4. Semantic and topological classification of images in magnetically guided capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Mewes, P. W.; Rennert, P.; Juloski, A. L.; Lalande, A.; Angelopoulou, E.; Kuth, R.; Hornegger, J.

    2012-03-01

    Magnetically-guided capsule endoscopy (MGCE) is a nascent technology with the goal of allowing the steering of a capsule endoscope inside a water-filled stomach through an external magnetic field. We developed a classification cascade for MGCE images which groups images into semantic and topological categories. Results can be used in a post-procedure review or as a starting point for algorithms classifying pathologies. The first semantic classification step discards over-/under-exposed images as well as images with a large amount of debris. The second topological classification step groups images with respect to their position in the upper gastrointestinal tract (mouth, esophagus, stomach, duodenum). In the third stage, two parallel classification steps distinguish topologically different regions inside the stomach (cardia, fundus, pylorus, antrum, peristaltic view). For image classification, global image features and local texture features were applied and their performance was evaluated. We show that the third classification step can be improved by a bubble and debris segmentation because it limits feature extraction to discriminative areas only. We also investigated the impact of segmenting intestinal folds on the identification of different semantic camera positions. The results of classification with a support vector machine show the significance of color histogram features for the classification of corrupted images (97%). Features extracted from intestinal fold segmentation led only to a minor improvement (3%) in discriminating different camera positions.

  5. Extraction of endoscopic images for biomedical figure classification

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; You, Daekeun; Chachra, Suchet; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2015-03-01

    Modality filtering is an important feature in biomedical image searching systems and may significantly improve the retrieval performance of the system. This paper presents a new method for extracting endoscopic image figures from photograph images in biomedical literature, which are found to have highly diverse content and large variability in appearance. Our proposed method consists of three main stages: tissue image extraction, endoscopic image candidate extraction, and ophthalmic image filtering. For tissue image extraction we use image patch level clustering and MRF relabeling to detect images containing skin/tissue regions. Next, we find candidate endoscopic images by exploiting the round shape characteristics that commonly appear in these images. However, this step needs to compensate for images where endoscopic regions are not entirely round. In the third step we filter out the ophthalmic images which have shape characteristics very similar to the endoscopic images. We do this by using text information, specifically, anatomy terms, extracted from the figure caption. We tested and evaluated our method on a dataset of 115,370 photograph figures, and achieved promising precision and recall rates of 87% and 84%, respectively.

  6. Correlative feature analysis of FFDM images

    NASA Astrophysics Data System (ADS)

    Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene

    2008-03-01

    Identifying the corresponding image pair of a lesion is an essential step for combining information from different views of the lesion to improve the diagnostic ability of both radiologists and CAD systems. Because of the non-rigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this study, we present a computerized framework that differentiates the corresponding images from different views of a lesion from non-corresponding ones. A dual-stage segmentation method, which employs an initial radial gradient index (RGI)-based segmentation and an active contour model, was first applied to extract mass lesions from the surrounding tissues. Then various lesion features were automatically extracted from each of the two views of each lesion to quantify the characteristics of margin, shape, size, texture and context of the lesion, as well as its distance to the nipple. We employed a two-step method to select an effective subset of features, and combined it with a BANN to obtain a discriminant score, which yielded an estimate of the probability that the two images are of the same physical lesion. ROC analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing between corresponding and non-corresponding pairs. By using an FFDM database with 124 corresponding image pairs and 35 non-corresponding pairs, the distance feature yielded an AUC (area under the ROC curve) of 0.8 with leave-one-out evaluation by lesion, and the feature subset, which includes the distance feature, lesion size and lesion contrast, yielded an AUC of 0.86. The improvement from using multiple features was statistically significant compared to single-feature performance (p < 0.001).
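The AUC reported above is equivalent to the Mann-Whitney statistic, so it can be computed without explicitly tracing the ROC curve: it is the probability that a randomly chosen positive (corresponding pair) scores higher than a randomly chosen negative (non-corresponding pair), counting ties as one half. A minimal illustrative sketch, not the study's evaluation code:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic.
    labels: 1 for positive (corresponding pair), 0 for negative.
    scores: classifier/feature output, higher = more likely positive."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # count positive-vs-negative "wins"; ties contribute half a win
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```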

  7. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by a novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.

  8. Research on the feature extraction and pattern recognition of the distributed optical fiber sensing signal

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan

    2014-09-01

    In this paper, feature extraction and pattern recognition of the distributed optical fiber sensing signal have been studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction and wavelet packet Shannon entropy feature extraction to obtain characteristic vectors for the sensing signals (such as speech, wind, thunder and rain signals, etc.), and then perform pattern recognition via an RBF neural network. The performances of the three feature extraction methods are compared according to the results. The MFCC characteristic vector is chosen to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six layers by the Daubechies wavelet packet transform, and the resulting 64 frequency constituents are extracted as the characteristic vector. In the process of pattern recognition, the value of the diffusion coefficient is introduced to increase the recognition accuracy, while keeping the samples for testing the algorithm the same. Recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the performance of the wavelet packet energy feature extraction method is the worst.
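The wavelet packet Shannon entropy feature that performed best can be sketched with a full wavelet packet tree. The paper used a Daubechies wavelet with six decomposition layers (64 bands); to keep the example short and dependency-free, this sketch uses the Haar (db1) filter pair and a configurable depth, with illustrative names:

```python
import math

def haar_step(signal):
    # one orthonormal Haar analysis step: half-band approximation and detail
    a = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2.0)
         for i in range(len(signal) // 2)]
    d = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2.0)
         for i in range(len(signal) // 2)]
    return a, d

def wavelet_packet_energies(signal, depth):
    # full wavelet packet tree: every node is split at every level,
    # giving 2**depth frequency bands (signal length must be divisible
    # by 2**depth for this simple sketch)
    nodes = [signal]
    for _ in range(depth):
        nxt = []
        for node in nodes:
            a, d = haar_step(node)
            nxt.extend([a, d])
        nodes = nxt
    return [sum(x * x for x in node) for node in nodes]

def shannon_entropy_feature(energies):
    # Shannon entropy of the normalized band-energy distribution
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)
```

Because the Haar steps are orthonormal, the band energies sum to the signal energy; signals whose energy concentrates in few bands (e.g. tonal disturbances) get low entropy, while broadband disturbances get high entropy, which is what makes the scalar useful as a feature.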

  9. Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures

    NASA Astrophysics Data System (ADS)

    Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.

    2013-05-01

    An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating for (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history for the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, removing in this way phase ambiguities, reducing possible imperfections of the first step, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated in its applications to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-target distances.

  10. Extraction and classification of 3D objects from volumetric CT data

    NASA Astrophysics Data System (ADS)

    Song, Samuel M.; Kwon, Junghyun; Ely, Austin; Enyeart, John; Johnson, Chad; Lee, Jongkyu; Kim, Namho; Boyd, Douglas P.

    2016-05-01

    We propose an Automatic Threat Detection (ATD) algorithm for Explosive Detection Systems (EDS) using our multi-stage Segmentation and Carving (SC) step followed by a Support Vector Machine (SVM) classifier. The multi-stage Segmentation and Carving step extracts all suspect 3-D objects. A feature vector is then constructed for each extracted object, and the feature vector is classified by an SVM previously trained using a set of ground-truth threat and benign objects. The trained SVM classifier has been shown to be effective in the classification of different types of threat materials. The proposed ATD algorithm robustly deals with CT data that are prone to artifacts due to scatter and beam hardening, as well as other systematic idiosyncrasies of the CT data. Furthermore, the proposed ATD algorithm is amenable to including newly emerging threat materials as well as to accommodating data from newly developing sensor technologies. The efficacy of the proposed ATD algorithm with the SVM classifier is demonstrated by the Receiver Operating Characteristic (ROC) curve, which relates Probability of Detection (PD) as a function of Probability of False Alarm (PFA). Tests performed using CT data of passenger bags show excellent performance characteristics.

  11. Robot acting on moving bodies (RAMBO): Preliminary results

    NASA Technical Reports Server (NTRS)

    Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madju; Harwood, David

    1989-01-01

    A robot system called RAMBO is being developed. It is equipped with a camera and, given a sequence of simple tasks, can perform these tasks on a moving object. RAMBO is given a complete geometric model of the object. A low-level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to relative locations near the object sufficient for achieving the tasks. More specifically, low-level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Trajectories are then created using parametric cubic splines between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.

  12. A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement

    PubMed Central

    Hao, Yansong; Song, Liuyang; Tang, Gang; Yuan, Hongfang

    2018-01-01

    Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues: the choice of the sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients, containing only the transient components, is achieved through iteration. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is then omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract the weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency. PMID:29597280
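With the sparse basis fixed to a unit matrix, as in the method above, a single BP-style sparse coding step has the well-known closed-form solution of elementwise soft-thresholding: small (noise) samples are shrunk to zero while large (impulsive) samples survive. A hedged one-function sketch of that shrinkage, not the authors' modified MM iteration:

```python
import math

def soft_threshold(y, lam):
    """Elementwise minimizer of 0.5*(x - y)^2 + lam*|x|: shrink each
    sample of y toward zero by lam. Noise below the threshold vanishes;
    large fault impulses are kept (attenuated by lam)."""
    return [math.copysign(max(abs(v) - lam, 0.0), v) for v in y]
```

Envelope analysis would then be applied to the surviving impulses to read off the fault characteristic frequency.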

  13. A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.

    PubMed

    Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang

    2018-03-28

    Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues: the choice of the sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients, containing only the transient components, is achieved through iteration. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is then omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract the weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.

  14. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information. It can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm obtains a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation due to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm only needs on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
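The square-root-of-N query count of Grover's algorithm can be checked with a small statevector simulation of its two steps, the oracle sign flip and the inversion about the mean (illustrative code, unrelated to the palmprint database itself):

```python
import math

def grover_search(n, target, iterations=None):
    """Simulate Grover's algorithm over n database items and return the
    probability of measuring the marked index. The standard iteration
    count is about (pi/4) * sqrt(n)."""
    if iterations is None:
        iterations = int(round(math.pi / 4.0 * math.sqrt(n)))
    amp = [1.0 / math.sqrt(n)] * n          # uniform superposition
    for _ in range(iterations):
        amp[target] = -amp[target]          # oracle: flip marked amplitude
        mean = sum(amp) / n                 # diffusion: inversion about mean
        amp = [2.0 * mean - a for a in amp]
    return amp[target] ** 2
```

For n = 4 a single iteration already finds the marked item with certainty, and for larger n roughly (π/4)√n iterations drive the success probability close to 1, versus ~n/2 classical lookups on average.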

  15. Bimodal spectroscopic evaluation of ultra violet-irradiated mouse skin inflammatory and precancerous stages: instrumentation, spectral feature extraction/selection and classification (k-NN, LDA and SVM)

    NASA Astrophysics Data System (ADS)

    Díaz-Ayil, G.; Amouroux, M.; Blondel, W. C. P. M.; Bourg-Heckly, G.; Leroux, A.; Guillemin, F.; Granjon, Y.

    2009-07-01

    This paper deals with the development and application of in vivo spatially-resolved bimodal spectroscopy (AutoFluorescence AF and Diffuse Reflectance DR), to discriminate various stages of skin precancer in a preclinical model (UV-irradiated mouse): Compensatory Hyperplasia CH, Atypical Hyperplasia AH and Dysplasia D. A programmable instrumentation was developed for acquiring AF emission spectra using 7 excitation wavelengths: 360, 368, 390, 400, 410, 420 and 430 nm, and DR spectra in the 390-720 nm wavelength range. After various steps of intensity spectra preprocessing (filtering, spectral correction and intensity normalization), several sets of spectral characteristics were extracted and selected based on their discrimination power statistically tested for every pair-wise comparison of histological classes. Data reduction with Principal Components Analysis (PCA) was performed and 3 classification methods were implemented (k-NN, LDA and SVM), in order to compare diagnostic performance of each method. Diagnostic performance was studied and assessed in terms of sensitivity (Se) and specificity (Sp) as a function of the selected features, of the combinations of 3 different inter-fibers distances and of the numbers of principal components, such that: Se and Sp ≈ 100% when discriminating CH vs. others; Sp ≈ 100% and Se > 95% when discriminating Healthy vs. AH or D; Sp ≈ 74% and Se ≈ 63% for AH vs. D.
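    The PCA-then-classifier comparison described above can be sketched generically with scikit-learn. The data here are synthetic stand-ins for the preprocessed AF/DR spectra; the class sizes, noise levels, and number of principal components are all illustrative assumptions, not the study's values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic two-class "spectra": 60 samples per class, 200 wavelengths,
# with a small mean shift between classes (hypothetical stand-in data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (60, 200)),
               rng.normal(0.5, 1.0, (60, 200))])
y = np.array([0] * 60 + [1] * 60)

# PCA for data reduction, then each of the three classifiers in turn
for name, clf in [("k-NN", KNeighborsClassifier(5)),
                  ("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="linear"))]:
    pipe = make_pipeline(PCA(n_components=10), clf)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.2f}")
```

    Cross-validated accuracy plays the role of the Se/Sp assessment above; in the real study, performance was additionally broken down by inter-fiber distance and number of principal components.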

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmon, S; Jeraj, R; Galavis, P

    Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs 3D (Wilcoxon α<0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation between feature values extracted in 2D and in 3D was poor (R<0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with ∣R∣>0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ=6%) than in 3D (σ<1%) extraction.
Conclusion: Sensitivity and correlation of various texture features were shown to differ significantly between 2D and 3D extraction. Additionally, inter-feature correlations were more sensitive to reconstruction variation using single-plane extraction. This work highlights a need for standardized feature extraction/selection techniques in radiomics.
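    The 2D-vs-3D distinction can be made concrete with a toy grey-level co-occurrence computation. This is a hypothetical sketch, not the study's feature set: an in-plane offset gives identical counts whether applied slice by slice (2D) or to the whole volume, while 3D extraction additionally admits through-plane offsets.

```python
import numpy as np

def cooccurrence(arr, offset, levels):
    # Count co-occurring grey-level pairs at a fixed voxel offset.
    g = np.zeros((levels, levels), dtype=int)
    for idx in np.ndindex(arr.shape):
        nbr = tuple(i + o for i, o in zip(idx, offset))
        if all(0 <= n < s for n, s in zip(nbr, arr.shape)):
            g[arr[idx], arr[nbr]] += 1
    return g

def contrast(g):
    # GLCM contrast feature: sum over (i - j)^2 * p(i, j).
    p = g / g.sum()
    i, j = np.indices(g.shape)
    return float(((i - j) ** 2 * p).sum())

rng = np.random.default_rng(2)
vol = rng.integers(0, 4, (8, 8, 8))   # toy volume quantized to 4 levels

# 2D extraction: in-plane offset applied slice by slice along the axial axis
g2d = sum(cooccurrence(vol[z], (0, 1), 4) for z in range(vol.shape[0]))
# 3D extraction: the same in-plane offset plus a through-plane offset,
# which is the extra spatial information 2D extraction discards
g3d = cooccurrence(vol, (0, 0, 1), 4) + cooccurrence(vol, (1, 0, 0), 4)

print(contrast(g2d), contrast(g3d))
```

    Because the through-plane pairs change the co-occurrence counts, features such as contrast generally take different values under 2D and 3D extraction, which is the effect quantified above.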

  17. Target recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    An important part of object target recognition is feature extraction, which can be divided into hand-designed feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connectivity causes a high risk of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained as a layer-by-layer convolutional neural network (CNN), which extracts features from lower layers to higher layers. The resulting features are more discriminative, which benefits object target recognition.
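    The "lower layers to higher layers" idea can be illustrated with a minimal numpy sketch of stacked convolutions. This is not the paper's network: the kernels here are fixed by hand rather than learned, purely to show how a deeper response aggregates lower-level responses over a larger receptive field.

```python
import numpy as np

def conv2d(img, kernel):
    # 'valid' 2-D cross-correlation: each output pixel is a local
    # weighted sum of the input, i.e. one feature-detector response.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

# A toy image with a vertical step edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0

edge_k = np.array([[-1.0, 1.0]])     # layer 1: a hand-set edge detector
layer1 = relu(conv2d(img, edge_k))

pool_k = np.ones((2, 2)) / 4         # layer 2: aggregates layer-1 responses
layer2 = relu(conv2d(layer1, pool_k))

print(layer1.shape, layer2.shape)    # each layer shrinks the map slightly
```

    In a trained CNN the kernels are learned from data rather than hand-set, and many kernels per layer produce a stack of feature maps; the hierarchical composition shown here is the same.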

  18. Brain tumour classification and abnormality detection using neuro-fuzzy technique and Otsu thresholding.

    PubMed

    Renjith, Arokia; Manjula, P; Mohan Kumar, P

    2015-01-01

    Brain tumour is one of the main causes of increased mortality among children and adults. This paper proposes an improved method based on Magnetic Resonance Imaging (MRI) brain image classification and image segmentation. Automated classification is encouraged by the need for high accuracy when dealing with a human life. Detection of a brain tumour is a challenging problem, due to the high diversity in tumour appearance and ambiguous tumour boundaries. MRI images are chosen for detection of brain tumours because of their suitability for soft-tissue imaging. First, image pre-processing is used to enhance image quality. Second, dual-tree complex wavelet transform multi-scale decomposition is used to analyse the texture of the image. Feature extraction then derives features from the image using the gray-level co-occurrence matrix (GLCM). Next, a Neuro-Fuzzy technique classifies the stage of the brain tumour as benign, malignant or normal based on the texture features. Finally, the tumour location is detected using Otsu thresholding. Classifier performance is evaluated by classification accuracy, and the simulated results show that the proposed classifier provides better accuracy than previous methods.
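    Otsu thresholding itself is well defined: it picks the grey-level threshold that maximizes between-class variance of the histogram. A self-contained sketch on a synthetic bimodal "image" (the image, intensities, and bin count are illustrative assumptions, not the paper's data):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    # Otsu's method: choose the threshold maximizing the between-class
    # variance sigma_B^2(k) = (mu_T*w0 - mu)^2 / (w0*(1 - w0)).
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                    # class-0 (background) weight
    mu = np.cumsum(p * np.arange(bins))  # cumulative mean (in bin units)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    between = np.nan_to_num(between)     # degenerate splits score 0
    k = int(np.argmax(between))
    return edges[k + 1]                  # threshold in image-value units

# Synthetic bimodal image: dim background plus a bright lesion-like square
rng = np.random.default_rng(3)
img = rng.normal(50, 10, (64, 64))
img[20:40, 20:40] = rng.normal(180, 10, (20, 20))

t = otsu_threshold(img)
mask = img > t                           # segmented "tumour" region
```

    On a well-separated bimodal histogram, the threshold lands in the gap between modes and the mask recovers the bright region, which is the role Otsu thresholding plays in the localization step above.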

  19. Vertical Feature Mask Feature Classification Flag Extraction

    Atmospheric Science Data Center

    2013-03-28

      Vertical Feature Mask Feature Classification Flag Extraction This routine demonstrates extraction of the ... in a CALIPSO Lidar Level 2 Vertical Feature Mask feature classification flag value. It is written in Interactive Data Language (IDL) ...
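    The routine described above is written in IDL, but the underlying operation is bit-field extraction from a packed integer flag. A hedged Python sketch of the same idea, assuming (for illustration only) that the feature-type field occupies the low three bits of the 16-bit flag value; the actual bit layout must be checked against the CALIPSO data products documentation:

```python
def feature_type(flag):
    # CALIPSO VFM classification flags pack several fields into one
    # 16-bit integer. Assumption for this sketch: the feature-type
    # field is the low three bits (verify against the official
    # CALIPSO data product catalog before relying on this).
    return flag & 0b111

# Example: a flag whose low three bits are 0b010 (value 2)
print(feature_type(0b1010))
```

    Other fields (quality assessment, sub-type, and so on) would be extracted the same way, by shifting right to the field's starting bit and masking its width.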

  20. Association between dynamic features of breast DCE-MR imaging and clinical response of neoadjuvant chemotherapy: a preliminary analysis

    NASA Astrophysics Data System (ADS)

    Huang, Lijuan; Fan, Ming; Li, Lihua; Zhang, Juan; Shao, Guoliang; Zheng, Bin

    2016-03-01

    Neoadjuvant chemotherapy (NACT) is used increasingly in the management of patients with breast cancer to systemically reduce the size of the primary tumor before surgery and thereby improve survival. The clinical response of patients to NACT is correlated with the reduction or disappearance of the primary tumor, which is important for treatment in the next stage. Recently, dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has been used to evaluate patient response to NACT. To measure this correlation, we extracted dynamic features from DCE-MRI and performed association analysis between these features and the clinical response to NACT. In this study, 59 patients were screened before NACT, of whom 47 showed complete or partial response and 12 showed no response. We segmented the breast areas depicted in each MR image with a computer-aided diagnosis (CAD) scheme, registered images acquired from the sequential MR scan series, and calculated eighteen features from the DCE-MRI data. We applied an SVM to the 18 features to classify patients as responders or non-responders; furthermore, 6 of the 18 features were selected by a Genetic Algorithm to refine the classification. The accuracy, sensitivity and specificity are 87%, 95.74% and 50%, respectively, and the area under the receiver operating characteristic (ROC) curve is 0.79 ± 0.04. This study indicates that DCE-MRI features of breast cancer are associated with the response to NACT; our method could therefore be helpful in evaluating NACT in the treatment of breast cancer.
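    The reported AUC of 0.79 has a direct rank interpretation: the probability that a randomly chosen responder receives a higher classifier score than a randomly chosen non-responder. A small stand-alone sketch with hypothetical scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    # ROC AUC as a rank statistic: fraction of (positive, negative)
    # pairs in which the positive scores higher (ties count 1/2).
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical SVM scores for responders vs non-responders
responders = [0.9, 0.8, 0.75, 0.6]
non_responders = [0.7, 0.4, 0.3]
print(auc(responders, non_responders))   # 11 of 12 pairs correctly ordered
```

    An AUC of 0.5 would mean the scores carry no ranking information; 1.0 would mean responders always outrank non-responders.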

  1. Automated breast tissue density assessment using high order regional texture descriptors in mammography

    NASA Astrophysics Data System (ADS)

    Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun

    2014-03-01

    Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.

  2. Utilizing Chinese Admission Records for MACE Prediction of Acute Coronary Syndrome

    PubMed Central

    Hu, Danqing; Huang, Zhengxing; Chan, Tak-Ming; Dong, Wei; Lu, Xudong; Duan, Huilong

    2016-01-01

    Background: Clinical major adverse cardiovascular event (MACE) prediction for acute coronary syndrome (ACS) is important for a number of applications, including physician decision support, quality-of-care assessment, and efficient healthcare service delivery for ACS patients. Admission records, as typical media containing clinical information on patients at the early stage of their hospitalizations, offer significant potential for proactive MACE prediction. Methods: We propose a hybrid approach for MACE prediction that utilizes a large volume of admission records. First, both a rule-based medical language processing method and a machine learning method (i.e., Conditional Random Fields (CRFs)) are developed to extract essential patient features from unstructured admission records. After that, state-of-the-art supervised machine learning algorithms are applied to construct MACE prediction models from the data. Results: We comparatively evaluate the performance of the proposed approach on a real clinical dataset of 2930 ACS patient samples collected from a Chinese hospital. Our best model achieved 72% AUC in MACE prediction. Compared with two well-known ACS risk score tools, GRACE and TIMI, our learned models performed better by a significant margin. Conclusions: Experimental results reveal that our approach achieves competitive performance in MACE prediction. The comparison of classifiers indicates that the proposed approach generalizes well across datasets produced by different feature extraction methods. Furthermore, our MACE prediction model significantly improved on both GRACE and TIMI, indicating that admission records can effectively support MACE prediction services for ACS patients at the early stage of their hospitalizations. PMID:27649220

  3. A mathematical theory of shape and neuro-fuzzy methodology-based diagnostic analysis: a comparative study on early detection and treatment planning of brain cancer.

    PubMed

    Kar, Subrata; Majumder, D Dutta

    2017-08-01

    Investigation of brain cancer can detect the abnormal growth of tissue in the brain using computed tomography (CT) scans and magnetic resonance (MR) images of patients. The proposed method classifies brain cancer on shape-based feature extraction as either benign or malignant. The authors used input variables such as shape distance (SD) and shape similarity measure (SSM) in fuzzy tools, and used fuzzy rules to evaluate the risk status as an output variable. We presented a classifier neural network system (NNS), namely Levenberg-Marquardt (LM), which is a feed-forward back-propagation learning algorithm used to train the NN for the status of brain cancer, if any, and which achieved satisfactory performance with 100% accuracy. The proposed methodology is divided into three phases. First, we find the region of interest (ROI) in the brain to detect the tumors using CT and MR images. Second, we extract the shape-based features, like SD and SSM, and grade the brain tumors as benign or malignant with the concept of SD function and SSM as shape-based parameters. Third, we classify the brain cancers using neuro-fuzzy tools. In this experiment, we used a 16-sample database with SSM (μ) values and classified the benignancy or malignancy of the brain tumor lesions using the neuro-fuzzy system (NFS). We have developed a fuzzy expert system (FES) and NFS for early detection of brain cancer from CT and MR images. In this experiment, shape-based features, such as SD and SSM, were extracted from the ROI of brain tumor lesions. These shape-based features were considered as input variables and, using fuzzy rules, we were able to evaluate brain cancer risk values for each case. We used an NNS with LM, a feed-forward back-propagation learning algorithm, as a classifier for the diagnosis of brain cancer and achieved satisfactory performance with 100% accuracy. The proposed network was trained with MR image datasets of 16 cases. 
The 16 cases were fed to the ANN with 2 input neurons, one hidden layer of 10 neurons and 2 output neurons. Of the 16-sample database, 10 datasets for training, 3 datasets for validation, and 3 datasets for testing were used in the ANN classification system. From the SSM (µ) confusion matrix, the number of output datasets of true positive, false positive, true negative and false negative was 6, 0, 10, and 0, respectively. The sensitivity, specificity and accuracy were each equal to 100%. The method of diagnosing brain cancer presented in this study is a successful model to assist doctors in the screening and treatment of brain cancer patients. The presented FES successfully identified the presence of brain cancer in CT and MR images using the extracted shape-based features and the use of NFS for the identification of brain cancer in the early stages. From the analysis and diagnosis of the disease, the doctors can decide the stage of cancer and take the necessary steps for more accurate treatment. Here, we have presented an investigation and comparison study of the shape-based feature extraction method with the use of NFS for classifying brain tumors as showing normal or abnormal patterns. The results have proved that the shape-based features with the use of NFS can achieve a satisfactory performance with 100% accuracy. We intend to extend this methodology for the early detection of cancer in other regions such as the prostate region and human cervix.

  4. Iris recognition based on key image feature extraction.

    PubMed

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.

  5. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method for MC clusters is investigated. The first stage targets accurate and time-efficient segmentation of the majority of the particles of an MC cluster, by means of a level set method. The second stage targets shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists’ segmentations quantitatively by two distance metrics (Hausdorff distance HDIST_cluster, average of minimum distance AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted, and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier.
Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± standard error) using tenfold cross-validation. A previously developed B-spline active rays segmentation method was also considered for comparison purposes. Results: Interobserver and intraobserver segmentation agreement (median and [25%, 75%] quartile range) was substantial with respect to the distance metrics HDIST_cluster (2.3 [1.8, 2.9] and 2.5 [2.1, 3.2] pixels) and AMINDIST_cluster (0.8 [0.6, 1.0] and 1.0 [0.8, 1.2] pixels), and moderate with respect to AOM_cluster (0.64 [0.55, 0.71] and 0.59 [0.52, 0.66]). The proposed segmentation method outperformed (0.80 ± 0.04) the B-spline active rays segmentation method (0.69 ± 0.04) in a statistically significant manner (Mann-Whitney U-test, p < 0.05), suggesting the value of the proposed semiautomated method. Conclusions: Results indicate a reliable semiautomated segmentation method for MC clusters based on deformable models, which could be utilized in quantitative image analysis of MC clusters.

  6. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isa, Nor Ashidi Mat

    The development of medical diagnostic systems has been one of the main research fields over the years. The goal of a medical diagnostic system is to provide a nosological system that eases the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essential and requires broad knowledge in order to improve on conventional diagnostic systems. Several approaches to developing medical diagnostic systems have been designed and tested since the early 1960s. Attempts at improving their performance have drawn on the fields of artificial intelligence, statistical analysis, mathematical modelling and engineering theory. With the availability of microcomputers and software development, together with these promising fields, medical diagnostic prototypes could be developed. In general, a medical diagnostic system consists of several stages, namely 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classification. The data acquisition stage plays an important role in converting inputs measured from real-world physical conditions into digital numeric values that can be manipulated by the computer system. Common medical inputs include medical microscopic images, radiographic images, magnetic resonance images (MRI), and medical signals such as the electrocardiogram (ECG) and electroencephalogram (EEG). Normally, scientists or doctors have to deal with a myriad of redundant data. To reduce the complexity of the diagnostic process, only the significant features of the raw data, such as the peak value of the ECG signal or the size of a lesion in a mammogram image, are extracted and considered in subsequent stages. Mathematical models and statistical analyses are then performed to select the most significant features for classification.
Statistical analyses such as principal component analysis and discriminant analysis, as well as mathematical clustering techniques, have been widely used in developing medical diagnostic systems. The selected features are classified using mathematical models that embed engineering theory, such as artificial intelligence, support vector machines, neural networks and neuro-fuzzy systems. These classifiers provide diagnostic results without human intervention. Among many published researches, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first, NeuralPap, is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cervical cancerous cells. The Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system is developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.

  7. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    NASA Astrophysics Data System (ADS)

    Isa, Nor Ashidi Mat

    2015-05-01

    The development of medical diagnostic systems has been one of the main research fields over the years. The goal of a medical diagnostic system is to provide a nosological system that eases the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essential and requires broad knowledge in order to improve on conventional diagnostic systems. Several approaches to developing medical diagnostic systems have been designed and tested since the early 1960s. Attempts at improving their performance have drawn on the fields of artificial intelligence, statistical analysis, mathematical modelling and engineering theory. With the availability of microcomputers and software development, together with these promising fields, medical diagnostic prototypes could be developed. In general, a medical diagnostic system consists of several stages, namely 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classification. The data acquisition stage plays an important role in converting inputs measured from real-world physical conditions into digital numeric values that can be manipulated by the computer system. Common medical inputs include medical microscopic images, radiographic images, magnetic resonance images (MRI), and medical signals such as the electrocardiogram (ECG) and electroencephalogram (EEG). Normally, scientists or doctors have to deal with a myriad of redundant data. To reduce the complexity of the diagnostic process, only the significant features of the raw data, such as the peak value of the ECG signal or the size of a lesion in a mammogram image, are extracted and considered in subsequent stages. Mathematical models and statistical analyses are then performed to select the most significant features for classification.
Statistical analyses such as principal component analysis and discriminant analysis, as well as mathematical clustering techniques, have been widely used in developing medical diagnostic systems. The selected features are classified using mathematical models that embed engineering theory, such as artificial intelligence, support vector machines, neural networks and neuro-fuzzy systems. These classifiers provide diagnostic results without human intervention. Among many published researches, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first, NeuralPap, is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cervical cancerous cells. The Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system is developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.

  8. Experience improves feature extraction in Drosophila.

    PubMed

    Peng, Yueqing; Xi, Wang; Zhang, Wei; Zhang, Ke; Guo, Aike

    2007-05-09

    Previous exposure to a pattern in the visual scene can enhance subsequent recognition of that pattern in many species from honeybees to humans. However, whether previous experience with a visual feature of an object, such as color or shape, can also facilitate later recognition of that particular feature from multiple visual features is largely unknown. Visual feature extraction is the ability to select the key component from multiple visual features. Using a visual flight simulator, we designed a novel protocol for visual feature extraction to investigate the effects of previous experience on visual reinforcement learning in Drosophila. We found that, after conditioning with a visual feature of objects among combinatorial shape-color features, wild-type flies exhibited poor ability to extract the correct visual feature. However, the ability for visual feature extraction was greatly enhanced in flies trained previously with that visual feature alone. Moreover, we demonstrated that flies might possess the ability to extract the abstract category of "shape" but not a particular shape. Finally, this experience-dependent feature extraction is absent in flies with defective mushroom bodies (MBs), one of the central brain structures in Drosophila. Our results indicate that previous experience can enhance visual feature extraction in Drosophila and that MBs are required for this experience-dependent visual cognition.

  9. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, a termination condition for the iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm; it adjusts the parameters of the termination condition continually during decomposition to avoid noise. Third, the composite dictionaries are enriched with the modulation dictionary, which captures one of the important structural characteristics of gear fault signals. Meanwhile, the iteration termination settings, sub-feature dictionary selections, and operational efficiency of CD-MaMP and CD-SaMP are discussed using simulated gear vibration signals with noise. The simulated sensor-based vibration signal results show that the attenuation-coefficient-based termination condition greatly enhances decomposition sparsity and achieves good noise reduction. Furthermore, the modulation dictionary achieves a better matching effect than the Fourier dictionary, and CD-SaMP has a great advantage in sparsity and efficiency compared with CD-MaMP.
The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective. PMID:25207870
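    The single-atom greedy step at the heart of matching pursuit can be sketched as follows. This is a plain matching pursuit on a toy sinusoid dictionary, not the paper's composite/modulation dictionary or its CD-SaMP optimizations; the dictionary and signal are illustrative assumptions.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    # Greedy single-atom matching pursuit: at each step pick the
    # unit-norm atom most correlated with the residual, record its
    # coefficient, and subtract its contribution from the residual.
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_atoms):
        corr = dictionary @ residual          # atoms are unit-norm rows
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[k]
    return coeffs, residual

# Toy dictionary: unit-norm cosines at distinct frequencies (a stand-in
# for the paper's composite Fourier/modulation dictionary)
n = 128
t = np.arange(n)
atoms = np.array([np.cos(2 * np.pi * f * t / n) for f in range(1, 9)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

signal = 3.0 * atoms[2] + 0.5 * atoms[6]      # sparse in this dictionary
coeffs, residual = matching_pursuit(signal, atoms, 2)
```

    Because the toy atoms are orthogonal, two iterations recover the two active atoms exactly; in practice, the attenuation-coefficient termination condition described above would decide when to stop iterating instead of a fixed atom count.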

  10. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. 
The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm is feasible and effective.
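A minimal numpy sketch of the greedy single-atom matching-pursuit loop with an attenuation-coefficient stopping rule, in the spirit of CD-SaMP; the dictionary layout (unit-norm atoms as rows), the `attenuation` parameter and the stopping test are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def matching_pursuit(signal, dictionary, attenuation=0.05, max_iter=50):
    """Greedy single-atom matching pursuit (illustrative sketch).

    dictionary: (n_atoms, n_samples) array of unit-norm atoms.
    Stops when the newest coefficient falls below `attenuation` times the
    first (largest) coefficient -- a simple stand-in for the paper's
    attenuation-coefficient termination condition.
    """
    residual = np.asarray(signal, dtype=float).copy()
    atoms, coeffs = [], []
    first = None
    for _ in range(max_iter):
        corr = dictionary @ residual          # inner product with every atom
        k = int(np.argmax(np.abs(corr)))      # best single atom this round
        c = corr[k]
        if first is None:
            first = abs(c)
        if first == 0 or abs(c) < attenuation * first:
            break                             # attenuation-based termination
        residual = residual - c * dictionary[k]
        atoms.append(k)
        coeffs.append(c)
    return atoms, coeffs, residual
```

With an orthonormal dictionary the loop recovers the contributing atoms exactly; with redundant composite dictionaries it simply picks the best-correlated atom each round.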

  11. Latent Dirichlet Allocation (LDA) Model and kNN Algorithm to Classify Research Project Selection

    NASA Astrophysics Data System (ADS)

    Safi’ie, M. A.; Utami, E.; Fatta, H. A.

    2018-03-01

Universitas Sebelas Maret has a teaching staff of more than 1500 people, and one of its tasks is to carry out research. On the other hand, funding support for research and community service is limited, so submissions must be evaluated to determine which research and community-service (P2M) proposals are funded. At the selection stage, research proposal documents are collected as unstructured data, and the volume of stored data is very large. Extracting the information contained in these documents requires text-mining technology, which is applied to gain knowledge from the documents by automating information extraction. In this article we apply Latent Dirichlet Allocation (LDA) to the documents as a model in the feature extraction process, to obtain terms that represent each document. We then use the k-Nearest Neighbour (kNN) algorithm to classify the documents based on those terms.
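The classification stage can be sketched as plain kNN over per-document topic-proportion vectors of the kind LDA inference would produce; the cosine distance, the labels and the tiny vectors below are illustrative assumptions, not the paper's data:

```python
import math
from collections import Counter

def cosine_distance(a, b):
    """1 - cosine similarity between two topic-proportion vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def knn_classify(query, train_vecs, train_labels, k=3):
    """Label a document by majority vote among its k nearest neighbours
    in topic space (illustrative, not the paper's exact setup)."""
    order = sorted(range(len(train_vecs)),
                   key=lambda i: cosine_distance(query, train_vecs[i]))
    votes = Counter(train_labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]
```

Each vector here stands for a document's inferred topic mixture, so documents about similar topics end up near each other regardless of exact wording.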

  12. Text feature extraction based on deep learning: a review.

    PubMed

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

Selecting text feature items is a basic and important task in text mining and information retrieval. Traditional feature extraction methods require handcrafted features, and hand-designing an effective feature is a lengthy process; deep learning, by contrast, can acquire effective new feature representations from training data for new applications. As a feature extraction method, deep learning has made notable achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which depend mainly on the designers' prior knowledge and cannot fully exploit big data. Deep learning can automatically learn feature representations from big data, using models with millions of parameters. This review first outlines the common methods used in text feature extraction, then surveys the deep learning methods frequently used in text feature extraction and their applications, and finally forecasts the application of deep learning to feature extraction.

  13. Automated Extraction and Classification of Cancer Stage Mentions from Unstructured Text Fields in a Central Cancer Registry

    PubMed Central

    AAlAbdulsalam, Abdulrahman K.; Garvin, Jennifer H.; Redd, Andrew; Carter, Marjorie E.; Sweeny, Carol; Meystre, Stephane M.

    2018-01-01

Cancer stage is one of the most important prognostic parameters in most cancer subtypes. The American Joint Committee on Cancer (AJCC) specifies criteria for staging each cancer type based on tumor characteristics (T), lymph node involvement (N), and tumor metastasis (M), known as the TNM staging system. Information related to cancer stage is typically recorded in clinical narrative text notes and other informal means of communication in the Electronic Health Record (EHR). As a result, human chart abstractors (known as certified tumor registrars) have to search through voluminous amounts of text to extract accurate stage information and resolve discordance between different data sources. This study proposes novel applications of natural language processing and machine learning to automatically extract and classify TNM stage mentions from records at the Utah Cancer Registry. Our results indicate that TNM stages can be extracted and classified automatically with high accuracy (extraction sensitivity: 95.5%–98.4% and classification sensitivity: 83.5%–87%). PMID:29888032

  14. Automated Extraction and Classification of Cancer Stage Mentions from Unstructured Text Fields in a Central Cancer Registry.

    PubMed

    AAlAbdulsalam, Abdulrahman K; Garvin, Jennifer H; Redd, Andrew; Carter, Marjorie E; Sweeny, Carol; Meystre, Stephane M

    2018-01-01

Cancer stage is one of the most important prognostic parameters in most cancer subtypes. The American Joint Committee on Cancer (AJCC) specifies criteria for staging each cancer type based on tumor characteristics (T), lymph node involvement (N), and tumor metastasis (M), known as the TNM staging system. Information related to cancer stage is typically recorded in clinical narrative text notes and other informal means of communication in the Electronic Health Record (EHR). As a result, human chart abstractors (known as certified tumor registrars) have to search through voluminous amounts of text to extract accurate stage information and resolve discordance between different data sources. This study proposes novel applications of natural language processing and machine learning to automatically extract and classify TNM stage mentions from records at the Utah Cancer Registry. Our results indicate that TNM stages can be extracted and classified automatically with high accuracy (extraction sensitivity: 95.5%-98.4% and classification sensitivity: 83.5%-87%).
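The study itself uses trained NLP and machine-learning models. As a much simpler illustration of what a TNM "stage mention" looks like in free text, here is a hedged regex sketch; the pattern is an assumption for compact mentions like "pT2N0M0" and is far less robust than the registry's actual approach:

```python
import re

# Illustrative pattern for compact TNM mentions such as "T2N0M0" or
# "cT3a N1 M0"; real registry narratives are far more varied than this.
TNM_RE = re.compile(
    r"\b([cp]?)T([0-4][a-d]?|is|x)\s*N([0-3][a-c]?|x)\s*M([01]|x)\b",
    re.IGNORECASE,
)

def extract_tnm(text):
    """Return (prefix, T, N, M) tuples found in free text."""
    return [m.groups() for m in TNM_RE.finditer(text)]
```

A rule like this can only find well-formed mentions; the paper's classifiers additionally resolve which mention reflects the patient's actual stage.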

  15. [Artificial intelligence in sleep analysis (ARTISANA)--modelling visual processes in sleep classification].

    PubMed

    Schwaibold, M; Schöller, B; Penzel, T; Bolz, A

    2001-05-01

We describe a novel approach to the problem of automated sleep stage recognition. The ARTISANA algorithm mimics the behaviour of a human expert visually scoring sleep stages (Rechtschaffen and Kales classification). It combines a number of interacting components that imitate the stepwise approach of the human expert with artificial-intelligence techniques. On the basis of parameters extracted at 1-s intervals from the signal curves, artificial neural networks recognize the incidence of typical patterns, e.g. delta activity or K complexes. This is followed by a rule interpretation stage that identifies the sleep stage with the aid of a neuro-fuzzy system while taking account of the context. Validation studies based on the records of 8 patients with obstructive sleep apnoea have confirmed the potential of this approach. Further features of the system include the transparency of the decision-making process and the flexibility to expand the system to cover new patterns and criteria.

  16. Deep Learning and Insomnia: Assisting Clinicians With Their Diagnosis.

    PubMed

    Shahin, Mostafa; Ahmed, Beena; Hamida, Sana Tmar-Ben; Mulaffer, Fathima Lamana; Glos, Martin; Penzel, Thomas

    2017-11-01

Effective sleep analysis is hampered by the lack of automated tools catering to disordered sleep patterns and by cumbersome monitoring hardware. In this paper, we apply deep learning to a set of 57 EEG features, extracted from a maximum of two EEG channels, to accurately differentiate between patients with insomnia and controls with no sleep complaints. We investigated two different approaches to achieve this. The first approach used EEG data from the whole sleep recording irrespective of the sleep stage (stage-independent classification), while the second used only EEG data from the specific sleep stages affected by insomnia (stage-dependent classification). We trained and tested our system using both healthy and disordered sleep recordings collected from 41 controls and 42 primary insomnia patients. When compared with manual assessments, an NREM + REM based classifier had overall discrimination accuracies of 92% and 86% between the two groups using two and one EEG channels, respectively. These results demonstrate that deep learning can be used to assist in the diagnosis of sleep disorders such as insomnia.
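EEG feature sets of this kind typically include per-band spectral power. A minimal sketch of band-power extraction from one epoch via a simple periodogram; the band edges are conventional values, and this does not reproduce the paper's 57-feature set:

```python
import numpy as np

# Conventional EEG frequency bands in Hz; the paper's 57-feature set is
# richer than this and is not reproduced here.
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_powers(epoch, fs):
    """Absolute spectral power per band from a simple periodogram."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in BANDS.items()}
```

For example, a 10 Hz oscillation in an epoch shows up almost entirely in the alpha band.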

  17. Multi-stage circulating fluidized bed syngas cooling

    DOEpatents

    Liu, Guohai; Vimalchand, Pannalal; Guan, Xiaofeng; Peng, WanWang

    2016-10-11

A method and apparatus for cooling hot gas streams in the temperature range 800 °C to 1600 °C using multi-stage circulating fluid bed (CFB) coolers is disclosed. The invention relates to cooling the hot syngas from coal gasifiers in which the hot syngas entrains substances that foul, erode and corrode heat transfer surfaces upon contact in conventional coolers. The hot syngas is cooled by extracting and indirectly transferring heat to heat transfer surfaces with circulating inert solid particles in CFB syngas coolers. The CFB syngas coolers are staged to facilitate generation of steam at multiple conditions and hot boiler feed water that are necessary for power generation in an IGCC process. The multi-stage syngas cooler can include internally circulating fluid bed coolers, externally circulating fluid bed coolers and hybrid coolers that incorporate features of both internally and externally circulating fluid bed coolers. Higher process efficiencies can be realized as the invention can handle hot syngas from various types of gasifiers without the need for a less efficient precooling step.

  18. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes

    PubMed Central

    Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.

    2012-01-01

Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds. PMID:22582038
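The SSV measure is described as a quantification of change in signal entropy over time. One plausible reading can be sketched as per-frame spectral entropy plus its spread across frames; the exact metric used in the study may differ, so treat this as an assumption:

```python
import numpy as np

def spectral_entropy(frame):
    """Shannon entropy (bits) of the normalized FFT power spectrum."""
    power = np.abs(np.fft.rfft(np.asarray(frame, dtype=float))) ** 2
    p = power / power.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

def spectral_structure_variation(signal, frame_len):
    """One plausible reading of SSV: the spread (standard deviation) of
    spectral entropy across consecutive non-overlapping frames."""
    ent = [spectral_entropy(signal[i:i + frame_len])
           for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return float(np.std(ent))
```

A steady tone has near-zero entropy in every frame (hence SSV near zero), while broadband noise has high entropy, which is the intuition behind object-like versus scene-like acoustic structure.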

  19. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation

    PubMed Central

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-01-01

The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments. PMID:27455279
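The classical Otsu method that serves as the baseline here picks the grey level maximizing between-class variance. A compact numpy sketch of that baseline; the paper's improved variant adds more than this:

```python
import numpy as np

def otsu_threshold(gray):
    """Classical Otsu threshold on an 8-bit image: choose the level that
    maximizes the between-class variance of the two resulting classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t  # pixels >= best_t form the foreground class
```

On a strongly bimodal sonar-like image the chosen level falls between the two modes, separating foreground objects from the background.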

  20. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-07-22

The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments.

  1. Epileptic seizure onset detection based on EEG and ECG data fusion.

    PubMed

    Qaraqe, Marwa; Ismail, Muhammad; Serpedin, Erchin; Haneef, Zulfi

    2016-05-01

    This paper presents a novel method for seizure onset detection using fused information extracted from multichannel electroencephalogram (EEG) and single-channel electrocardiogram (ECG). In existing seizure detectors, the analysis of the nonlinear and nonstationary ECG signal is limited to the time-domain or frequency-domain. In this work, heart rate variability (HRV) extracted from ECG is analyzed using a Matching-Pursuit (MP) and Wigner-Ville Distribution (WVD) algorithm in order to effectively extract meaningful HRV features representative of seizure and nonseizure states. The EEG analysis relies on a common spatial pattern (CSP) based feature enhancement stage that enables better discrimination between seizure and nonseizure features. The EEG-based detector uses logical operators to pool SVM seizure onset detections made independently across different EEG spectral bands. Two fusion systems are adopted. In the first system, EEG-based and ECG-based decisions are directly fused to obtain a final decision. The second fusion system adopts an override option that allows for the EEG-based decision to override the fusion-based decision in the event that the detector observes a string of EEG-based seizure decisions. The proposed detectors exhibit an improved performance, with respect to sensitivity and detection latency, compared with the state-of-the-art detectors. Experimental results demonstrate that the second detector achieves a sensitivity of 100%, detection latency of 2.6s, and a specificity of 99.91% for the MAJ fusion case. Copyright © 2016 Elsevier Inc. All rights reserved.
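The fusion logic described above can be sketched with simple logical operators: a majority vote pooling band-wise EEG decisions, direct EEG/ECG fusion, and an override that fires after a run of consecutive EEG seizure flags. The AND fusion rule and the run length of 3 below are illustrative assumptions, not the paper's exact operators:

```python
def majority(decisions):
    """Majority vote over binary detector outputs (1 = seizure)."""
    return int(sum(decisions) > len(decisions) / 2)

def fused_detector(eeg_band_decisions, ecg_decision, history, run_length=3):
    """Fuse per-window EEG and ECG decisions (illustrative sketch).

    eeg_band_decisions: binary decisions from band-specific EEG classifiers
    ecg_decision: binary decision from the HRV-based ECG classifier
    history: running list of past pooled EEG decisions (mutated in place)
    """
    eeg = majority(eeg_band_decisions)          # pool band-wise decisions
    history.append(eeg)
    fused = eeg & ecg_decision                  # direct EEG/ECG fusion (AND)
    if len(history) >= run_length and all(history[-run_length:]):
        fused = 1                               # EEG override after a run of flags
    return fused
```

The override lets a persistent EEG pattern trigger a detection even when the ECG channel disagrees, which is the role of the second fusion system in the paper.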

  2. Training of polyp staging systems using mixed imaging modalities.

    PubMed

    Wimmer, Georg; Gadermayr, Michael; Kwitt, Roland; Häfner, Michael; Tamaki, Toru; Yoshida, Shigeto; Tanaka, Shinji; Merhof, Dorit; Uhl, Andreas

    2018-05-04

In medical image data sets, the number of images is usually quite small. The small number of training samples does not allow classifiers to be trained properly, which leads to massive overfitting to the training data. In this work, we investigate whether increasing the number of training samples by merging datasets from different imaging modalities can be effectively applied to improve predictive performance. Further, we investigate whether the features extracted from the employed image representations differ between imaging modalities and whether domain adaptation helps to overcome these differences. We employ twelve feature extraction methods to differentiate between non-neoplastic and neoplastic lesions. Experiments are performed using four different classifier training strategies, each with a different combination of training data. The setup specifically designed for these experiments enables a fair comparison between the four training strategies. Combining high-definition with high-magnification training data and chromoscopic with non-chromoscopic training data partly improved the results. The use of domain adaptation has only a small effect on the results compared with using non-adapted training data. Merging datasets from different imaging modalities turned out to be partially beneficial when combining high-definition endoscopic data with high-magnification endoscopic data and when combining chromoscopic with non-chromoscopic data. NBI and chromoendoscopy, on the other hand, are mostly too different with respect to the extracted features to combine images of these two modalities for classifier training. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    PubMed

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  4. Integrated feature extraction and selection for neuroimage classification

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Shen, Dinggang

    2009-02-01

    Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
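Parameter optimization above is driven by the area under the ROC curve over bootstrap resamples. A minimal sketch of AUC via the Mann–Whitney statistic plus a bootstrap wrapper; the resampling scheme is simplified relative to the paper's training/testing-set generation:

```python
import random

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann–Whitney statistic: the
    probability that a positive case outscores a negative one
    (ties count one half)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def bootstrap_auc(scores_pos, scores_neg, n_boot=200, seed=0):
    """Mean AUC over bootstrap resamples of each class, of the kind used
    to compare parameter settings (simplified resampling, an assumption)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_boot):
        bp = [rng.choice(scores_pos) for _ in scores_pos]
        bn = [rng.choice(scores_neg) for _ in scores_neg]
        total += auc(bp, bn)
    return total / n_boot
```

Parameter settings would then be ranked by their mean bootstrapped AUC rather than by a single split's accuracy.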

  5. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    NASA Astrophysics Data System (ADS)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  6. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied in large scale or big data. In this paper, MapReduce in Hadoop is investigated for large scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
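The split/map/reduce flow described above can be simulated in a few lines of plain Python: each split is mapped to per-image features independently (the work Hadoop distributes to slave nodes), then reduced into one table (the HDFS aggregation step). The mean-pixel "feature" is a stand-in for a real WAMI feature extractor:

```python
def map_split(split):
    """Map task: extract a feature from every image in one split.
    An 'image' here is (name, pixel list); the stand-in feature is the
    mean pixel value. A real extractor would run per image on a node."""
    return [(name, sum(pixels) / len(pixels)) for name, pixels in split]

def reduce_features(mapped_splits):
    """Reduce task: merge per-split results into one feature table,
    mimicking the aggregation written back to HDFS."""
    table = {}
    for split_result in mapped_splits:
        for name, feature in split_result:
            table[name] = feature
    return table

def run_job(dataset, n_splits=2):
    """Divide the dataset into splits, map each independently, reduce."""
    splits = [dataset[i::n_splits] for i in range(n_splits)]
    return reduce_features(map_split(s) for s in splits)
```

Because each `map_split` call touches only its own split, the map phase parallelizes trivially, which is the point of pushing WAMI feature extraction into MapReduce.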

  7. Continuous EEG signal analysis for asynchronous BCI application.

    PubMed

    Hsu, Wei-Yen

    2011-08-01

In this study, we propose a two-stage recognition system for continuous analysis of electroencephalogram (EEG) signals. Independent component analysis (ICA) and the correlation coefficient are used to automatically eliminate electrooculography (EOG) artifacts. Based on the continuous wavelet transform (CWT) and Student's two-sample t-statistics, active segment selection then detects the location of the active segment in the time-frequency domain. Next, multiresolution fractal feature vectors (MFFVs) are extracted with the proposed modified fractal dimension from wavelet data. Finally, the support vector machine (SVM) is adopted for robust classification of the MFFVs. The EEG signals are continuously analyzed in 1-s segments, with the window moving forward every 0.5 s to simulate asynchronous BCI operation in the two-stage recognition architecture. Each segment is first recognized as a lift or not in the first stage, and is then classified as left or right finger lifting in the second stage if a lift was recognized in the first. Several statistical analyses are used to evaluate the performance of the proposed system. The results indicate that it is a promising system for applications of asynchronous BCI work.
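The 1-s windows advanced by 0.5 s and the two-stage decision can be sketched as follows; the stub classifiers passed in as callables are placeholders for the SVM stages, not the paper's models:

```python
def sliding_segments(signal, fs, win_s=1.0, step_s=0.5):
    """Yield successive `win_s`-second windows advanced by `step_s` seconds."""
    win, step = int(win_s * fs), int(step_s * fs)
    for start in range(0, len(signal) - win + 1, step):
        yield signal[start:start + win]

def two_stage(segment, is_lift, left_or_right):
    """Stage one decides lift vs. no-lift; stage two (left vs. right)
    runs only when stage one flags a lift."""
    if not is_lift(segment):
        return "rest"
    return left_or_right(segment)
```

Running the second stage only on lift-flagged segments keeps the continuous, asynchronous pipeline cheap, since most windows are rest.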

  8. Analysis of essential oils from Voacanga africana seeds at different hydrodistillation extraction stages: chemical composition, antioxidant activity and antimicrobial activity.

    PubMed

    Liu, Xiong; Yang, Dongliang; Liu, Jiajia; Ren, Na

    2015-01-01

In this study, essential oils from Voacanga africana seeds at different extraction stages were investigated. In the chemical composition analysis, 27 compounds representing 86.69-95.03% of the total essential oils were identified and quantified. The main constituents of the essential oils were terpenoids, alcohols and fatty acids, accounting for 15.03-24.36%, 21.57-34.43% and 33.06-57.37%, respectively. The analysis also revealed that essential oils from different extraction stages possessed different chemical compositions. In the antioxidant evaluation, all analysed oils showed similar antioxidant behaviours, and the concentrations of essential oils providing 50% inhibition of DPPH-scavenging activity (IC50) were about 25 mg/mL. In the antimicrobial experiments, essential oils from different extraction stages exhibited different antimicrobial activities; the antimicrobial activity of the oils was affected by the extraction stage. By controlling the extraction stage, it is therefore possible to obtain essential oils with the desired antimicrobial activities.

  9. Non-rigid Reconstruction of Casting Process with Temperature Feature

    NASA Astrophysics Data System (ADS)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Ying; Wang, Lu

    2017-09-01

Off-line reconstruction of rigid scenes has made great progress in the past decade. However, on-line reconstruction of non-rigid scenes is still a very challenging task. The casting process is a non-rigid reconstruction problem: a highly dynamic molding process lacking geometric features. In order to reconstruct the casting process robustly, an on-line fusion strategy is proposed for dynamic reconstruction of the casting process. First, the geometric and flow features of the casting are parameterized as a TSDF (truncated signed distance field), a volumetric block; this parameterization guarantees real-time tracking and optimal deformation of the casting process. Second, the data structure of the volume grid is extended to hold a temperature value, and a temperature interpolation function is built to generate the temperature of each voxel. This data structure allows dynamic tracking of the casting's temperature during its deformation stages. Then, sparse RGB features are extracted from the casting scene to search for correspondences between the geometric representation and the depth constraint; the extracted color data guarantees robust tracking of the flow motion of the casting. Finally, the optimal deformation of the target space is formulated as a nonlinear regularized variational optimization problem, and this optimization step achieves smooth and optimal deformation of the casting process. The experimental results show that the proposed method can reconstruct the casting process robustly and reduce drift in non-rigid reconstruction of the casting.
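The temperature interpolation function on the extended voxel grid is not spelled out in the abstract; a natural sketch, assumed here, is trilinear interpolation of per-voxel temperatures over the eight surrounding grid nodes:

```python
import math

def trilinear(temp, x, y, z):
    """Trilinear interpolation of a per-voxel temperature field `temp`
    (a nested list indexed [x][y][z]) at a continuous point in the grid.
    This is an assumed form of the paper's interpolation function."""
    nx, ny, nz = len(temp), len(temp[0]), len(temp[0][0])
    # Clamp the base voxel so the 2x2x2 neighbourhood stays inside the grid.
    x0 = min(int(math.floor(x)), nx - 2)
    y0 = min(int(math.floor(y)), ny - 2)
    z0 = min(int(math.floor(z)), nz - 2)
    dx, dy, dz = x - x0, y - y0, z - z0
    value = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                weight = ((dx if i else 1.0 - dx) *
                          (dy if j else 1.0 - dy) *
                          (dz if k else 1.0 - dz))
                value += weight * temp[x0 + i][y0 + j][z0 + k]
    return value
```

Storing a temperature alongside the TSDF value per voxel and interpolating it this way lets the temperature field deform continuously with the tracked geometry.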

  10. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites.

    PubMed

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-03-08

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
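The coplanarity classification above rests on principal components analysis. A plain (non-robust) sketch: the plane normal is the eigenvector of the covariance with the smallest eigenvalue, and a neighbourhood counts as planar when its out-of-plane variance is negligible. The paper's robust procedure replaces these classical estimates, so treat this as the textbook baseline only:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane via PCA: the normal is the eigenvector of the
    covariance matrix with the smallest eigenvalue."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues ascending
    return centroid, eigvecs[:, 0], eigvals

def is_coplanar(points, tol=1e-3):
    """A neighbourhood is classified planar when the out-of-plane variance
    (smallest eigenvalue) is negligible next to the in-plane spread.
    The tolerance is an illustrative choice, not the paper's threshold."""
    eigvals = fit_plane(points)[2]
    return bool(eigvals[0] <= tol * max(eigvals[2], 1e-12))
```

On contaminated construction-site scans the classical covariance is easily skewed by outliers, which is exactly why the paper substitutes a robust PCA at this step.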

  11. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    PubMed Central

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062

  12. Remote sensing image segmentation using local sparse structure constrained latent low rank representation

    NASA Astrophysics Data System (ADS)

    Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping

    2016-09-01

    Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation, due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high-spatial-resolution remote sensing images leads to more severe interference between pixels in a local neighborhood, and LatLRR fails to capture the local complex structure information. Therefore, we present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes a local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold-structured feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use a local histogram transform to extract texture local histogram features (LHOG) at each pixel, which can efficiently capture complex and micro-texture patterns. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and the LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt the subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.
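A minimal sketch of the first-stage idea, a per-pixel local histogram feature (the paper's LHOG transform is more elaborate; window size and bin count here are arbitrary):

```python
import numpy as np

def local_histogram_features(img, win=3, bins=4):
    """For each pixel, a normalized gray-level histogram of its win x win
    neighborhood -- a simplified stand-in for the paper's LHOG texture stage."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    feats = np.zeros((h, w, bins))
    edges = np.linspace(img.min(), img.max() + 1e-9, bins + 1)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win].ravel()
            hist, _ = np.histogram(patch, bins=edges)
            feats[i, j] = hist / hist.sum()       # normalize to a distribution
    return feats

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
f = local_histogram_features(img, win=3, bins=2)
print(f[0, 0])  # homogeneous dark corner: all mass in the first bin
```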

  13. A framework for feature extraction from hospital medical data with applications in risk prediction.

    PubMed

    Tran, Truyen; Luo, Wei; Phung, Dinh; Gupta, Sunil; Rana, Santu; Kennedy, Richard Lee; Larkins, Ann; Venkatesh, Svetha

    2014-12-30

    Feature engineering is a time-consuming component of predictive modeling. We propose a versatile platform to automatically extract features for risk prediction, based on a pre-defined and extensible entity schema. The extraction is independent of disease type or risk prediction task. We contrast auto-extracted features with baselines generated from the Elixhauser comorbidities. Hospital medical records were transformed into event sequences, to which filters were applied to extract feature sets capturing diversity in temporal scales and data types. The features were evaluated on a readmission prediction task, compared with baseline feature sets generated from the Elixhauser comorbidities. The prediction model was logistic regression with elastic net regularization. Prediction horizons of 1, 2, 3, 6, and 12 months were considered for four diverse diseases: diabetes, COPD, mental disorders and pneumonia, with derivation and validation cohorts defined on non-overlapping data-collection periods. For unplanned readmissions, the auto-extracted feature set using socio-demographic information and medical records outperformed baselines derived from the socio-demographic information and Elixhauser comorbidities over 20 settings (5 prediction horizons over 4 diseases). In particular, for 30-day prediction, the AUCs are: COPD-baseline: 0.60 (95% CI: 0.57, 0.63), auto-extracted: 0.67 (0.64, 0.70); diabetes-baseline: 0.60 (0.58, 0.63), auto-extracted: 0.67 (0.64, 0.69); mental disorders-baseline: 0.57 (0.54, 0.60), auto-extracted: 0.69 (0.64, 0.70); pneumonia-baseline: 0.61 (0.59, 0.63), auto-extracted: 0.70 (0.67, 0.72). The advantages of auto-extracting standard features from complex medical records, in a disease- and task-agnostic manner, were demonstrated. Auto-extracted features have good predictive power over multiple time horizons. Such feature sets have the potential to form the foundation of complex automated analytic tasks.
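The filter idea, counting event codes over several look-back windows to get one feature per (code, window) pair, can be sketched as follows; the codes and window lengths are illustrative, not the paper's schema:

```python
import numpy as np
from datetime import date

def one_sided_filters(events, pred_date, codes, windows=(30, 90, 365)):
    """Count occurrences of each event code in several look-back windows
    before the prediction date -- one feature per (code, window) pair.
    A hypothetical sketch of turning event sequences into feature vectors."""
    feats = []
    for code in codes:
        days_ago = [(pred_date - d).days for c, d in events if c == code]
        for w in windows:
            feats.append(sum(1 for a in days_ago if 0 < a <= w))
    return np.array(feats, dtype=float)

events = [("E11", date(2014, 6, 1)),    # illustrative diabetes code
          ("E11", date(2014, 11, 20)),
          ("J44", date(2013, 2, 1))]    # illustrative COPD code
x = one_sided_filters(events, date(2014, 12, 1), codes=["E11", "J44"])
print(x)   # counts of E11 in 30/90/365-day windows, then J44
```

A vector like this, one slot per (code, window), is what a sparse linear model such as elastic-net logistic regression can consume directly.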

  14. Comparative analysis of feature extraction methods in satellite imagery

    NASA Astrophysics Data System (ADS)

    Karim, Shahid; Zhang, Ye; Asif, Muhammad Rizwan; Ali, Saad

    2017-10-01

    Feature extraction techniques are used extensively in satellite imagery and are attracting considerable attention for remote sensing applications. State-of-the-art feature extraction methods are appropriate according to the categories and structures of the objects to be detected. Based on the distinctive computations of each feature extraction method, different types of images are selected to evaluate the performance of the methods, such as binary robust invariant scalable keypoints (BRISK), scale-invariant feature transform, speeded-up robust features (SURF), features from accelerated segment test (FAST), histogram of oriented gradients, and local binary patterns. Total computational time is calculated to evaluate the speed of each feature extraction method. The extracted features are counted under shadow regions and preprocessed shadow regions to compare the functioning of each method. We have studied the combination of SURF with FAST and BRISK individually and found very promising results, with an increased number of features and less computational time. Finally, feature matching is compared for all methods.
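Of the compared descriptors, local binary patterns are simple enough to sketch directly; this minimal 8-neighbour version is a generic LBP, not the study's exact configuration:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour local binary pattern code per interior pixel, in
    plain NumPy: each neighbour >= centre sets one bit of an 8-bit code."""
    c = img[1:-1, 1:-1]
    # neighbours in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

img = np.array([[9, 9, 9],
                [0, 5, 9],
                [0, 0, 0]], dtype=int)
print(lbp_3x3(img)[0, 0])  # the four bright neighbours set the low four bits
```

Histograms of these codes over an image region then serve as the texture feature vector.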

  15. Geochemistry of radioactive elements in bituminous sands and sandstones of Permian bitumen deposits of Tatarstan (east of the Russian plate)

    NASA Astrophysics Data System (ADS)

    Mullakaev, A. I.; Khasanov, R. R.; Badrutdinov, O. R.; Kamaletdinov, I. R.

    2018-05-01

    The article investigates geochemical features of Permian (Cisuralian, Ufimian Stage and Biarmian, Kazanian Stage of the General Stratigraphic Scale of Russia) bituminous sands and sandstones located on the territory of the Volga-Ural oil and gas province (Republic of Tatarstan). Natural bitumens are extracted using thermal methods as deposits of high-viscosity oils. In the samples studied, the specific activity of natural radionuclides from the 238U (226Ra), 232Th, and 40K series was measured using gamma spectrometry. As a result of the precipitation of uranium and thorium and their subsequent decay, the accumulation of radium (226Ra and 228Ra) has been shown to occur in the bituminous substance. During exploitation of bitumen-bearing rock deposits (as oil fields), radium in the composition of a water-oil mixture can be extracted to the surface or deposited on sulfate barriers, while being concentrated on the walls of pipes and other equipment. This process requires increased attention to monitoring and inspection of the environmental safety of the exploitation procedure.

  16. Hierarchical representation of shapes in visual cortex—from localized features to figural shape segregation

    PubMed Central

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions, and these are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed processing network must be capable of making accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border-ownership directions and thus achieve segregation of figure and ground. The model thus proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228

  17. Hierarchical representation of shapes in visual cortex-from localized features to figural shape segregation.

    PubMed

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions, and these are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed processing network must be capable of making accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border-ownership directions and thus achieve segregation of figure and ground. The model thus proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  18. Time series analysis of tool wear in sheet metal stamping using acoustic emission

    NASA Astrophysics Data System (ADS)

    Shanbhag, Vignesh V.; Pereira, Michael P.; Rolfe, Bernard F.; Arunachalam, N.

    2017-09-01

    Galling is an adhesive wear mode that often affects the lifespan of stamping tools. Since stamping tools represent significant economic cost, even a slight improvement in maintenance cost is of high importance for the stamping industry. In other manufacturing industries, online tool condition monitoring has been used to prevent tool wear-related failure. However, monitoring the acoustic emission signal from a stamping process is a non-trivial task, since the acoustic emission signal is non-stationary and non-transient. There have been numerous studies examining acoustic emissions in sheet metal stamping, but very few have focused in detail on how the signals change as wear on the tool surface progresses prior to failure. In this study, time-domain analysis was applied to the acoustic emission signals to extract features related to tool wear. To understand the wear progression, accelerated stamping tests were performed using a semi-industrial stamping setup which can perform clamping, piercing, and stamping in a single cycle. The time-domain features related to stamping were computed for the acoustic emission signal of each part. The sidewalls of the stamped parts were scanned using an optical profilometer to obtain profiles of the worn part, and these were qualitatively correlated with the acoustic emission signal. Based on the wear behaviour, the wear data can be divided into three stages: in the first stage, no wear is observed; in the second, adhesive wear is likely to occur; and in the third, severe abrasive plus adhesive wear is likely to occur. Scanning electron microscopy showed the formation of lumps on the stamping tool, which represents galling behavior. The correlation between the time-domain features of the acoustic emission signal and the wear progression identified in this study lays the basis for tool diagnostics in the stamping industry.
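Typical time-domain features used in this kind of analysis can be computed as below; the exact feature set of the study may differ, and the signal here is synthetic:

```python
import numpy as np

def time_domain_features(sig):
    """Common time-domain statistics for condition-monitoring signals:
    RMS, peak, crest factor and kurtosis (a generic set, not necessarily
    the paper's exact features)."""
    rms = np.sqrt(np.mean(sig ** 2))
    peak = np.max(np.abs(sig))
    std = sig.std()
    kurtosis = np.mean((sig - sig.mean()) ** 4) / std ** 4
    return {"rms": rms, "peak": peak, "crest": peak / rms, "kurtosis": kurtosis}

rng = np.random.default_rng(1)
burst = rng.normal(0, 1, 1000)
burst[500] = 20.0      # a galling-like impulsive event in otherwise steady noise
f = time_domain_features(burst)
print(f["kurtosis"] > 3.0, f["crest"] > 5.0)  # impulsiveness raises both
```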

  19. Dynamic detection-rate-based bit allocation with genuine interval concealment for binary biometric representation.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann

    2013-06-01

    Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulting binary output of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we learn that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA based on a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features and 2) a genuine interval concealment technique to alleviate crucial information leakage resulting from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.
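The flavor of detection-rate-based bit allocation can be sketched with a greedy loop (a simplified stand-in, not the actual DROBA algorithm or the paper's dynamic search; the rate table is invented):

```python
def greedy_bit_allocation(detection_rates, total_bits):
    """Greedy allocation in the spirit of detection-rate-based schemes:
    repeatedly give the next bit to the feature whose genuine detection rate
    stays highest after refining its quantization by one more bit.
    detection_rates[i][b] = detection rate of feature i with b bits assigned."""
    n = len(detection_rates)
    bits = [0] * n
    for _ in range(total_bits):
        best = max(range(n),
                   key=lambda i: detection_rates[i][bits[i] + 1]
                   if bits[i] + 1 < len(detection_rates[i]) else -1.0)
        bits[best] += 1
    return bits

# three features; rates shrink as more bits (finer intervals) are assigned
rates = [[1.0, 0.9, 0.5, 0.1],
         [1.0, 0.7, 0.3, 0.1],
         [1.0, 0.95, 0.8, 0.2]]
print(greedy_bit_allocation(rates, 4))  # discriminable features get more bits
```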

  20. Characterization of Adrenal Lesions on Unenhanced MRI Using Texture Analysis: A Machine-Learning Approach.

    PubMed

    Romeo, Valeria; Maurea, Simone; Cuocolo, Renato; Petretta, Mario; Mainenti, Pier Paolo; Verde, Francesco; Coppola, Milena; Dell'Aversana, Serena; Brunetti, Arturo

    2018-01-17

    Adrenal adenomas (AA) are the most common benign adrenal lesions, often characterized based on intralesional fat content as either lipid-rich (LRA) or lipid-poor (LPA). The differentiation of AA, particularly LPA, from nonadenoma adrenal lesions (NAL) may be challenging. Texture analysis (TA) can extract quantitative parameters from MR images. Machine learning is a technique for recognizing patterns that can be applied to medical images by identifying the best combination of TA features to create a predictive model for the diagnosis of interest. To assess the diagnostic efficacy of TA-derived parameters extracted from MR images in characterizing LRA, LPA, and NAL using a machine-learning approach. Retrospective, observational study. Sixty MR examinations, including 20 LRA, 20 LPA, and 20 NAL. Unenhanced T1-weighted in-phase (IP) and out-of-phase (OP) as well as T2-weighted (T2-w) MR images acquired at 3T. Adrenal lesions were manually segmented, placing a spherical volume of interest on IP, OP, and T2-w images. Different selection methods were trained and tested using the J48 machine-learning classifier. The feature selection method that obtained the highest diagnostic performance using the J48 classifier was identified; the diagnostic performance was also compared with that of a senior radiologist by means of McNemar's test. A total of 138 TA-derived features were extracted; among these, four features were selected, extracted from the IP (Short_Run_High_Gray_Level_Emphasis), OP (Mean_Intensity and Maximum_3D_Diameter), and T2-w (Standard_Deviation) images; the J48 classifier obtained a diagnostic accuracy of 80%. The expert radiologist obtained a diagnostic accuracy of 73%. McNemar's test did not show significant differences in terms of diagnostic performance between the J48 classifier and the expert radiologist. Machine learning conducted on MR TA-derived features is a potential tool to characterize adrenal lesions. Level of Evidence: 4. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
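One of the four selected features, Short_Run_High_Gray_Level_Emphasis, comes from a gray-level run-length matrix; a minimal horizontal-runs-only version might look like this (an illustrative sketch, not the study's implementation):

```python
import numpy as np

def srhge(img):
    """Short-Run High-Gray-Level Emphasis from a run-length matrix built over
    horizontal runs only: sum over (gray level g, run length r) of
    count * g^2 / r^2, normalized by the number of runs."""
    runs = {}  # (gray level g, run length r) -> count
    for row in img:
        g, r = row[0], 1
        for v in list(row[1:]) + [None]:   # None flushes the final run
            if v == g:
                r += 1
            else:
                runs[(g, r)] = runs.get((g, r), 0) + 1
                g, r = v, 1
    total = sum(runs.values())
    return sum(c * (g ** 2) / (r ** 2) for (g, r), c in runs.items()) / total

img = np.array([[1, 1, 3],
                [3, 3, 3]])
print(round(srhge(img), 3))  # short runs of high gray levels dominate
```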

  1. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
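The generalized Hough idea for circular features such as crater rims can be sketched as a voting accumulator (a toy fixed-radius version with synthetic edge points, not the patented method):

```python
import numpy as np

def hough_circle(edge_points, shape, radius):
    """Accumulate votes for circle centres of a fixed radius: each edge point
    votes along a circle of that radius around itself, so true centres pile
    up votes from every rim point."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # unbuffered accumulation
    return acc

# synthetic circular "rim" of radius 5 centred at (10, 10)
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
edges = [(10 + 5 * np.sin(a), 10 + 5 * np.cos(a)) for a in angles]
acc = hough_circle(edges, (21, 21), radius=5)
print(np.unravel_index(acc.argmax(), acc.shape))  # peak lands near (10, 10)
```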

  2. Full Intelligent Cancer Classification of Thermal Breast Images to Assist Physician in Clinical Diagnostic Applications

    PubMed Central

    Lashkari, AmirEhsan; Pak, Fatemeh; Firouzmand, Mohammad

    2016-01-01

    Breast cancer is the most common type of cancer among women. The key to treating breast cancer is early detection, because according to many pathological studies more than 75%-80% of all abnormalities are still benign at primary stages; so in recent years, many studies and extensive research have been devoted to early detection of breast cancer with higher precision and accuracy. Infrared breast thermography is an imaging technique based on recording temperature distribution patterns of breast tissue. Compared with mammography, thermography is a more suitable technique because it is noninvasive, non-contact, passive and free of ionizing radiation. In this paper, a fully automatic, high-accuracy technique for classification of suspicious areas in thermogram images, with the aim of assisting physicians in early detection of breast cancer, is presented. The proposed algorithm consists of four main steps: pre-processing and segmentation, feature extraction, feature selection and classification. In the first step, the region of interest (ROI) is determined and the image quality is improved in a fully automatic operation. Using thresholding and edge detection techniques, the right and left breasts are separated from each other. The suspected areas are then segmented and the image matrix is normalized, owing to the uniqueness of each person's body temperature. At the feature extraction stage, 23 features, including statistical, morphological, frequency domain, histogram and Gray Level Co-occurrence Matrix (GLCM) based features, are extracted from the segmented right and left breasts obtained in step 1. To achieve the best features, feature selection methods such as minimum Redundancy and Maximum Relevance (mRMR), Sequential Forward Selection (SFS), Sequential Backward Selection (SBS), Sequential Floating Forward Selection (SFFS), Sequential Floating Backward Selection (SFBS) and Genetic Algorithm (GA) are used in step 3. Finally, for classification and labeling, different classifiers such as AdaBoost, Support Vector Machine (SVM), k-Nearest Neighbors (kNN), Naïve Bayes (NB) and Probabilistic Neural Network (PNN) are assessed to find the most suitable one. These steps are applied to thermogram images taken at different angles. The results obtained on a native database showed the best and most significant performance of the proposed algorithm in comparison with similar studies. According to the experimental results, GA combined with AdaBoost, with mean accuracies of 85.33% and 87.42% on the left and right breast images at 0 degrees; GA combined with AdaBoost, with a mean accuracy of 85.17% on the left breast images at 45 degrees; mRMR combined with AdaBoost, with a mean accuracy of 85.15% on the right breast images at 45 degrees; and GA combined with AdaBoost, with mean accuracies of 84.67% and 86.21% on the left and right breast images at 90 degrees, are the best combinations of feature selection and classifier for evaluation of breast images. PMID:27014608
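Sequential Forward Selection, one of the selection methods listed, can be sketched with a simple nearest-centroid score (the paper pairs its selectors with stronger classifiers; the data and scorer here are illustrative):

```python
import numpy as np

def sfs(X, y, k):
    """Sequential Forward Selection: greedily add the feature that most
    improves a nearest-centroid training accuracy -- a minimal stand-in for
    the SFS wrapper used in feature selection pipelines."""
    def score(cols):
        Xi = X[:, cols]
        c0, c1 = Xi[y == 0].mean(0), Xi[y == 1].mean(0)
        pred = (np.linalg.norm(Xi - c1, axis=1) <
                np.linalg.norm(Xi - c0, axis=1)).astype(int)
        return (pred == y).mean()
    chosen = []
    while len(chosen) < k:
        rest = [j for j in range(X.shape[1]) if j not in chosen]
        best = max(rest, key=lambda j: score(chosen + [j]))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(0, 1, (100, 4))
X[:, 2] += 3 * y          # only feature 2 is informative
print(sfs(X, y, 1))       # SFS should pick the informative feature
```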

  3. Alzheimer's Disease Early Diagnosis Using Manifold-Based Semi-Supervised Learning.

    PubMed

    Khajehnejad, Moein; Saatlou, Forough Habibollahi; Mohammadzade, Hoda

    2017-08-20

    Alzheimer's disease (AD) is currently ranked as the sixth leading cause of death in the United States, and recent estimates indicate that the disorder may rank third, just behind heart disease and cancer, as a cause of death for older people. Clearly, predicting this disease in its early stages and preventing it from progressing is of great importance. The diagnosis of AD requires a variety of medical tests, which leads to huge amounts of multivariate heterogeneous data. It can be difficult and exhausting to manually compare, visualize, and analyze this data due to the heterogeneous nature of medical tests; therefore, an efficient approach for accurate prediction of the condition of the brain through the classification of magnetic resonance imaging (MRI) images is greatly beneficial and yet very challenging. In this paper, a novel approach is proposed for the diagnosis of very early stages of AD through an efficient classification of brain MRI images, which uses label propagation in a manifold-based semi-supervised learning framework. We first apply voxel morphometry analysis to extract some of the most critical AD-related features of brain images from the original MRI volumes and also gray matter (GM) segmentation volumes. The features must capture the most discriminative properties that vary between a healthy and an Alzheimer-affected brain. Next, we perform a principal component analysis (PCA)-based dimension reduction on the extracted features for faster yet sufficiently accurate analysis. To make the best use of the captured features, we present a hybrid manifold learning framework which embeds the feature vectors in a subspace. Next, using a small set of labeled training data, we apply a label propagation method in the created manifold space to predict the labels of the remaining images and classify them into the two groups of mild cognitive impairment and normal condition (MCI/NC). The accuracy of the classification using the proposed method is 93.86% for the Open Access Series of Imaging Studies (OASIS) database of MRI brain images, providing a 3% lower error rate compared to the best existing methods.
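The label propagation step can be illustrated on a toy similarity graph; the clamping scheme below is a standard textbook variant, not necessarily the paper's exact formulation:

```python
import numpy as np

def label_propagation(W, labels, n_iter=100):
    """Iterative label propagation on a similarity graph W: labelled rows are
    clamped, the rest repeatedly absorb their neighbours' label distribution.
    labels: array with 0/1 for labelled points and -1 for unlabelled."""
    P = W / W.sum(axis=1, keepdims=True)      # row-normalized transition matrix
    n = len(labels)
    F = np.zeros((n, 2))
    known = labels >= 0
    F[known, labels[known]] = 1.0
    for _ in range(n_iter):
        F = P @ F
        F[known] = 0.0
        F[known, labels[known]] = 1.0         # re-clamp the labelled rows
    return F.argmax(axis=1)

# 1-D points in two clusters; only one point per cluster is labelled
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
W = np.exp(-(x[:, None] - x[None, :]) ** 2)   # Gaussian similarity
labels = np.array([0, -1, -1, 1, -1, -1])
print(label_propagation(W, labels))           # labels spread within each cluster
```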

  4. Image-based automatic recognition of larvae

    NASA Astrophysics Data System (ADS)

    Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai

    2010-08-01

    To date, imagoes have been the main objects of research in quarantine pest recognition. However, pests in their larval stage are latent, and larvae spread abroad easily with the circulation of agricultural and forest products. This paper presents larvae as new research objects, recognized by means of machine vision, image processing and pattern recognition. More visual information is retained and the recognition rate is improved when color image segmentation is applied to images of larvae. Owing to its affine invariance, perspective invariance and brightness invariance, the scale-invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and automatic identification of larvae images is successfully achieved with satisfactory results.

  5. Dosage of 2,6-Bis(1,1-Dimethylethyl)-4-Methylphenol (BHT) in the Plant Extract Mesembryanthemum crystallinum

    PubMed Central

    Ibtissem, Bouftira; Imen, Mgaidi; Souad, Sfar

    2010-01-01

    A naturally occurring BHT was identified in the leaves of the halophyte plant Mesembryanthemum crystallinum. In this study, this phenol was extracted by two methods at different plant growth stages. One of the methods was better for BHT extraction, and the concentration of this phenol is plant-growth-stage dependent: in this study, the flowering stage has the highest BHT concentration. The antioxidant activity of the plant extract was not related to BHT concentration; the highest antioxidant activity is obtained at the seedling stage. PMID:21318161

  6. Gender and the performance of music

    PubMed Central

    Sergeant, Desmond C.; Himonides, Evangelos

    2014-01-01

    This study evaluates propositions that have appeared in the literature that music phenomena are gendered. Were they present in the musical “message,” gendered qualities might be imparted at any of three stages of the music–communication interchange: the process of composition, its realization into sound by the performer, or imposed by the listener in the process of perception. The research was designed to obtain empirical evidence to enable evaluation of claims of the presence of gendering at these three stages. Three research hypotheses were identified and relevant literature of music behaviors and perception reviewed. New instruments of measurement were constructed to test the three hypotheses: (i) two listening sequences each containing 35 extracts from published recordings of compositions of the classical music repertoire, (ii) four “music characteristics” scales, with polarities defined by verbal descriptors designed to assess the dynamic and emotional valence of the musical extracts featured in the listening sequences. 69 musically-trained listeners listened to the two sequences and were asked to identify the sex of the performing artist of each musical extract; a second group of 23 listeners evaluated the extracts applying the four music characteristics scales. Results did not support claims that music structures are inherently gendered, nor proposals that performers impart their own-sex-specific qualities to the music. It is concluded that gendered properties are imposed subjectively by the listener, and these are primarily related to the tempo of the music. PMID:24795663

  7. Using X-Ray In-Line Phase-Contrast Imaging for the Investigation of Nude Mouse Hepatic Tumors

    PubMed Central

    Zhang, Lu; Luo, Shuqian

    2012-01-01

    The purpose of this paper is to report the noninvasive imaging of hepatic tumors without contrast agents. Both normal tissues and tumor tissues can be detected, and tumor tissues in different stages can be classified quantitatively. We implanted BEL-7402 human hepatocellular carcinoma cells into the livers of nude mice and then imaged the livers using X-ray in-line phase-contrast imaging (ILPCI). Texture features of the projection images, based on the gray-level co-occurrence matrix (GLCM) and dual-tree complex wavelet transform (DTCWT), were extracted to discriminate normal tissues and tumor tissues. Different stages of hepatic tumors were classified using support vector machines (SVM). Images of livers from nude mice sacrificed 6 days after inoculation with cancer cells show diffuse distribution of the tumor tissue, but images of livers from nude mice sacrificed 9, 12, or 15 days after inoculation show necrotic lumps in the tumor tissue. The results of principal component analysis (PCA) of the GLCM-based texture features of normal regions were positive, but those of tumor regions were negative. The results of PCA of the DTCWT-based texture features of normal regions were greater than those of tumor regions. The values of the texture features in low-frequency coefficient images increased monotonically with the growth of the tumors. Different stages of liver tumors can be classified using SVM, with an accuracy of 83.33%. Noninvasive and micron-scale imaging can be achieved by X-ray ILPCI. We can observe hepatic tumors and small vessels in the phase-contrast images. This new imaging approach for hepatic cancer is effective and has potential use in the early detection and classification of hepatic tumors. PMID:22761929
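A GLCM and two of its common statistics can be computed directly; this single-offset version is only a sketch of the texture stage (the study's feature set is larger):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, plus the
    contrast and energy statistics commonly derived from it."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            M[img[i, j], img[i + dy, j + dx]] += 1   # count co-occurring pairs
    M /= M.sum()                                      # normalize to probabilities
    idx = np.arange(levels)
    contrast = sum(M[a, b] * (a - b) ** 2 for a in idx for b in idx)
    energy = (M ** 2).sum()
    return M, contrast, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
M, contrast, energy = glcm(img, levels=4)
print(round(contrast, 3), round(energy, 3))  # low contrast: mostly equal pairs
```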

  8. 18F-FDG PET/CT metabolic tumor parameters and radiomics features in aggressive non-Hodgkin's lymphoma as predictors of treatment outcome and survival.

    PubMed

    Parvez, Aatif; Tau, Noam; Hussey, Douglas; Maganti, Manjula; Metser, Ur

    2018-05-12

    To determine whether metabolic tumor parameters and radiomic features extracted from 18F-FDG PET/CT (PET) can predict response to therapy and outcome in patients with aggressive B-cell lymphoma. This institutional ethics board-approved retrospective study included 82 patients undergoing PET for aggressive B-cell lymphoma staging. Whole-body metabolic tumor volume (MTV) using various thresholds and tumor radiomic features were assessed on representative tumor sites. The extracted features were correlated with treatment response, disease-free survival (DFS) and overall survival (OS). At the end of therapy, 66 patients (80.5%) had shown complete response to therapy. The parameters correlating with response to therapy were bulky disease > 6 cm at baseline (p = 0.026), absence of a residual mass > 1.5 cm on the end-of-therapy CT (p = 0.028) and whole-body MTV, with best performance using SUV thresholds of 3 and 6 (p = 0.015 and 0.009, respectively). None of the tumor texture features was predictive of first-line therapy response, while a few of them, including GLNU, correlated with disease-free survival (p = 0.013), and kurtosis correlated with overall survival (p = 0.035). Whole-body MTV correlates with response to therapy in patients with aggressive B-cell lymphoma. Tumor texture features could not predict therapy response, although several features correlated with the presence of a residual mass on the end-of-therapy CT and others correlated with disease-free and overall survival. These parameters should be prospectively validated in a larger cohort to confirm clinical prognostication.
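The threshold-based whole-body MTV used in the study reduces to counting supra-threshold voxels; the voxel size and synthetic SUV map below are assumptions:

```python
import numpy as np

def metabolic_tumor_volume(suv, voxel_volume_ml, threshold):
    """Whole-body MTV as the total volume of voxels whose SUV exceeds a fixed
    threshold (the study reports SUV thresholds of 3 and 6 performing best)."""
    return np.count_nonzero(suv > threshold) * voxel_volume_ml

suv = np.zeros((10, 10, 10))
suv[2:4, 2:4, 2:4] = 8.0           # a hot lesion: 8 voxels
suv[6:8, 6:8, 6:8] = 4.0           # a milder lesion: 8 voxels
vox_ml = 0.064                      # e.g. 4 mm isotropic voxels = 64 mm^3
mtv3 = metabolic_tumor_volume(suv, vox_ml, threshold=3.0)
mtv6 = metabolic_tumor_volume(suv, vox_ml, threshold=6.0)
print(round(mtv3, 3), round(mtv6, 3))  # the higher threshold drops the milder lesion
```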

  9. Informative frame detection from wireless capsule video endoscopic images

    NASA Astrophysics Data System (ADS)

    Bashar, Md. Khayrul; Mori, Kensaku; Suenaga, Yasuhito; Kitasaka, Takayuki; Mekada, Yoshito

    2008-03-01

    Wireless capsule endoscopy (WCE) is a new clinical technology permitting the visualization of the small bowel, the most difficult segment of the digestive tract. The major drawback of this technology is the large amount of time required for video diagnosis. In this study, we propose a method for informative frame detection by isolating useless frames that are substantially covered by turbid fluids or contaminated with other materials, e.g., faecal matter and semi-processed or unabsorbed food. Such materials and fluids present a wide range of colors, from brown to yellow, and/or bubble-like texture patterns. The detection scheme, therefore, consists of two stages: highly contaminated non-bubbled (HCN) frame detection and significantly bubbled (SB) frame detection. Local color moments in the Ohta color space are used to characterize HCN frames, which are isolated by a Support Vector Machine (SVM) classifier in Stage-1. The remaining frames go to Stage-2, where Laguerre-Gauss circular harmonic functions (LG-CHFs) extract the characteristics of the bubble structures in a multi-resolution framework. An automatic segmentation method is designed to extract the bubbled regions based on local absolute energies of the CHF responses, derived from the grayscale version of the original color image. Final detection of the informative frames is obtained by applying a threshold operation to the extracted regions. An experiment with 20,558 frames from three videos shows excellent average detection accuracy (96.75%) for the proposed method, compared with Gabor-based (74.29%) and discrete-wavelet-based (62.21%) features.
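
    The Stage-1 descriptor can be sketched as local color moments in the Ohta space, whose channels are simple linear combinations of RGB: I1 = (R+G+B)/3, I2 = (R-B)/2, I3 = (2G-R-B)/4. A minimal version computing three moments per channel for one image patch (the moment choice and patch handling are illustrative assumptions):

```python
import numpy as np

def ohta_moments(rgb):
    """Mean, standard deviation and third central moment per Ohta channel."""
    r, g, b = (rgb[..., k].astype(float) for k in range(3))
    i1 = (r + g + b) / 3.0          # intensity
    i2 = (r - b) / 2.0              # red-blue opponent channel
    i3 = (2.0 * g - r - b) / 4.0    # green-magenta opponent channel
    feats = []
    for c in (i1, i2, i3):
        mu = c.mean()
        feats.extend([mu, c.std(), ((c - mu) ** 3).mean()])
    return np.array(feats)
```

    Concatenating these nine numbers over a grid of patches gives the kind of local color feature vector an SVM can separate into contaminated versus clean frames.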

  10. Optimization of a Multi-Stage ATR System for Small Target Identification

    NASA Technical Reports Server (NTRS)

    Lin, Tsung-Han; Lu, Thomas; Braun, Henry; Edens, Western; Zhang, Yuhan; Chao, Tien- Hsin; Assad, Christopher; Huntsberger, Terrance

    2010-01-01

    An Automated Target Recognition (ATR) system was developed to locate and identify small objects in images and videos. The data are preprocessed and sent to a grayscale optical correlator (GOC) filter to identify possible regions-of-interest (ROIs). Next, features are extracted from the ROIs based on Principal Component Analysis (PCA) and sent to a neural network (NN) for classification. The features are analyzed by the NN classifier, which indicates whether each ROI contains the desired target. The ATR system was found useful in identifying small boats in the open sea. However, due to "noisy background," such as weather conditions, background buildings, or water wakes, some false targets are misclassified. Feedforward backpropagation and radial basis neural networks are optimized for generalization of representative features to reduce the false-alarm rate. The neural networks are compared for their performance in classification accuracy, classification time, and training time.
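
    The PCA feature-extraction step can be sketched as projecting flattened ROI pixels onto the top principal components, obtained here via SVD of the mean-centered data (the component count and ROI size are hypothetical):

```python
import numpy as np

def pca_features(rois, n_components=2):
    """Project flattened ROIs onto the top principal components (via SVD)."""
    X = rois.reshape(len(rois), -1).astype(float)  # one row per ROI
    Xc = X - X.mean(axis=0)                        # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # low-dimensional scores
```

    The resulting low-dimensional scores, rather than raw pixels, are what a classifier such as the NN described above would consume.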

  11. A deep learning framework for financial time series using stacked autoencoders and long short-term memory.

    PubMed

    Bao, Wei; Yue, Jun; Rao, Yulei

    2017-01-01

    The application of deep learning approaches to finance has received a great deal of attention from both investors and researchers. This study presents a novel deep learning framework in which wavelet transforms (WT), stacked autoencoders (SAEs) and long short-term memory (LSTM) are combined for stock price forecasting. SAEs for hierarchically extracting deep features are introduced into stock price forecasting for the first time. The deep learning framework comprises three stages. First, the stock price time series is decomposed by WT to eliminate noise. Second, SAEs are applied to generate deep high-level features for predicting the stock price. Third, the high-level denoising features are fed into LSTM to forecast the next day's closing price. Six market indices and their corresponding index futures are chosen to examine the performance of the proposed model. Results show that the proposed model outperforms other similar models in both predictive accuracy and profitability.
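
    The first (denoising) stage can be illustrated with a single-level Haar transform and soft thresholding of the detail coefficients; this is a minimal stand-in for the paper's wavelet stage, not its actual wavelet or level configuration:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar DWT, soft-threshold the detail band, reconstruct.
    Assumes an even-length input series."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)         # inverse transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

    With threshold zero the reconstruction is exact; a positive threshold shrinks the high-frequency detail, which is the mechanism the framework relies on to strip noise before feature learning.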

  12. A new technique for solving puzzles.

    PubMed

    Makridis, Michael; Papamarkos, Nikos

    2010-06-01

    This paper proposes a new technique for solving jigsaw puzzles. The novelty of the proposed technique is that it provides an automatic jigsaw puzzle solution without any initial restriction about the shape of pieces, the number of neighbor pieces, etc. The proposed technique uses both curve- and color-matching similarity features. A recurrent procedure is applied, which compares and merges puzzle pieces in pairs, until the original puzzle image is reformed. Geometrical and color features are extracted on the characteristic points (CPs) of the puzzle pieces. CPs, which can be considered as high curvature points, are detected by a rotationally invariant corner detection algorithm. The features which are associated with color are provided by applying a color reduction technique using the Kohonen self-organized feature map. Finally, a postprocessing stage checks and corrects the relative position between puzzle pieces to improve the quality of the resulting image. Experimental results prove the efficiency of the proposed technique, which can be further extended to deal with even more complex jigsaw puzzle problems.

  13. Texture classification of lung computed tomography images

    NASA Astrophysics Data System (ADS)

    Pheng, Hang See; Shamsuddin, Siti M.

    2013-03-01

    Current development of algorithms in computer-aided diagnosis (CAD) schemes is growing rapidly to assist radiologists in medical image interpretation. Texture analysis of computed tomography (CT) scans is one of the important preliminary stages in computerized detection and classification systems for lung cancer. Among the different types of image feature analysis, Haralick texture with a variety of statistical measures has been used widely in image texture description. The extraction of texture feature values is essential for use by a CAD system, especially in the classification of normal and abnormal tissue on cross-sectional CT images. This paper aims to compare experimental results using texture extraction and different machine learning methods in the classification of normal and abnormal tissues in lung CT images. The machine learning methods involved in this assessment are the Artificial Immune Recognition System (AIRS), Naive Bayes, Decision Tree (J48) and Backpropagation Neural Network. AIRS is found to provide high accuracy (99.2%) and sensitivity (98.0%) in the assessment. For experiment and testing purposes, publicly available datasets from the Reference Image Database to Evaluate Therapy Response (RIDER) are used as study cases.

  14. ECG Identification System Using Neural Network with Global and Local Features

    ERIC Educational Resources Information Center

    Tseng, Kuo-Kun; Lee, Dachao; Chen, Charles

    2016-01-01

    This paper proposes a human identification system based on extracted electrocardiogram (ECG) signals. Two hierarchical classification structures based on a global shape feature and a local statistical feature are used to extract ECG signals. The global shape feature represents the outline information of ECG signals and the local statistical feature extracts the…

  15. Landmark-based deep multi-instance learning for brain disease diagnosis.

    PubMed

    Liu, Mingxia; Zhang, Jun; Adeli, Ehsan; Shen, Dinggang

    2018-01-01

    In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit the performance of the diagnosis, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this paper, we propose a landmark-based deep multi-instance learning (LDMIL) framework for brain disease diagnosis. Specifically, we first adopt a data-driven learning approach to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. Then, our LDMIL framework learns an end-to-end MR image classifier for capturing both the local structural information conveyed by image patches located by landmarks and the global structural information derived from all detected landmarks. We have evaluated our proposed framework on 1526 subjects from three public datasets (i.e., ADNI-1, ADNI-2, and MIRIAD), and the experimental results show that our framework can achieve superior performance over state-of-the-art approaches. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. MicroRNA-21 in laryngeal squamous cell carcinoma: Diagnostic and prognostic features.

    PubMed

    Erkul, Evren; Yilmaz, Ismail; Gungor, Atila; Kurt, Onuralp; Babayigit, Mustafa A

    2017-02-01

    We aimed to determine the microRNA-21 expression in laryngeal squamous cell carcinoma and assess the association between the disease and clinical characteristics of patients. Retrospective case-control study. A retrospective study was conducted from January 2005 to May 2011, in a tertiary hospital following tumor resection in 72 patients with laryngeal squamous cell carcinoma. We used formalin-fixed paraffin-embedded tissue samples of laryngeal squamous cell carcinomas (study group) and adjacent nontumor tissues (control group) for microRNA-21 expressions, and we successfully extracted microRNAs detectable by real-time polymerase chain reaction. All patients were evaluated separately, and the study and control groups were compared. The study group was assessed in terms of localization, smoking, alcohol consumption, lymph node staging, tumor stage, overall survival, disease-free survival, perineural, and vascular invasion. All patients were male, and the average age of patients was 64.2 ± 10.3 years. MicroRNA-21 was upregulated in laryngeal squamous cell carcinomas compared to adjacent nontumor tissues (P = .005). However, the microRNA-21 did not differ significantly according to any clinicopathological features (P > .05). MicroRNA-21 has been found to be expressed at lower levels in early stage (stages 1 and 2) compared with advanced stage (stages 3 and 4), but this was not statistically significant (P = .455). We conclude that the microRNA-21 level may play an important role in diagnosis and serve as a potential biomarker; such measurement thus has clinical applications. However, any possible prognostic associations with microRNA-21 levels should be re-evaluated in future studies on laryngeal squamous cell carcinoma samples amenable to retrospective analysis. NA Laryngoscope, 2016 127:E62-E66, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  17. Characterizing the thermal distributions of warm molecular hydrogen in protoplanetary disks

    NASA Astrophysics Data System (ADS)

    Hoadley, Keri; France, Kevin

    2016-01-01

    Probing the surviving molecular gas within the inner regions of protoplanetary disks (PPDs) around T Tauri stars (1 - 10 Myr) provides insight into the conditions in which planet formation and migration occurs while the gas disk is still present. Recent studies done by Hoadley et al. 2015 and Banzatti & Pontoppidan 2015 suggest that gas in the inner disks of PPDs appears to "respond" to the loss of small dust grains with evolving PPD stage, and IR-CO emission may either be thermally or photo-excited by stellar UV radiation, depending on PPD evolutionary stage. Because far-UV H2 emission lines are dominantly photo-excited by stellar HI-Lyman alpha photons, we observe H2 absorption features against the stellar Lyman alpha wings in a large sample of PPDs at various evolutionary stages. We aim to characterize whether the inner disk H2 environment is in thermal equilibrium at various stages of PPD evolution. We use a sophisticated first-principles approach to fitting multiple absorption features along the red- and blue-wings of the observed stellar Lyman alpha profiles to extract column density estimates of H2 along the line of sight to the target. We find that the high kinetic energy H2 observed in absorption against the LyA wing may be described as part of a thermal distribution with high kinetic temperature - a potential indication of an inner disk molecular hazy "envelope" around the cooler bulk disk. Ongoing research may help determine the state of the gas and whether it evolves with disk evolutionary stage.

  18. Distant failure prediction for early stage NSCLC by analyzing PET with sparse representation

    NASA Astrophysics Data System (ADS)

    Hao, Hongxia; Zhou, Zhiguo; Wang, Jing

    2017-03-01

    Positron emission tomography (PET) imaging has been widely explored for treatment outcome prediction. Radiomics-driven methods provide new insight into quantitatively exploring underlying information from PET images. However, it is still a challenging problem to automatically extract clinically meaningful features for prognosis. In this work, we develop a PET-guided distant failure predictive model for early stage non-small cell lung cancer (NSCLC) patients after stereotactic ablative radiotherapy (SABR) by using sparse representation. The proposed method does not need precalculated features and can learn intrinsically distinctive features contributing to the classification of patients with distant failure. The proposed framework includes two main parts: 1) intra-tumor heterogeneity description; and 2) dictionary-pair-learning-based sparse representation. Tumor heterogeneity is initially captured through an anisotropic kernel and represented as a set of concatenated vectors, which forms the sample gallery. Then, given a test tumor image, its identity (i.e., distant failure or not) is classified by applying the dictionary-pair-learning-based sparse representation. We evaluate the proposed approach on 48 NSCLC patients treated with SABR at our institute. Experimental results show that the proposed approach achieves an area under the receiver operating characteristic curve (AUC) of 0.70 with a sensitivity of 69.87% and a specificity of 69.51% using five-fold cross-validation.
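
    The classification-by-reconstruction idea can be sketched with a class-wise residual rule: reconstruct the test vector from each class's gallery and pick the class with the smallest residual. Here ordinary least squares stands in for the sparse-coding step, so this is only a simplified illustration of the dictionary-based scheme, not the paper's dictionary pair learning:

```python
import numpy as np

def residual_classify(test_vec, galleries):
    """Assign the label whose gallery reconstructs the test vector best.
    `galleries` maps label -> matrix whose columns are training samples."""
    residuals = {}
    for label, D in galleries.items():
        coef, *_ = np.linalg.lstsq(D, test_vec, rcond=None)  # fit coefficients
        residuals[label] = np.linalg.norm(test_vec - D @ coef)
    return min(residuals, key=residuals.get)
```

    Replacing the least-squares fit with an L1-regularized solver recovers the sparse-representation flavor of the method, at the cost of an extra optimization step.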

  19. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    NASA Astrophysics Data System (ADS)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-04-01

    Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. To solve this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information from both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated into the dimension-reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case on a gearbox, and the results confirm the improved accuracy of the running state identification.
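
    The statistical part of the mixed-domain feature set can be sketched as a handful of time-domain statistics plus a Shannon entropy of the amplitude distribution (the exact feature list and histogram bin count here are illustrative assumptions, not the paper's full set):

```python
import numpy as np

def statistical_features(x):
    """Time-domain statistics often used to characterize vibration signals."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    feats = {
        "mean": x.mean(),
        "rms": rms,
        "kurtosis": np.mean((x - x.mean()) ** 4) / (x.std() ** 4),
        "crest_factor": np.max(np.abs(x)) / rms,
    }
    # Shannon entropy of the normalized amplitude histogram.
    counts, _ = np.histogram(x, bins=16)
    p = counts / counts.sum()
    feats["shannon_entropy"] = float(-np.sum(p[p > 0] * np.log(p[p > 0])))
    return feats
```

    Computing such statistics per WPD sub-band, and appending AR coefficients and sub-band energies, yields the kind of high-dimensional mixed-domain vector that DSS-LTSA then fuses and compresses.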

  20. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    There are many emotion features. If all these features are employed to recognize emotions, redundant features may exist. Furthermore, the recognition result is unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected by using the contribution analysis algorithm of the NN from the 95 extracted features. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.
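
    The contribution analysis is not specified in detail here; a common weight-based variant (a Garson-style analysis, assumed for illustration only) scores each input feature from the magnitudes of the trained weights of a one-hidden-layer network:

```python
import numpy as np

def garson_contribution(w_ih, w_ho):
    """Garson-style relative contribution of each input feature.
    w_ih: (n_inputs, n_hidden) input-to-hidden weights.
    w_ho: (n_hidden, n_outputs) hidden-to-output weights."""
    a = np.abs(w_ih)
    share = a / a.sum(axis=0)                     # each input's share per hidden unit
    weighted = share * np.abs(w_ho).sum(axis=1)   # weight by hidden-unit importance
    contrib = weighted.sum(axis=1)
    return contrib / contrib.sum()                # normalize to fractions
```

    Ranking the 95 features by such a score and keeping the top fraction is one plausible way to arrive at a compact subset like the 24 features reported above.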

  1. Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch

    PubMed Central

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Halim, Zaini Abdul; Ibrahim, Haidi; Ali, Syed Salim Syed

    2012-01-01

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Then the color features were extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training ANN with full features and training ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that compared with using full features in ANN, using the ANN trained with reduced features can improve the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category. PMID:23202043

  2. Intelligent color vision system for ripeness classification of oil palm fresh fruit bunch.

    PubMed

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Abdul Halim, Zaini; Ibrahim, Haidi; Syed Ali, Syed Salim

    2012-10-22

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested at the optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Then the color features were extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training ANN with full features and training ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that compared with using full features in ANN, using the ANN trained with reduced features can improve the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category.

  3. Automated facial attendance logger for students

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Kshitish, S.; Kishore, M. R.

    2017-11-01

    Over the past two decades, face recognition has become an essential tool in various spheres of activity. The complete face recognition process is composed of three stages: face detection, feature extraction and recognition. In this paper, we make an effort to put forth a new application of face recognition and detection in education. The proposed system scans the classroom, detects the faces of the students in class, matches the scanned faces against the templates available in the database and updates the attendance of the respective students.

  4. Image segmentation-based robust feature extraction for color image watermarking

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC, and extracts the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients of the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method obtains a trade-off between high robustness and good image quality.
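
    The quantization-based embedding can be illustrated with plain dither modulation on a low-frequency coefficient: each bit selects one of two interleaved quantizer lattices, and extraction checks which lattice the received coefficient is closer to. This omits the distortion-compensation term of DC-DM, so it is a simplified sketch with an assumed step size:

```python
import numpy as np

def qim_embed(coeff, bit, step=8.0):
    """Embed one bit by quantizing a (e.g., low-frequency DCT) coefficient
    onto the lattice selected by the bit."""
    dither = 0.0 if bit == 0 else step / 2.0
    return np.round((coeff - dither) / step) * step + dither

def qim_extract(coeff, step=8.0):
    """Recover the bit: which of the two interleaved lattices is closer?"""
    d0 = abs(coeff - np.round(coeff / step) * step)
    d1 = abs(coeff - (np.round((coeff - step / 2) / step) * step + step / 2))
    return 0 if d0 <= d1 else 1
```

    A larger step increases robustness to attacks at the price of visible distortion, which is exactly the robustness/quality trade-off the abstract mentions.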

  5. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served for centuries as an important primary approach to diagnosing cardiovascular diseases (CVDs). Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features that exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck problem, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences on heart valves. Adapting the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five various abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
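
    The Shannon envelope weights medium amplitudes most heavily (the per-sample energy -x^2 log x^2 peaks at |x| = e^{-1/2}), which suppresses both low-level noise and isolated spikes relative to murmur-scale activity. A minimal sketch with an assumed smoothing window:

```python
import numpy as np

def shannon_envelope(x, win=5):
    """Smoothed Shannon energy envelope of a heart-sound segment."""
    x = np.asarray(x, dtype=float)
    x = x / np.max(np.abs(x))                # normalize to [-1, 1]
    e = -x ** 2 * np.log(x ** 2 + 1e-12)     # Shannon energy per sample
    kernel = np.ones(win) / win              # moving-average smoothing
    return np.convolve(e, kernel, mode="same")
```

    Morphological features (e.g., envelope width or area around a murmur) would then be measured on this curve rather than on the raw waveform.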

  6. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  7. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.

  8. Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint feature extraction

    NASA Astrophysics Data System (ADS)

    Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab

    2017-11-01

    Palmprint recognition systems are dependent on feature extraction. A method of feature extraction using higher-discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. They are then evaluated using an extreme learning machine classifier, followed by feature selection based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.
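
    The histogram-of-gradient descriptor for a single cell can be sketched as magnitude-weighted voting of unsigned gradient orientations into fixed bins (the bin count and L2 normalization here are illustrative assumptions):

```python
import numpy as np

def hog_cell(patch, bins=9):
    """Histogram of gradient orientations for one cell, magnitude-weighted."""
    gy, gx = np.gradient(patch.astype(float))      # image gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist = np.zeros(bins)
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    np.add.at(hist, idx, mag)                      # magnitude-weighted vote
    return hist / (np.linalg.norm(hist) + 1e-9)    # L2 normalization
```

    Applying this per cell of each wavelet sub-band and concatenating the histograms gives one of the two descriptor streams that the method fuses.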

  9. Information extraction during simultaneous motion processing.

    PubMed

    Rideaux, Reuben; Edwards, Mark

    2014-02-01

    When confronted with multiple moving objects the visual system can process them in two stages: an initial stage in which a limited number of signals are processed in parallel (i.e. simultaneously) followed by a sequential stage. We previously demonstrated that during the simultaneous stage, observers could discriminate between presentations containing up to 5 vs. 6 spatially localized motion signals (Edwards & Rideaux, 2013). Here we investigate what information is actually extracted during the simultaneous stage and whether the simultaneous limit varies with the detail of information extracted. This was achieved by measuring the ability of observers to extract varied information from low detail, i.e. the number of signals presented, to high detail, i.e. the actual directions present and the direction of a specific element, during the simultaneous stage. The results indicate that the resolution of simultaneous processing varies as a function of the information which is extracted, i.e. as the information extraction becomes more detailed, from the number of moving elements to the direction of a specific element, the capacity to process multiple signals is reduced. Thus, when assigning a capacity to simultaneous motion processing, this must be qualified by designating the degree of information extraction. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  10. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    PubMed

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

    Melanoma is the deadliest type of skin cancer, with the highest mortality rate. However, its eradication in the early stage implies a high survival rate; it therefore demands early diagnosis. The customary diagnosis methods are costly and cumbersome due to the involvement of experienced experts as well as the requirement for a highly equipped environment. Recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing is executed in the context of hair removal by DullRazor, whereas lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique has been implemented and results are fused using the additive law of probability. A serial-based method is applied subsequently that extracts and fuses traits such as color, texture, and HOG (shape). The fused features are then selected by implementing a novel Boltzman Entropy method. Finally, the selected features are classified by a Support Vector Machine. The proposed method is evaluated on the publicly available dataset PH2. Our approach has provided promising results of 97.7% sensitivity, 96.7% specificity, 97.5% accuracy, and 97.5% F-score, which are significantly better than the results of existing methods on the same dataset. The proposed method detects and classifies melanoma significantly better than existing methods. © 2018 Wiley Periodicals, Inc.

  11. Correlative feature analysis on FFDM

    PubMed Central

    Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene

    2008-01-01

    Identifying the corresponding images of a lesion in different views is an essential step in improving the diagnostic ability of both radiologists and computer-aided diagnosis (CAD) systems. Because of the nonrigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this pilot study, we present a computerized framework that differentiates between corresponding images of the same lesion in different views and noncorresponding images, i.e., images of different lesions. A dual-stage segmentation method, which employs an initial radial gradient index (RGI) based segmentation and an active contour model, is applied to extract mass lesions from the surrounding parenchyma. Then various lesion features are automatically extracted from each of the two views of each lesion to quantify the characteristics of density, size, texture and the neighborhood of the lesion, as well as its distance to the nipple. A two-step scheme is employed to estimate the probability that the two lesion images from different mammographic views are of the same physical lesion. In the first step, a correspondence metric for each pairwise feature is estimated by a Bayesian artificial neural network (BANN). Then, these pairwise correspondence metrics are combined using another BANN to yield an overall probability of correspondence. Receiver operating characteristic (ROC) analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing corresponding pairs from noncorresponding pairs. Using a FFDM database with 123 corresponding image pairs and 82 noncorresponding pairs, the distance feature yielded an area under the ROC curve (AUC) of 0.81±0.02 with leave-one-out (by physical lesion) evaluation, and the feature metric subset, which included distance, gradient texture, and ROI-based correlation, yielded an AUC of 0.87±0.02. 
The improvement by using multiple feature metrics was statistically significant compared to single feature performance. PMID:19175108
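The reported AUC values can be reproduced from raw correspondence scores with the standard Mann-Whitney rank formulation (the fraction of correctly ordered positive/negative pairs, ties counted as half). A minimal sketch with made-up scores, not the study's data:

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: fraction of
    (positive, negative) score pairs ranked correctly, ties = 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy correspondence scores: higher = more likely the same lesion.
corresponding = [0.9, 0.8, 0.7, 0.6]
noncorresponding = [0.5, 0.4, 0.8, 0.2]
print(auc_mann_whitney(corresponding, noncorresponding))
```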

  12. Uniform competency-based local feature extraction for remote sensing images

    NASA Astrophysics Data System (ADS)

    Sedaghat, Amin; Mohammadi, Nazila

    2018-01-01

Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.
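The gridded, weighted-ranking idea can be sketched as binning keypoints into a coarse grid and keeping only the top-k per cell by a weighted quality score, which enforces spatial uniformity. The field layout, weights and grid size below are illustrative assumptions, not the paper's exact competency criterion:

```python
def select_uniform(keypoints, img_w, img_h, grid=4, top_k=2,
                   w_resp=0.5, w_sal=0.3, w_scale=0.2):
    """Keep the top_k keypoints per grid cell, ranked by a
    weighted combination of quality measures (illustrative weights)."""
    cells = {}
    for kp in keypoints:  # kp = (x, y, response, saliency, scale)
        cx = min(int(kp[0] * grid / img_w), grid - 1)
        cy = min(int(kp[1] * grid / img_h), grid - 1)
        cells.setdefault((cx, cy), []).append(kp)
    kept = []
    for cell_kps in cells.values():
        score = lambda kp: w_resp * kp[2] + w_sal * kp[3] + w_scale * kp[4]
        kept.extend(sorted(cell_kps, key=score, reverse=True)[:top_k])
    return kept

# Five keypoints crowded into one cell: only the best two survive.
pts = [(10, 10, r / 10.0, 0.5, 1.0) for r in range(5)]
print(len(select_uniform(pts, 100, 100)))
```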

  13. Robot Acting on Moving Bodies (RAMBO): Interaction with tumbling objects

    NASA Technical Reports Server (NTRS)

    Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madhu; Harwood, David

    1989-01-01

Interaction with tumbling objects will become more common as human activities in space expand. Attempting to interact with a large complex object translating and rotating in space, a human operator using only his visual and mental capacities may not be able to estimate the object motion, plan actions or control those actions. A robot system (RAMBO) equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a tumbling object, is being developed. RAMBO is given a complete geometric model of the object. A low level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to relative locations nearby the object sufficient for achieving the tasks. More specifically, low level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Then trajectories are created using dynamic interpolations between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
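The Hough-style pose accumulation step amounts to voting quantized pose hypotheses into bins and keeping the most-voted bin. The helper, bin size and toy hypotheses below are illustrative assumptions, not RAMBO's implementation:

```python
from collections import Counter

def vote_pose(hypotheses, bin_size=5.0):
    """Accumulate quantized pose hypotheses; return the winning
    bin center (de-quantized) and its vote count."""
    acc = Counter()
    for pose in hypotheses:  # pose = (x, y, angle), arbitrary units
        key = tuple(round(v / bin_size) for v in pose)
        acc[key] += 1
    best_bin, votes = acc.most_common(1)[0]
    return tuple(v * bin_size for v in best_bin), votes

# Two consistent hypotheses and one outlier: the consistent pair wins.
hyps = [(10.1, 20.2, 30.0), (9.8, 19.9, 29.5), (50.0, 0.0, 0.0)]
pose, votes = vote_pose(hyps)
print(pose, votes)
```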

  14. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and of a traditional feature extraction method.

  15. Evaluation of aqueous and ethanol extract of bioactive medicinal plant, Cassia didymobotrya (Fresenius) Irwin & Barneby against immature stages of filarial vector, Culex quinquefasciatus Say (Diptera: Culicidae).

    PubMed

    Nagappan, Raja

    2012-09-01

To evaluate aqueous and ethanol extracts of Cassia didymobotrya leaves against immature stages of Culex quinquefasciatus, the mortality of immature mosquitoes was tested over wide and narrow concentration ranges of the plant extract following the WHO standard protocol. The wide-range concentrations tested in the present study were 10 000, 1 000, 100, 10 and 1 mg/L, and the narrow-range concentrations were 50, 100, 150, 200 and 250 mg/L. Second-instar larvae exposed to ethanol extract at 100 mg/L and above showed 100% mortality; for the remaining stages (3rd and 4th instars and pupae), 100% mortality was observed at 1 000 mg/L and above after a 24 h exposure period. With the aqueous extract, 100% mortality of all stages was recorded at 1 000 mg/L and above. In the narrow range, 100% mortality of 2nd instar larvae was observed at 150 mg/L and above of ethanol extract, and at 250 mg/L for the remaining stages; with the aqueous extract, 100% mortality of all tested immature stages was observed at 250 mg/L after a 24 h exposure period. The results clearly indicate that mortality depended on the dose of the plant extract and on the stage of the mosquitoes. The study confirms that Cassia didymobotrya contains an active principle responsible for controlling Culex quinquefasciatus. Isolation of the bioactive molecules and development of a simple formulation technique are important for large-scale implementation.

  16. Computer-aided detection of early cancer in the esophagus using HD endoscopy images

    NASA Astrophysics Data System (ADS)

    van der Sommen, Fons; Zinger, Svitlana; Schoon, Erik J.; de With, Peter H. N.

    2013-02-01

    Esophageal cancer is the fastest rising type of cancer in the Western world. The recent development of High-Definition (HD) endoscopy has enabled the specialist physician to identify cancer at an early stage. Nevertheless, it still requires considerable effort and training to be able to recognize these irregularities associated with early cancer. As a first step towards a Computer-Aided Detection (CAD) system that supports the physician in finding these early stages of cancer, we propose an algorithm that is able to identify irregularities in the esophagus automatically, based on HD endoscopic images. The concept employs tile-based processing, so our system is not only able to identify that an endoscopic image contains early cancer, but it can also locate it. The identification is based on the following steps: (1) preprocessing, (2) feature extraction with dimensionality reduction, (3) classification. We evaluate the detection performance in RGB, HSI and YCbCr color space using the Color Histogram (CH) and Gabor features and we compare with other well-known features to describe texture. For classification, we employ a Support Vector Machine (SVM) and evaluate its performance using different parameters and kernel functions. In experiments, our system achieves a classification accuracy of 95.9% on 50×50 pixel tiles of tumorous and normal tissue and reaches an Area Under the Curve (AUC) of 0.990. In 22 clinical examples our algorithm was able to identify all (pre-)cancerous regions and annotate those regions reasonably well. The experimental and clinical validation are considered promising for a CAD system that supports the physician in finding early stage cancer.
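The tile-based processing can be sketched as enumerating non-overlapping 50x50 tiles of the image, each of which is then classified independently so that a positive tile also localizes the finding. A minimal illustrative helper, not the authors' code:

```python
def tile_grid(width, height, tile=50):
    """Top-left corners of non-overlapping tile x tile patches
    covering the image (any ragged border is dropped)."""
    return [(x, y)
            for y in range(0, height - tile + 1, tile)
            for x in range(0, width - tile + 1, tile)]

tiles = tile_grid(200, 100)   # toy image size: 4 x 2 = 8 tiles
print(len(tiles))
```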

  17. Rapid learning in practice: A lung cancer survival decision support system in routine patient care data

    PubMed Central

    Dekker, Andre; Vinod, Shalini; Holloway, Lois; Oberije, Cary; George, Armia; Goozee, Gary; Delaney, Geoff P.; Lambin, Philippe; Thwaites, David

    2016-01-01

    Background and purpose A rapid learning approach has been proposed to extract and apply knowledge from routine care data rather than solely relying on clinical trial evidence. To validate this in practice we deployed a previously developed decision support system (DSS) in a typical, busy clinic for non-small cell lung cancer (NSCLC) patients. Material and methods Gender, age, performance status, lung function, lymph node status, tumor volume and survival were extracted without review from clinical data sources for lung cancer patients. With these data the DSS was tested to predict overall survival. Results 3919 lung cancer patients were identified with 159 eligible for inclusion, due to ineligible histology or stage, non-radical dose, missing tumor volume or survival. The DSS successfully identified a good prognosis group and a medium/poor prognosis group (2 year OS 69% vs. 27/30%, p < 0.001). Stage was less discriminatory (2 year OS 47% for stage I–II vs. 36% for stage IIIA–IIIB, p = 0.12) with most good prognosis patients having higher stage disease. The DSS predicted a large absolute overall survival benefit (~40%) for a radical dose compared to a non-radical dose in patients with a good prognosis, while no survival benefit of radical radiotherapy was predicted for patients with a poor prognosis. Conclusions A rapid learning environment is possible with the quality of clinical data sufficient to validate a DSS. It uses patient and tumor features to identify prognostic groups in whom therapy can be individualized based on predicted outcomes. Especially the survival benefit of a radical versus non-radical dose predicted by the DSS for various prognostic groups has clinical relevance, but needs to be prospectively validated. PMID:25241994

  18. Maximizing Lipid Yield in Neochloris oleoabundans Algae Extraction by Stressing and Using Multiple Extraction Stages with N-Ethylbutylamine as Switchable Solvent

    PubMed Central

    2017-01-01

    The extraction yield of lipids from nonbroken Neochloris oleoabundans was maximized by using multiple extraction stages and using stressed algae. Experimental parameters that affect the extraction were investigated. The study showed that with wet algae (at least) 18 h extraction time was required for maximum yield at room temperature and a solvent/feed ratio of 1:1 (w/w). For fresh water (FW), nonstressed, nonbroken Neochloris oleoabundans, 13.1 wt % of lipid extraction yield (based on dry algae mass) was achieved, which could be improved to 61.3 wt % for FW stressed algae after four extractions, illustrating that a combination of stressing the algae and applying the solvent N-ethylbutylamine in multiple stages of extraction results in almost 5 times higher yield and is very promising for further development of energy-efficient lipid extraction technology targeting nonbroken wet microalgae. PMID:28781427
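The benefit of repeated extraction stages follows a simple compounding model: if each stage recovered a fixed fraction f of the lipid still in the biomass (an idealizing assumption for illustration, not a fit to the paper's data), the cumulative yield after n stages would be 1 - (1 - f)^n:

```python
def cumulative_yield(f, n):
    """Cumulative extraction yield after n stages, assuming each
    stage recovers a constant fraction f of the remaining lipid."""
    return 1.0 - (1.0 - f) ** n

# e.g. a hypothetical 20% per-stage recovery over four stages:
print(round(cumulative_yield(0.20, 4), 4))
```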

  19. Broad-beam high-current dc ion source based on a two-stage glow discharge plasma.

    PubMed

    Vizir, A V; Oks, E M; Yushkov, G Yu

    2010-02-01

We have designed, made, and demonstrated a broad-beam dc ion source based on a two-stage hollow-cathode glow discharge plasma. The first-stage discharge (auxiliary discharge) produces electrons that are injected into the cathode cavity of a second-stage discharge (main discharge). The electron injection causes a decrease in the required operating pressure of the main discharge down to 0.05 mTorr and a decrease in required operating voltage down to about 50 V. The decrease in operating voltage of the main discharge leads to a decrease in the fraction of impurity ions in the ion beam extracted from the main gas discharge plasma to less than 0.2%. Another feature of the source is a single-grid accelerating system in which the ion accelerating voltage is applied between the plasma itself and the grid electrode. The source has produced steady-state Ar, O, and N ion beams of about 14 cm diameter and current of more than 2 A at an accelerating voltage of up to 2 kV.

  20. Development of full-field optical spatial coherence tomography system for automated identification of malaria using the multilevel ensemble classifier.

    PubMed

    Singla, Neeru; Srivastava, Vishal; Mehta, Dalip Singh

    2018-05-01

Malaria is a life-threatening infectious blood disease of humans and other animals caused by parasitic protozoans of the Plasmodium type, and is especially prevalent in developing countries. The gold standard for malaria detection is microscopic examination of chemically treated blood smears. We developed an automated optical spatial coherence tomographic system using a machine learning approach for fast identification of malaria cells. In this study, 28 samples (15 healthy, 13 with malaria-infected stages of red blood cells) were imaged by the developed system and 13 features were extracted. We designed a multilevel ensemble-based classifier for the quantitative prediction of different stages of the malaria cells. The proposed classifier was evaluated with repeated k-fold cross-validation and achieved a high average accuracy of 97.9% for identifying the malaria-infected late trophozoite stage of cells. Overall, our proposed system and multilevel ensemble model have substantial quantifiable potential to detect the different stages of malaria infection without staining or an expert. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
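The k-fold cross-validation protocol used for evaluation can be sketched as follows; the fold count and shuffling seed are illustrative, not the study's settings:

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices once, then split into k folds;
    each round uses one fold as the test set, the rest as training."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

splits = k_fold_indices(28, 4)   # 28 samples, as in the study
print([len(test) for _, test in splits])
```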

  1. Heat-Treatment-Responsive Proteins in Different Developmental Stages of Tomato Pollen Detected by Targeted Mass Accuracy Precursor Alignment (tMAPA).

    PubMed

    Chaturvedi, Palak; Doerfler, Hannes; Jegadeesan, Sridharan; Ghatak, Arindam; Pressman, Etan; Castillejo, Maria Angeles; Wienkoop, Stefanie; Egelhofer, Volker; Firon, Nurit; Weckwerth, Wolfram

    2015-11-06

    Recently, we have developed a quantitative shotgun proteomics strategy called mass accuracy precursor alignment (MAPA). The MAPA algorithm uses high mass accuracy to bin mass-to-charge (m/z) ratios of precursor ions from LC-MS analyses, determines their intensities, and extracts a quantitative sample versus m/z ratio data alignment matrix from a multitude of samples. Here, we introduce a novel feature of this algorithm that allows the extraction and alignment of proteotypic peptide precursor ions or any other target peptide from complex shotgun proteomics data for accurate quantification of unique proteins. This strategy circumvents the problem of confusing the quantification of proteins due to indistinguishable protein isoforms by a typical shotgun proteomics approach. We applied this strategy to a comparison of control and heat-treated tomato pollen grains at two developmental stages, post-meiotic and mature. Pollen is a temperature-sensitive tissue involved in the reproductive cycle of plants and plays a major role in fruit setting and yield. By LC-MS-based shotgun proteomics, we identified more than 2000 proteins in total for all different tissues. By applying the targeted MAPA data-processing strategy, 51 unique proteins were identified as heat-treatment-responsive protein candidates. The potential function of the identified candidates in a specific developmental stage is discussed.
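The binning-and-alignment step behind MAPA can be sketched as building a samples-by-m/z-bins intensity matrix. The bin width, peak lists and function name below are illustrative assumptions, not the published algorithm:

```python
def mz_alignment_matrix(samples, bin_width=0.01):
    """samples: {name: [(mz, intensity), ...]}.
    Bin precursor m/z values at a fixed width, summing intensities,
    and return (sorted bin keys, {sample: intensity row})."""
    all_bins = sorted({round(mz / bin_width)
                       for peaks in samples.values() for mz, _ in peaks})
    col = {b: j for j, b in enumerate(all_bins)}
    matrix = {}
    for name, peaks in samples.items():
        row = [0.0] * len(all_bins)
        for mz, inten in peaks:
            row[col[round(mz / bin_width)]] += inten
        matrix[name] = row
    return all_bins, matrix

# Two toy samples: the two ~500.123 peaks land in the same bin.
bins, mat = mz_alignment_matrix({
    "control": [(500.1234, 10.0), (600.5678, 5.0)],
    "heat":    [(500.1236, 20.0)],
})
print(len(bins), mat["heat"])
```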

  2. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network.

    PubMed

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-04-13

Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples of such regulations are 'traffic light ahead' or 'pedestrian crossing' indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real world road signs are then extracted using vision-only information. The system is based on two stages: one performs the detection and the other the recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, a robust custom feature extraction method is introduced and used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. This proposed system achieved an average of 99.90% accuracy with 99.90% sensitivity, 99.90% specificity, 99.90% f-measure, and 0.001 false positive rate (FPR) with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications.
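The reported figures (sensitivity, specificity, f-measure, FPR) all derive from confusion-matrix counts. A minimal reference implementation, with toy counts chosen only to produce numbers of a similar magnitude, not the study's actual confusion matrix:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from raw counts."""
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    fpr = fp / (fp + tn)
    return sensitivity, specificity, f_measure, fpr

print(classification_metrics(tp=999, fp=1, tn=999, fn=1))
```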

  3. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network

    PubMed Central

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-01-01

Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples of such regulations are ‘traffic light ahead’ or ‘pedestrian crossing’ indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real world road signs are then extracted using vision-only information. The system is based on two stages: one performs the detection and the other the recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, a robust custom feature extraction method is introduced and used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. This proposed system achieved an average of 99.90% accuracy with 99.90% sensitivity, 99.90% specificity, 99.90% f-measure, and 0.001 false positive rate (FPR) with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications. PMID:28406471

  4. Photometric Supernova Classification with Machine Learning

    NASA Astrophysics Data System (ADS)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.

  5. Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks

    PubMed Central

    Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni

    2015-01-01

    Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient’s response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a “radiomics” approach whereby a large amount of quantitative features is automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models. PMID:26355298

  6. Sample-space-based feature extraction and class preserving projection for gene expression data.

    PubMed

    Wang, Wenjun

    2013-01-01

In order to overcome the problems of high computational complexity and serious matrix singularity in feature extraction using Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (LDA) on high-dimensional data, sample-space-based feature extraction is presented, which transforms the computation procedure of feature extraction from gene space to sample space by representing the optimal transformation vector as a weighted sum of samples. The technique is used in the implementation of PCA, LDA and Class Preserving Projection (CPP), a newly proposed method for discriminant feature extraction, and the experimental results on gene expression data demonstrate the effectiveness of the method.
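The sample-space trick rests on the fact that, with n samples of dimension d and n much smaller than d, the optimal transformation vector lies in the span of the samples, so eigen-analysis can be carried out on the n-by-n Gram matrix K = X X^T instead of a d-by-d scatter matrix. A minimal illustration of the size reduction, not the paper's code:

```python
def gram_matrix(X):
    """X: list of n samples, each a list of d values.
    Returns the n x n matrix of pairwise inner products."""
    n = len(X)
    return [[sum(xi * yi for xi, yi in zip(X[a], X[b]))
             for b in range(n)] for a in range(n)]

X = [[1.0, 0.0, 2.0, 0.0], [0.0, 1.0, 0.0, 2.0]]  # n=2 samples, d=4
K = gram_matrix(X)
print(K)  # 2x2 instead of 4x4
```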

  7. Nuclear events of apoptosis in vitro in cell-free mitotic extracts: a model system for analysis of the active phase of apoptosis

    PubMed Central

    1993-01-01

    We have developed a cell-free system that induces the morphological transformations characteristic of apoptosis in isolated nuclei. The system uses extracts prepared from mitotic chicken hepatoma cells following a sequential S phase/M phase synchronization. When nuclei are added to these extracts, the chromatin becomes highly condensed into spherical domains that ultimately extrude through the nuclear envelope, forming apoptotic bodies. The process is highly synchronous, and the structural changes are completed within 60 min. Coincident with these morphological changes, the nuclear DNA is cleaved into a nucleosomal ladder. Both processes are inhibited by Zn2+, an inhibitor of apoptosis in intact cells. Nuclear lamina disassembly accompanies these structural changes in added nuclei, and we show that lamina disassembly is a characteristic feature of apoptosis in intact cells of mouse, human and chicken. This system may provide a powerful means of dissecting the biochemical mechanisms underlying the final stages of apoptosis. PMID:8408207

  8. Temporal integration at consecutive processing stages in the auditory pathway of the grasshopper.

    PubMed

    Wirtssohn, Sarah; Ronacher, Bernhard

    2015-04-01

    Temporal integration in the auditory system of locusts was quantified by presenting single clicks and click pairs while performing intracellular recordings. Auditory neurons were studied at three processing stages, which form a feed-forward network in the metathoracic ganglion. Receptor neurons and most first-order interneurons ("local neurons") encode the signal envelope, while second-order interneurons ("ascending neurons") tend to extract more complex, behaviorally relevant sound features. In different neuron types of the auditory pathway we found three response types: no significant temporal integration (some ascending neurons), leaky energy integration (receptor neurons and some local neurons), and facilitatory processes (some local and ascending neurons). The receptor neurons integrated input over very short time windows (<2 ms). Temporal integration on longer time scales was found at subsequent processing stages, indicative of within-neuron computations and network activity. These different strategies, realized at separate processing stages and in parallel neuronal pathways within one processing stage, could enable the grasshopper's auditory system to evaluate longer time windows and thus to implement temporal filters, while at the same time maintaining a high temporal resolution. Copyright © 2015 the American Physiological Society.

  9. Low complexity feature extraction for classification of harmonic signals

    NASA Astrophysics Data System (ADS)

    William, Peter E.

In this dissertation, feature extraction algorithms have been developed for extraction of characteristic features from harmonic signals. The common theme for all developed algorithms is the simplicity in generating a significant set of features directly from the time domain harmonic signal. The features are a time domain representation of the composite, yet sparse, harmonic signature in the spectral domain. The algorithms are adequate for low-power unattended sensors which perform sensing, feature extraction, and classification in a standalone scenario. The first algorithm generates the characteristic features using only the durations between successive zero crossings. The second algorithm estimates the amplitudes of the harmonic structure employing a simplified least squares method without the need to estimate the true harmonic parameters of the source signal. The third algorithm, resulting from a collaborative effort with Daniel White at the DSP Lab, University of Nebraska-Lincoln, presents an analog front end approach that utilizes a multichannel analog projection and integration to extract the sparse spectral features from the analog time domain signal. Classification is performed using a multilayer feedforward neural network. Evaluation of the proposed feature extraction algorithms for classification through the processing of several acoustic and vibration data sets (including military vehicles and rotating electric machines) with comparison to spectral features shows that, for harmonic signals, time domain features are simpler to extract and provide equivalent or improved reliability over the spectral features in both the detection probabilities and false alarm rate.
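The first algorithm's core measurement, the durations between successive zero crossings, can be sketched on a synthetic sine (an illustration of the idea, not the dissertation's implementation):

```python
import math

def zero_crossing_intervals(signal):
    """Sample indices where the sign flips, returned as the
    durations (in samples) between successive crossings."""
    crossings = [i for i in range(1, len(signal))
                 if (signal[i - 1] < 0) != (signal[i] < 0)]
    return [b - a for a, b in zip(crossings, crossings[1:])]

# Synthetic sine of period 8 samples, phase-shifted so no sample
# lands exactly on zero; crossings occur every half period.
sig = [math.sin(2 * math.pi * (i + 0.5) / 8) for i in range(32)]
print(zero_crossing_intervals(sig))
```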

  10. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    PubMed

    Zhu, Xiangbin; Qiu, Huiling

    2016-01-01

Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.

  11. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections

    PubMed Central

    2016-01-01

Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved. PMID:27893761

  12. Quantitative 3-D Imaging, Segmentation and Feature Extraction of the Respiratory System in Small Mammals for Computational Biophysics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Lynn L.; Trease, Harold E.; Fowler, John

    2007-03-15

    One of the critical steps toward performing computational biology simulations, using mesh based integration methods, is in using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional; therefore, the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features with an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. “Quantitative” image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed water-tight surface.

  13. Deep Learning Representation from Electroencephalography of Early-Stage Creutzfeldt-Jakob Disease and Features for Differentiation from Rapidly Progressive Dementia.

    PubMed

    Morabito, Francesco Carlo; Campolo, Maurizio; Mammone, Nadia; Versaci, Mario; Franceschetti, Silvana; Tagliavini, Fabrizio; Sofia, Vito; Fatuzzo, Daniela; Gambardella, Antonio; Labate, Angelo; Mumoli, Laura; Tripodi, Giovanbattista Gaspare; Gasparini, Sara; Cianci, Vittoria; Sueri, Chiara; Ferlazzo, Edoardo; Aguglia, Umberto

    2017-03-01

    A novel technique of quantitative EEG for differentiating patients with early-stage Creutzfeldt-Jakob disease (CJD) from other forms of rapidly progressive dementia (RPD) is proposed. The discrimination is based on the extraction of suitable features from the time-frequency representation of the EEG signals through continuous wavelet transform (CWT). An average measure of complexity of the EEG signal obtained by permutation entropy (PE) is also included. The dimensionality of the feature space is reduced through a multilayer processing system based on the recently emerged deep learning (DL) concept. The DL processor includes a stacked auto-encoder, trained by unsupervised learning techniques, and a classifier whose parameters are determined in a supervised way by associating the known category labels to the reduced vector of high-level features generated by the previous processing blocks. The supervised learning step is carried out by using either support vector machines (SVM) or multilayer neural networks (MLP-NN). A subset of EEG from patients suffering from Alzheimer's Disease (AD) and healthy controls (HC) is considered for differentiating CJD patients. When fine-tuning the parameters of the global processing system by a supervised learning procedure, the proposed system is able to achieve an average accuracy of 89%, an average sensitivity of 92%, and an average specificity of 89% in differentiating CJD from RPD. Similar results are obtained for CJD versus AD and CJD versus HC.
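The permutation entropy used here as the average complexity measure ranks the samples inside short embedded windows and computes the entropy of the resulting ordinal patterns. A minimal sketch of the generic Bandt-Pompe form (the paper's embedding order and delay are not specified, so the defaults below are assumptions):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal (Bandt-Pompe)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Count the ordinal pattern (rank order) of each embedded window.
    patterns = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        key = tuple(np.argsort(window))
        patterns[key] = patterns.get(key, 0) + 1
    probs = np.array(list(patterns.values()), dtype=float) / n
    h = -np.sum(probs * np.log2(probs))
    return h / np.log2(factorial(order))  # normalize to [0, 1]
```

A monotone signal yields a single pattern (entropy 0), while white noise spreads probability over all patterns (entropy near 1), which is what makes the measure useful for separating EEG regimes.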

  14. Inefficient conjunction search made efficient by concurrent spoken delivery of target identity.

    PubMed

    Reali, Florencia; Spivey, Michael J; Tyler, Melinda J; Terranova, Joseph

    2006-08-01

    Visual search based on a conjunction of two features typically elicits reaction times that increase linearly as a function of the number of distractors, whereas search based on a single feature is essentially unaffected by set size. These and related findings have often been interpreted as evidence of a serial search stage that follows a parallel search stage. However, a wide range of studies has shown a form of blending of these two processes. For example, when a spoken instruction identifies the conjunction target concurrently with the visual display, the effect of set size is significantly reduced, suggesting that incremental linguistic processing of the first feature adjective and then the second feature adjective may facilitate something approximating a parallel extraction of objects during search for the target. Here, we extend these results to a variety of experimental designs. First, we replicate the result with a mixed-trials design (ruling out potential strategies associated with the blocked design of the original study). Second, in a mixed-trials experiment, the order of adjective types in the spoken query varies randomly across conditions. In a third experiment, we extend the effect to a triple-conjunction search task. A fourth (control) experiment demonstrates that these effects are not due to an efficient odd-one-out search that ignores the linguistic input. This series of experiments, along with attractor-network simulations of the phenomena, provide further evidence toward understanding linguistically mediated influences in real-time visual search processing.

  15. Research on oral test modeling based on multi-feature fusion

    NASA Astrophysics Data System (ADS)

    Shi, Yuliang; Tao, Yiyue; Lei, Jun

    2018-04-01

    In this paper, the spectrum of the speech signal is taken as the input for feature extraction. The strength of the pulse-coupled neural network (PCNN) in image segmentation and related processing is used to process the speech spectrogram and extract features, exploring a new method that combines speech signal processing with image processing. In addition to the spectrogram features, MFCC-based spectral features are computed and fused with them to further improve the accuracy of spoken-language assessment. Because the resulting features are relatively complex and discriminative, we use a Support Vector Machine (SVM) to construct the classifier, and then compare the extracted test voice features with the standard voice features to assess how standard the speech is. Experiments show that extracting features from spectrograms using a PCNN is feasible, and that fusing image features with spectral features improves detection accuracy.
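The spectrogram front end of such a pipeline can be sketched with a Hann-windowed short-time FFT (a generic sketch; the frame length, hop size, and sampling rate below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def log_spectrogram(signal, frame_len=256, hop=128):
    """Log-magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))  # (frames, freq bins)
    return np.log1p(spec).T                     # (freq bins, frames), image-like
```

The transposed log-magnitude array is exactly the kind of 2-D "image" that a PCNN or other image-processing stage can then segment.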

  16. A graph lattice approach to maintaining and learning dense collections of subgraphs as image features.

    PubMed

    Saund, Eric

    2013-10-01

    Effective object and scene classification and indexing depend on extraction of informative image features. This paper shows how large families of complex image features in the form of subgraphs can be built out of simpler ones through construction of a graph lattice—a hierarchy of related subgraphs linked in a lattice. Robustness is achieved by matching many overlapping and redundant subgraphs, which allows the use of inexpensive exact graph matching, instead of relying on expensive error-tolerant graph matching to a minimal set of ideal model graphs. Efficiency in exact matching is gained by exploitation of the graph lattice data structure. Additionally, the graph lattice enables methods for adaptively growing a feature space of subgraphs tailored to observed data. We develop the approach in the domain of rectilinear line art, specifically for the practical problem of document forms recognition. We are especially interested in methods that require only one or very few labeled training examples per category. We demonstrate two approaches to using the subgraph features for this purpose. Using a bag-of-words feature vector we achieve essentially single-instance learning on a benchmark forms database, following an unsupervised clustering stage. Further performance gains are achieved on a more difficult dataset using a feature voting method and feature selection procedure.

  17. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in robotics. It has also recently been used in biometric and multimedia information retrieval systems. This technology builds on successive research in audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method usually used as one step within complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed in which the PDF alone serves as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
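The core idea, an empirical PDF per frame, can be sketched as a normalized amplitude histogram (the bin count and amplitude range here are illustrative assumptions):

```python
import numpy as np

def pdf_feature(frame, bins=32, value_range=(-1.0, 1.0)):
    """Empirical PDF of one audio frame: a normalized amplitude histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=value_range, density=True)
    bin_width = (value_range[1] - value_range[0]) / bins
    return hist * bin_width  # per-bin probabilities, summing to 1 for in-range data
```

Plotting this vector per frame, as the paper does, gives the PDF "shape" compared across speakers.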

  18. Automated sleep scoring and sleep apnea detection in children

    NASA Astrophysics Data System (ADS)

    Baraglia, David P.; Berryman, Matthew J.; Coussens, Scott W.; Pamula, Yvonne; Kennedy, Declan; Martin, A. James; Abbott, Derek

    2005-12-01

    This paper investigates the automated detection of a patient's breathing rate and heart rate from their skin conductivity as well as sleep stage scoring and breathing event detection from their EEG. The software developed for these tasks is tested on data sets obtained from the sleep disorders unit at the Adelaide Women's and Children's Hospital. The sleep scoring and breathing event detection tasks used neural networks to achieve signal classification. The Fourier transform and the Higuchi fractal dimension were used to extract features for input to the neural network. The filtered skin conductivity appeared visually to bear a similarity to the breathing and heart rate signal, but a more detailed evaluation showed the relation was not consistent. Sleep stage classification was achieved with an accuracy of around 65%, with some stages scored accurately and others poorly. The two breathing events, hypopnea and apnea, were scored with varying degrees of accuracy, the highest scores being around 75% and 30%.
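The Higuchi fractal dimension used as an input feature can be sketched as follows (a standard formulation of the estimator; the choice of k_max is an assumed tuning parameter):

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension: slope of log curve-length vs. log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_len, log_inv_k = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            # Curve length of the series subsampled with step k and offset m,
            # rescaled by the standard Higuchi normalization factor.
            dist = np.sum(np.abs(np.diff(x[idx])))
            norm = (n - 1) / ((len(idx) - 1) * k * k)
            lengths.append(dist * norm)
        log_len.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_len, 1)
    return slope
```

A straight line yields a dimension of 1 and white noise approaches 2, so the value summarizes signal roughness in a single EEG feature.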

  19. Analysis of x-ray hand images for bone age assessment

    NASA Astrophysics Data System (ADS)

    Serrat, Joan; Vitria, Jordi M.; Villanueva, Juan J.

    1990-09-01

    In this paper we describe a model-based system for the assessment of skeletal maturity on hand radiographs by the TW2 method. The problem consists in classifying a set of bones appearing in an image into one of several stages described in an atlas. A first approach, consisting of independent pre-processing, segmentation and classification phases, is also presented. However, it is only well suited to well-contrasted, low-noise images without superimposed bones, where edge detection by zero crossings of second directional derivatives is able to extract all bone contours, perhaps with small gaps and few false edges on the background. Hence, the use of all available knowledge about the problem domain is needed to build a more general system. We have designed a rule-based system to narrow down the range of possible stages for each bone and guide the analysis process. It calls procedures written in conventional languages for matching stage models against the image and computing the features needed in the classification process.

  20. Doppler radar fall activity detection using the wavelet transform.

    PubMed

    Su, Bo Yu; Ho, K C; Rantz, Marilyn J; Skubic, Marjorie

    2015-03-01

    We propose in this paper the use of the Wavelet transform (WT) to detect human falls using a ceiling mounted Doppler range control radar. The radar senses any motions from falls as well as nonfalls due to the Doppler effect. The WT is very effective in distinguishing the falls from other activities, making it a promising technique for radar fall detection in nonobtrusive in-home elder care applications. The proposed radar fall detector consists of two stages. The prescreen stage uses the coefficients of wavelet decomposition at a given scale to identify the time locations in which fall activities may have occurred. The classification stage extracts the time-frequency content from the wavelet coefficients at many scales to form a feature vector for fall versus nonfall classification. The selection of different wavelet functions is examined to achieve better performance. Experimental results using the data from the laboratory and real in-home environments validate the promising and robust performance of the proposed detector.
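The two-stage idea can be sketched with a plain Haar decomposition standing in for the paper's wavelet (the wavelet family, decomposition level, and threshold below are assumptions for illustration):

```python
import numpy as np

def haar_details(signal, levels=3):
    """Multi-level Haar wavelet detail coefficients of a 1-D signal."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        if len(a) % 2:
            a = a[:-1]
        pairs = a.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # approximation coefficients
    return details

def prescreen(signal, level=2, threshold=3.0):
    """Flag coefficient locations whose magnitude exceeds a z-score threshold."""
    d = haar_details(signal, levels=level)[-1]
    mag = np.abs(d)
    z = (mag - mag.mean()) / (mag.std() + 1e-12)
    return np.where(z > threshold)[0]
```

Flagged locations would then be passed to the second stage, where coefficients at many scales form the fall/nonfall feature vector.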

  1. Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery

    PubMed Central

    Liu, Shouyang; Baret, Fred; Andrieu, Bruno; Burger, Philippe; Hemmerlé, Matthieu

    2017-01-01

    Crop density is a key agronomical trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used. However, it is tedious and time consuming. The main objective of this work is to develop a machine vision based method to automate the density survey of wheat at early stages. RGB images taken with a high resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments showing contrasted conditions with sowing densities ranging from 100 to 600 seeds·m⁻². Results demonstrate that the density is accurately estimated with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages. PMID:28559901
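The object-extraction step, from green-pixel mask to per-object features, can be sketched with standard connected-component labeling; the feature set here (area, bounding-box height and width) is a simplified stand-in for the object features fed to the paper's neural network:

```python
import numpy as np
from scipy import ndimage

def plant_objects(mask):
    """Label connected components in a binary 'green pixel' mask and
    return simple per-object features (area, bbox height, bbox width)."""
    labels, n = ndimage.label(mask)
    feats = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        area = int((labels[sl] > 0).sum())
        feats.append((area, h, w))
    return n, feats
```

In the paper's pipeline, a regressor then maps such features to a plant count per object, since touching plants merge into a single component.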

  2. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
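A minimal illustration of morphological feature extraction of this kind is a profile of openings and closings by increasing structuring elements (the sizes below are illustrative, not the authors' parameters):

```python
import numpy as np
from scipy import ndimage

def morphological_profile(image, sizes=(3, 5, 7)):
    """Stack the image with its openings and closings by growing
    structuring elements: a common morphological feature set in which
    features that vanish at a given size reveal structure scale."""
    feats = [np.asarray(image, dtype=float)]
    for s in sizes:
        se = np.ones((s, s))
        feats.append(ndimage.grey_opening(image, footprint=se))
        feats.append(ndimage.grey_closing(image, footprint=se))
    return np.stack(feats)
```

Bright structures smaller than the structuring element disappear under opening, which is what lets such profiles isolate landmark-sized features automatically.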

  3. Extracting contours of oval-shaped objects by Hough transform and minimal path algorithms

    NASA Astrophysics Data System (ADS)

    Tleis, Mohamed; Verbeek, Fons J.

    2014-04-01

    Circular and oval-like objects are very common in cell and microbiology. These objects need to be analyzed, and to that end, digitized images from the microscope are used so as to come to an automated analysis pipeline. It is essential to detect all the objects in an image as well as to extract the exact contour of each individual object. In this manner it becomes possible to perform measurements on these objects, i.e. shape and texture features. Our measurement objective is achieved by probing contour detection through dynamic programming. In this paper we describe a method that uses the Hough transform and two minimal path algorithms to detect contours of (ovoid-like) objects. These algorithms are based on an existing grey-weighted distance transform and a new algorithm to extract the circular shortest path in an image. The methods are tested on an artificial dataset of 1,000 images, with an F1-score of 0.972. In a case study with yeast cells, contours from our methods were compared with another solution using Pratt's figure of merit. Results indicate that our methods were more precise based on a comparison with a ground-truth dataset. As far as yeast cells are concerned, the segmentation and measurement results enable, in future work, to retrieve information from different developmental stages of the cell using complex features.
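The Hough-transform stage for circle centers can be sketched as a voting accumulator (a fixed known radius and coarse angular sampling are simplifying assumptions; the paper additionally chains minimal-path algorithms for the exact contour, omitted here):

```python
import numpy as np

def hough_circle(edge_points, radius, shape):
    """Accumulate votes for circle centers at a fixed radius: each edge
    point votes along a circle of that radius around itself."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered add handles repeats
    return acc
```

The accumulator peak gives a center estimate; a shortest-path search around that center can then refine the true (ovoid) contour.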

  4. SU-D-207B-03: A PET-CT Radiomics Comparison to Predict Distant Metastasis in Lung Adenocarcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coroller, T; Yip, S; Lee, S

    2016-06-15

    Purpose: Early prediction of distant metastasis may provide crucial information for adaptive therapy, subsequently improving patient survival. Radiomic features extracted from PET and CT images have been used for assessing tumor phenotype and predicting clinical outcomes. This study investigates the value of radiomic features in predicting distant metastasis (DM) in non-small cell lung cancer (NSCLC). Methods: A total of 108 patients with stage II–III lung adenocarcinoma were included in this retrospective study. Twenty radiomic features were selected (10 from CT and 10 from PET). Conventional features (metabolic tumor volume, SUV, volume and diameter) were included for comparison. The concordance index (CI) was used to evaluate the features' prognostic value. The Noether test was used to compute p-values assessing whether each CI differed significantly from random (CI = 0.5), adjusted for multiple testing using the false discovery rate (FDR). Results: A total of 70 patients had DM (64.8%) with a median time to event of 8.8 months. The median delivered dose was 60 Gy (range 33–68 Gy). None of the conventional features from PET (CI ranged from 0.51 to 0.56) or CT (CI ranged from 0.57 to 0.58) were significantly different from random. Five radiomic features were significantly prognostic for DM (p-values < 0.05). Four were extracted from CT (CI = 0.61 to 0.63, p-value < 0.01) and one from PET, which was also the most prognostic (CI = 0.64, p-value < 0.001). Conclusion: This study demonstrated a significant association between radiomic features and DM for patients with locally advanced lung adenocarcinoma. Moreover, conventional (clinically utilized) metrics were not significantly associated with DM. Radiomics can potentially help classify patients at higher risk of DM, allowing clinicians to individualize treatment, such as intensification of chemotherapy, to reduce the risk of DM and improve survival. R.M. has consulting interests with Amgen.

  5. DNA extraction and barcode identification of development stages of forensically important flies in the Czech Republic.

    PubMed

    Olekšáková, Tereza; Žurovcová, Martina; Klimešová, Vanda; Barták, Miroslav; Šuláková, Hana

    2018-04-01

    Several methods of DNA extraction, coupled with 'DNA barcoding' species identification, were compared using specimens from early developmental stages of forensically important flies from the Calliphoridae and Sarcophagidae families. DNA was extracted at three immature stages - eggs, the first instar larvae, and empty pupal cases (puparia) - using four different extraction methods, namely, one simple 'homemade' extraction buffer protocol and three commercial kits. The extraction conditions, including the amount of proteinase K and incubation times, were optimized. The simple extraction buffer method was successful for half of the eggs and for the first instar larval samples. The DNA Lego Kit and DEP-25 DNA Extraction Kit were useful for DNA extractions from the first instar larvae samples, and the DNA Lego Kit was also successful regarding the extraction from eggs. The QIAamp DNA mini kit was the most effective; the extraction was successful with regard to all sample types - eggs, larvae, and puparia.

  6. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    PubMed

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotion in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established using cloud generators. With a forward cloud generator, facial expression images can be re-generated as many times as needed to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, conclusions and remarks are given.

  7. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.

  8. PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction

    PubMed Central

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction. PMID:21512582
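As an illustration of the kind of feature such a module provides, here is a minimal re-implementation of one classic EEG feature, the Petrosian fractal dimension (a sketch of the standard formula, not PyEEG's own code):

```python
import numpy as np

def petrosian_fd(x):
    """Petrosian fractal dimension, driven by sign changes in the
    first difference of the signal."""
    x = np.asarray(x, dtype=float)
    diff = np.diff(x)
    n_sc = int(np.sum(diff[1:] * diff[:-1] < 0))  # sign-change count
    n = len(x)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_sc)))
```

Smooth signals have few derivative sign changes and a dimension near 1; noisy signals score higher, so the single number already separates signal regimes cheaply.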

  9. Deep feature extraction and combination for synthetic aperture radar target classification

    NASA Astrophysics Data System (ADS)

    Amrani, Moussa; Jiang, Feng

    2017-10-01

    Feature extraction has always been a difficult problem affecting the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR). Selecting discriminative features with which to train a classifier is an essential prerequisite. Inspired by the great success of convolutional neural networks (CNNs), we address the problem of SAR target classification by proposing a feature extraction method that exploits the deep features extracted from CNNs on SAR images, yielding more powerful discriminative features and a more robust representation. First, the pretrained VGG-S net is fine-tuned on the moving and stationary target acquisition and recognition (MSTAR) public release database. Second, after simple preprocessing is performed, the fine-tuned network is used as a fixed feature extractor to extract deep features from the processed SAR images. Third, the extracted deep features are fused by using a traditional concatenation and a discriminant correlation analysis algorithm. Finally, for target classification, a K-nearest neighbors algorithm based on LogDet divergence-based metric learning with triplet constraints is adopted as a baseline classifier. Experiments on MSTAR are conducted, and the classification accuracy results demonstrate that the proposed method outperforms the state-of-the-art methods.
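The fusion-and-classification back end can be sketched generically; here plain z-scored concatenation and Euclidean k-NN stand in for the paper's discriminant correlation analysis and LogDet metric-learned k-NN:

```python
import numpy as np

def fuse(f1, f2):
    """Concatenation fusion of two feature matrices (n_samples, dim_i),
    each z-scored so neither view dominates the distance metric."""
    z = lambda f: (f - f.mean(0)) / (f.std(0) + 1e-12)
    return np.hstack([z(f1), z(f2)])

def knn_predict(train_x, train_y, test_x, k=1):
    """Plain Euclidean k-NN (stand-in for the metric-learned k-NN)."""
    d = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_y[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])
```

The paper's point is that learning the metric (rather than using raw Euclidean distance, as above) further improves the classifier on the fused deep features.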

  10. Supervised non-negative tensor factorization for automatic hyperspectral feature extraction and target discrimination

    NASA Astrophysics Data System (ADS)

    Anderson, Dylan; Bapst, Aleksander; Coon, Joshua; Pung, Aaron; Kudenov, Michael

    2017-05-01

    Hyperspectral imaging provides a highly discriminative and powerful signature for target detection and discrimination. Recent literature has shown that considering additional target characteristics, such as spatial or temporal profiles, simultaneously with spectral content can greatly increase classifier performance. Considering these additional characteristics in a traditional discriminative algorithm requires a feature extraction step be performed first. An example of such a pipeline is computing a filter bank response to extract spatial features followed by a support vector machine (SVM) to discriminate between targets. This decoupling between feature extraction and target discrimination yields features that are suboptimal for discrimination, reducing performance. This performance reduction is especially pronounced when the number of features or available data is limited. In this paper, we propose the use of Supervised Nonnegative Tensor Factorization (SNTF) to jointly perform feature extraction and target discrimination over hyperspectral data products. SNTF learns a tensor factorization and a classification boundary from labeled training data simultaneously. This ensures that the features learned via tensor factorization are optimal for both summarizing the input data and separating the targets of interest. Practical considerations for applying SNTF to hyperspectral data are presented, and results from this framework are compared to decoupled feature extraction/target discrimination pipelines.
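The decoupled baseline that the paper argues against can be made concrete: spatial features come from a fixed filter bank, and the classifier is trained on them afterwards with no feedback into the feature extractor (the kernels below are illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

def filter_bank_features(cube, kernels):
    """Decoupled spatial feature extraction for a hyperspectral cube
    (rows, cols, bands): mean absolute response of each kernel per band."""
    feats = []
    for b in range(cube.shape[2]):
        band = cube[:, :, b]
        for k in kernels:
            feats.append(np.abs(convolve2d(band, k, mode='valid')).mean())
    return np.array(feats)
```

An SVM trained on this vector never influences the kernels; SNTF's contribution is to learn the factorization and the decision boundary jointly instead.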

  11. Optimizing the antioxidant activity of Kelakai (Stenochlaena palustris) through multiple-stage extraction process

    NASA Astrophysics Data System (ADS)

    Wijaya, Elza; Widiputri, Diah Indriani; Rahmawati, Della

    2017-11-01

    Kelakai is known as a traditional remedy for treating several ailments, such as fever and anemia, and for stimulating milk production in breastfeeding mothers. Beyond those benefits, kelakai has also been shown to contain several kinds of antioxidant compounds. Extracting these antioxidants is one way to determine the antioxidant activity contained in kelakai. In this research, a multiple-stage extraction process was carried out to optimize the antioxidant activity, with the most suitable conditions chosen from data obtained in single-stage extraction experiments. The best single-stage condition used a milled sample in water solvent for 12 hours at 44°C, producing the highest antioxidant activity: 919.95 mg of extract was needed to inhibit 50% of DPPH. The antioxidant activity of the extract obtained from the multiple-stage process was higher than that from the single stage: the multiple-stage process increased the antioxidant activity by up to 72.43%, requiring only 404 mg to inhibit 50% of DPPH.

  12. Appearance-based human gesture recognition using multimodal features for human computer interaction

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucially important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, most previous work in gesture recognition has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework that combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
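The two fusion strategies can be sketched generically (the weights and per-class score vectors are illustrative placeholders, and the LDA projection and condensation classifier are omitted):

```python
import numpy as np

def feature_level_fusion(face_feats, hand_feats, w_face=0.5, w_hand=0.5):
    """Early fusion: weight and concatenate the two feature groups,
    to be projected by LDA and classified downstream."""
    return np.hstack([w_face * face_feats, w_hand * hand_feats])

def decision_level_fusion(p_face, p_hand, w_face=0.5, w_hand=0.5):
    """Late fusion: weighted sum of per-class scores from each modality,
    returning the winning class index."""
    scores = w_face * np.asarray(p_face) + w_hand * np.asarray(p_hand)
    return int(np.argmax(scores))
```

The weight choice is where the two strategies differ in practice: early fusion bakes the weights into one feature space, while late fusion lets each modality classify independently first.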

  13. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  14. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  15. Intelligence, Surveillance, and Reconnaissance Fusion for Coalition Operations

    DTIC Science & Technology

    2008-07-01

    classification of the targets of interest. The MMI features extracted in this manner have two properties that provide a sound justification for...are generalizations of well- known feature extraction methods such as Principal Components Analysis (PCA) and Independent Component Analysis (ICA...augment (without degrading performance) a large class of generic fusion processes. Ontologies Classifications Feature extraction Feature analysis

  16. Molecular characteristics of Illicium verum extractives to activate acquired immune response

    PubMed Central

    Peng, Wanxi; Lin, Zhi; Wang, Lansheng; Chang, Junbo; Gu, Fangliang; Zhu, Xiangwei

    2015-01-01

    Illicium verum, whose extractives can activate the demic acquired immune response, is an expensive medicinal plant. However, the rich extractives in I. verum biomass have been largely wasted owing to inefficient extraction and separation processes. To further utilize these biomedical resources for the acquired immune response, four extractives were obtained by SJYB extraction, and the immunological molecules of the SJYB extractives were then identified and analyzed by GC–MS. The results showed that the first-stage extractives contained 108 components including anethole (40.27%), 4-methoxy-benzaldehyde (4.25%), etc.; the second-stage extractives had 5 components including anethole (84.82%), 2-hydroxy-2-(4-methoxy-phenyl)-n-methyl-acetamide (7.11%), etc.; the third-stage extractives contained one component, namely anethole (100%); and the fourth-stage extractives contained 5 components including cyclohexyl-benzene (64.64%), 1-(1-methylethenyl)-3-(1-methylethyl)-benzene (17.17%), etc. The SJYB extractives of I. verum biomass had a main retention time between 10 and 20 min; what's more, they contained many biomedical molecules, such as anethole, eucalyptol, [1S-(1α,4aα,10aβ)]-1,2,3,4,4a,9,10,10a-octahydro-1,4a-dimethyl-7-(1-methylethyl)-1-phenanthrenecarboxylic acid, stigmast-4-en-3-one, γ-sitosterol, and so on. The functional analytical results thus suggested that the SJYB extractives of I. verum have a function in activating the acquired immune response and a huge potential in biomedicine. PMID:27081359

  17. Extraction of multi-scale landslide morphological features based on local Gi* using airborne LiDAR-derived DEM

    NASA Astrophysics Data System (ADS)

    Shi, Wenzhong; Deng, Susu; Xu, Wenbing

    2018-02-01

    For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results of the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature image generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with the morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 5 years) landslides and approximately 35% of historical (> 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should be filtered using a filtering strategy based on supplementary information provided by expert knowledge or other data sources.
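    The Getis-Ord Gi* statistic behind this clustering test can be sketched in one dimension (a minimal sketch with binary neighborhood weights, not the paper's raster implementation; `radius` is an assumed parameter name): each position is scored by how far its neighborhood sum departs from the value expected under the global mean, in standard-error units.

```python
import math

def local_gi_star(values, radius=1):
    """Getis-Ord Gi* for a 1-D sequence with binary weights: large
    positive scores flag clusters of high values, negative scores
    flag clusters of low values."""
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum(v * v for v in values) / n - mean * mean)
    scores = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n - 1, i + radius)
        w = hi - lo + 1                      # sum of binary weights
        local_sum = sum(values[lo:hi + 1])
        denom = s * math.sqrt((n * w - w * w) / (n - 1))
        scores.append((local_sum - mean * w) / denom)
    return scores
```

    On a flat sequence with one block of high curvature values, the score peaks at the block center and is slightly negative in the flat surround, which is the behavior the cell-clustering step relies on.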

  18. Image-based Analysis of Emotional Facial Expressions in Full Face Transplants.

    PubMed

    Bedeloglu, Merve; Topcu, Çagdas; Akgul, Arzu; Döger, Ela Naz; Sever, Refik; Ozkan, Ozlenen; Ozkan, Omer; Uysal, Hilmi; Polat, Ovunc; Çolak, Omer Halil

    2018-01-20

    In this study, we aim to determine, from photographs, the degree of development in the emotional expressions of full face transplant patients, so that a rehabilitation process can later be planned according to this assessment. As envisaged, in full face transplant cases the determination of expressions may be confused or may not be achieved as in the healthy control group. For the image-based analysis, a control group consisting of 9 healthy males and 2 full-face transplant patients participated in the study. Appearance-based Gabor Wavelet Transform (GWT) and Local Binary Pattern (LBP) methods were adopted for recognizing the neutral expression and 6 emotional expressions: angry, scared, happy, hate, confused and sad. Feature extraction was carried out using each method alone and both methods combined serially. For the performed expressions, features extracted from the most distinctive zones of the facial area, the eye and mouth regions, were used to classify the emotions, and the combination of these region features was used to improve classifier performance. The ability of control subjects and transplant patients to perform emotional expressions was determined with a K-nearest neighbor (KNN) classifier with region-specific and method-specific decision stages, and the results were compared with the healthy group. It was observed that transplant patients do not reflect some emotional expressions, and there were confusions among expressions.

  19. Defect detection of castings in radiography images using a robust statistical feature.

    PubMed

    Zhao, Xinyue; He, Zaixing; Zhang, Shuyou

    2014-01-01

    One of the most commonly used optical methods for defect detection is radiographic inspection. Compared with methods that extract defects directly from the radiography image, model-based methods deal well with objects of complex structure. However, detecting small low-contrast defects in nonuniformly illuminated images is still a major challenge for them. In this paper, we present a new method based on the grayscale arranging pairs (GAP) feature to detect casting defects in radiography images automatically. First, a model is built using pixel pairs with a stable intensity relationship based on the GAP feature from previously acquired images. Second, defects are extracted by statistically comparing the intensity-difference signs between the input image and the model. The robustness of the proposed method to noise and illumination variations has been verified on casting radioscopic images with defects. The experimental results showed that the average computation time of the proposed method in the testing stage is 28 ms per image on a computer with a Pentium Core 2 Duo 3.00 GHz processor. For comparison, we also evaluated the performance of the proposed method as well as that of the mixture-of-Gaussian-based and crossing line profile methods. The proposed method achieved 2.7% and 2.0% false negative rates in the noise and illumination variation experiments, respectively.
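    The stable intensity-relationship idea can be sketched on flattened 1-D "images" (a hypothetical simplification, not the published GAP algorithm): keep only the pixel pairs whose difference sign is identical across all defect-free training images, then count how many of those relations a test image breaks.

```python
from itertools import combinations

def stable_pairs(training_images):
    """Keep pixel-index pairs whose intensity-difference sign is the
    same in every defect-free training image (images are flat lists)."""
    n = len(training_images[0])
    pairs = []
    for p, q in combinations(range(n), 2):
        signs = {(img[p] > img[q]) - (img[p] < img[q]) for img in training_images}
        if len(signs) == 1 and 0 not in signs:
            pairs.append((p, q, signs.pop()))
    return pairs

def count_violations(image, pairs):
    """Number of model pairs whose sign relation the input image breaks."""
    bad = 0
    for p, q, sign in pairs:
        s = (image[p] > image[q]) - (image[p] < image[q])
        if s != sign:
            bad += 1
    return bad
```

    A defect-free input breaks no relations, while a defect that locally darkens or brightens pixels flips the sign of the pairs it touches, which is what makes the violation count usable as a detection statistic.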

  20. Coarse-to-fine wavelet-based airport detection

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Wang, Shuigen; Pang, Zhaofeng; Zhao, Baojun

    2015-10-01

    Airport detection in optical remote sensing images has attracted great interest in applications such as military optical reconnaissance and traffic control. However, most popular techniques for airport detection in optical remote sensing images have three weaknesses: 1) due to the characteristics of optical images, detection results are often affected by imaging conditions, such as weather and imaging distortion; 2) optical images contain comprehensive information about targets, so it is difficult to extract robust features (e.g., intensity and textural information) to represent the airport area; and 3) the high resolution results in large data volumes, which limits real-time processing. Most previous works focus on solving only one of these problems and thus cannot balance performance and complexity. In this paper, we propose a novel coarse-to-fine airport detection framework that addresses all three issues using wavelet coefficients. The framework includes two stages: 1) an efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited to classify and coarsely decide airport candidate regions; and then 2) refined line segment detection is used to obtain the runway and landing field of the airport. Finally, airport recognition is achieved by applying the fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms existing algorithms in terms of detection accuracy and processing efficiency.
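    Wavelet-based texture features of the kind used in the coarse stage can be sketched with a one-level 2-D Haar transform over non-overlapping 2x2 blocks (a minimal sketch under assumed normalization, not the paper's feature set): the energy in each subband summarizes smooth content versus vertical, horizontal and diagonal detail.

```python
def haar_block_features(image):
    """One-level 2-D Haar transform over non-overlapping 2x2 blocks.
    Returns the total energy per subband (LL, LH, HL, HH), usable as
    a simple multi-scale texture feature vector."""
    energy = {"LL": 0.0, "LH": 0.0, "HL": 0.0, "HH": 0.0}
    for r in range(0, len(image) - 1, 2):
        for c in range(0, len(image[0]) - 1, 2):
            a, b = image[r][c], image[r][c + 1]
            d, e = image[r + 1][c], image[r + 1][c + 1]
            ll = (a + b + d + e) / 4.0   # block average
            lh = (a - b + d - e) / 4.0   # vertical-edge response
            hl = (a + b - d - e) / 4.0   # horizontal-edge response
            hh = (a - b - d + e) / 4.0   # diagonal response
            for k, v in (("LL", ll), ("LH", lh), ("HL", hl), ("HH", hh)):
                energy[k] += v * v
    return energy
```

    A block containing a vertical edge puts its detail energy entirely in LH, while a constant block has zero energy outside LL, which is why such subband energies discriminate textured runway areas from smooth background.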

  1. Diagnosis of Tempromandibular Disorders Using Local Binary Patterns.

    PubMed

    Haghnegahdar, A A; Kolahi, S; Khojastepour, L; Tajeripour, F

    2018-03-01

    Temporomandibular joint disorder (TMD) may be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose to use Local Binary Pattern (LBP) characteristics and histogram-oriented gradients on the recorded images as a diagnostic tool in TMD assessment. CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and 2 coronal cuts were prepared from each condyle, with images limited to the head of the mandibular condyle. To extract image features, we first used LBP and then the histogram of oriented gradients. To reduce dimensionality, Singular Value Decomposition (SVD) was applied to the feature-vector matrix of all images. For evaluation, we used K-nearest neighbor (K-NN), Support Vector Machine, Naïve Bayesian and Random Forest classifiers, with Receiver Operating Characteristic (ROC) analysis to evaluate the hypothesis. The K-nearest neighbor classifier achieves very good accuracy (0.9242) along with desirable sensitivity (0.9470) and specificity (0.9015), whereas the other classifiers have lower accuracy, sensitivity and specificity. We proposed a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN was the best classifier in our experiments for distinguishing patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages.
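    The K-NN decision stage itself is simple enough to sketch (a toy illustration on hypothetical 2-D features, not the study's SVD-reduced vectors): classify a sample by majority vote among its k nearest training samples.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k training samples closest to x
    (Euclidean distance)."""
    dists = sorted(
        (math.dist(x, t), label) for t, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

    With two well-separated training clusters labeled "normal" and "tmd", a query near either cluster is assigned the corresponding label; `math.dist` requires Python 3.8+.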

  2. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
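    The idea of tracking local minima across scales can be sketched in one dimension (a toy simplification under assumed smoothing radii; the paper's watershed algorithm operates on 3D imprint profiles at four scales): minima of the raw signal are kept only if a nearby minimum persists after smoothing.

```python
def smooth(signal, radius):
    """Moving-average smoothing with window clipping at the edges."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n - 1, i + radius)
        out.append(sum(signal[lo:hi + 1]) / (hi - lo + 1))
    return out

def local_minima(signal):
    """Indices of strict interior local minima."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] < signal[i - 1] and signal[i] < signal[i + 1]]

def stable_minima(signal, radii=(0, 1, 2)):
    """Keep raw minima that persist (within +/- 1 sample) at every
    smoothing scale, mimicking coarse-to-fine minima tracking."""
    per_scale = [set(local_minima(smooth(signal, r))) for r in radii]
    keep = []
    for m in per_scale[0]:
        if all(any(abs(m - s) <= 1 for s in scale) for scale in per_scale[1:]):
            keep.append(m)
    return sorted(keep)
```

    On a profile with two deep valleys (e.g. interstices between teeth), both valley positions survive all scales, while shallow noise minima are smoothed away.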

  3. No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.

    PubMed

    Liu, Tsung-Jung; Liu, Kuan-Hsien

    2018-03-01

    A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. The scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to have better performances than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They also can be used on the evaluation of images with authentic distortions. The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.

  4. Inhibitory effect of Piper betle Linn. leaf extract on protein glycation--quantification and characterization of the antiglycation components.

    PubMed

    Bhattacherjee, Abhishek; Chakraborti, Abhay Sankar

    2013-12-01

    Piper betle Linn. is a Pan-Asiatic plant having several beneficial properties. Protein glycation and advanced glycation end products (AGEs) formation are associated with different pathophysiological conditions, including diabetes mellitus. Our study aims to find the effect of methanolic extract of P. betle leaves on in vitro protein glycation in a bovine serum albumin (BSA)-glucose model. The extract inhibits glucose-induced glycation, thiol group modification and carbonyl formation in BSA in a dose-dependent manner. It inhibits different stages of protein glycation, as demonstrated by using glycation models: hemoglobin-delta-gluconolactone (for the early stage, Amadori product formation), BSA-methylglyoxal (for the middle stage, formation of oxidative cleavage products) and BSA-glucose (for the last stage, formation of AGEs) systems. Several phenolic compounds were isolated from the extract. Considering their relative amounts present in the extract, rutin appears to be the most active antiglycating agent. The extract of P. betle leaf may thus have a beneficial effect in preventing protein glycation and associated complications in pathological conditions.

  5. A Semisupervised Support Vector Machines Algorithm for BCI Systems

    PubMed Central

    Qin, Jianzhao; Li, Yuanqing; Sun, Wei

    2007-01-01

    As an emerging technology, brain-computer interfaces (BCIs) bring us new communication interfaces which translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of the brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the training time of the semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to online BCI systems. Additionally, many studies suggest that the common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficiently large labeled data set. In order to overcome this drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
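    The paper's semisupervised SVM is not reproduced here; as a hedged illustration of the general idea of exploiting unlabeled trials, the sketch below runs a self-training loop around a simple nearest-centroid classifier (a deliberate stand-in for the SVM, with hypothetical function names).

```python
import math

def centroid(points):
    dim = len(points[0])
    return tuple(sum(p[d] for p in points) / len(points) for d in range(dim))

def self_train(labeled, unlabeled, rounds=3):
    """Self-training sketch: fit class centroids on the labeled set,
    pseudo-label the unlabeled points by nearest centroid, refit
    using both, and repeat."""
    data = {c: list(pts) for c, pts in labeled.items()}
    for _ in range(rounds):
        cents = {c: centroid(pts) for c, pts in data.items()}
        data = {c: list(labeled[c]) for c in labeled}   # keep true labels fixed
        for x in unlabeled:
            best = min(cents, key=lambda c: math.dist(x, cents[c]))
            data[best].append(x)
    return {c: centroid(pts) for c, pts in data.items()}

def classify(x, centroids):
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))
```

    With one labeled trial per class and a handful of unlabeled trials, the refitted centroids move toward the true cluster centers, which is the benefit the semisupervised setting targets.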

  6. CARES: Completely Automated Robust Edge Snapper for carotid ultrasound IMT measurement on a multi-institutional database of 300 images: a two stage system combining an intensity-based feature approach with first order absolute moments

    NASA Astrophysics Data System (ADS)

    Molinari, Filippo; Acharya, Rajendra; Zeng, Guang; Suri, Jasjit S.

    2011-03-01

    The carotid intima-media thickness (IMT) is the most widely used marker for the progression of atherosclerosis and the onset of cardiovascular diseases. Computer-aided measurements improve accuracy, but usually require user interaction. In this paper we characterized a new and completely automated technique for carotid segmentation and IMT measurement based on the merits of two previously developed techniques. We used an integrated approach of intelligent image feature extraction and line fitting for automatically locating the carotid artery in the image frame, followed by wall interface extraction based on a Gaussian edge operator. We call our system CARES. We validated CARES on a multi-institutional database of 300 carotid ultrasound images. IMT measurement bias was 0.032 +/- 0.141 mm, better than other automated techniques and comparable to that of user-driven methodologies. Our novel approach processed 96% of the images, leading to a figure of merit of 95.7%. CARES ensures complete automation and high accuracy in IMT measurement; hence it could be a suitable clinical tool for processing large datasets in multicenter studies involving atherosclerosis.

  7. Detection and localization of damage using empirical mode decomposition and multilevel support vector machine

    NASA Astrophysics Data System (ADS)

    Dushyanth, N. D.; Suma, M. N.; Latte, Mrityanjaya V.

    2016-03-01

    Structural damage can incur significant maintenance costs and serious safety problems, so detecting damage at an early stage is of prime importance. The main contribution of this investigation is a generic, optimal methodology to improve the accuracy of positioning a flaw in a structure. This novel approach involves a two-step process. The first step extracts damage-sensitive features from the received signal; these extracted features, often termed damage indices, serve as indicators of whether damage is present. In particular, a multilevel SVM (support vector machine) plays a vital role in distinguishing faulty from healthy structures. Once a structure is identified as damaged, in the subsequent step the position of the damage is located using the Hilbert-Huang transform. The proposed algorithm has been evaluated in both simulation and experimental tests on a 6061 aluminum plate with dimensions 300 mm × 300 mm × 5 mm, yielding considerable improvement in the accuracy of estimating the position of the flaw.

  8. An Extended Spectral-Spatial Classification Approach for Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Akbari, D.

    2017-11-01

    In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, the Pavia University hyperspectral data is tested. Experimental results show that the proposed approach using GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
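    Of the unsupervised reductions listed, PCA is the simplest to sketch. The two-feature case below (a minimal sketch, not a full hyperspectral eigensolver) finds the leading eigenvalue and eigenvector of the 2x2 covariance matrix in closed form:

```python
import math

def pca_2d(points):
    """PCA for 2-feature data: leading eigenvalue and unit
    eigenvector of the 2x2 covariance matrix, in closed form."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[cxx, cxy], [cxy, cyy]]
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    lam = (tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2
    # Eigenvector for the leading eigenvalue
    if abs(cxy) > 1e-12:
        vx, vy = cxy, lam - cxx
    else:
        vx, vy = (1.0, 0.0) if cxx >= cyy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return lam, (vx / norm, vy / norm)
```

    For points lying exactly on the line y = x, the leading axis comes out as (1, 1)/sqrt(2) and the second eigenvalue is zero, i.e. one component captures all the variance.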

  9. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks.

    PubMed

    Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L

    2016-07-01

    Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features [area under the ROC curve [Formula: see text]].

  10. Single-trial laser-evoked potentials feature extraction for prediction of pain perception.

    PubMed

    Huang, Gan; Xiao, Ping; Hu, Li; Hung, Yeung Sam; Zhang, Zhiguo

    2013-01-01

    Pain is a highly subjective experience, and the availability of an objective assessment of pain perception would be of great importance for both basic and clinical applications. The objective of the present study is to develop a novel approach to extract pain-related features from single-trial laser-evoked potentials (LEPs) for classification of pain perception. The single-trial LEP feature extraction approach combines a spatial filtering using common spatial pattern (CSP) and a multiple linear regression (MLR). The CSP method is effective in separating laser-evoked EEG response from ongoing EEG activity, while MLR is capable of automatically estimating the amplitudes and latencies of N2 and P2 from single-trial LEP waveforms. The extracted single-trial LEP features are used in a Naïve Bayes classifier to classify different levels of pain perceived by the subjects. The experimental results show that the proposed single-trial LEP feature extraction approach can effectively extract pain-related LEP features for achieving high classification accuracy.
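    The MLR step fits template waveforms to each single trial; a minimal sketch (hypothetical, estimating amplitude and baseline only, whereas the paper also estimates latencies) solves the 2x2 normal equations for trial ≈ a * template + b:

```python
def fit_template(trial, template):
    """Ordinary least-squares fit of trial ~= a * template + b,
    returning the estimated amplitude a and baseline b."""
    n = len(trial)
    st = sum(template)
    stt = sum(t * t for t in template)
    sy = sum(trial)
    sty = sum(t * y for t, y in zip(template, trial))
    det = n * stt - st * st
    a = (n * sty - st * sy) / det
    b = (sy - a * st) / n
    return a, b
```

    On a noise-free trial built from the template the fit is exact; latency could then be found by scanning shifted templates and keeping the shift with the best fit.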

  11. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    NASA Astrophysics Data System (ADS)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an n-by-1 or 1-by-n array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
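    Under the assumption that the fixed matrix stacks moments of the time series and of its spectrum (the paper's exact layout may differ), the TFM-SVD idea can be sketched with a two-row matrix whose singular values follow in closed form from the eigenvalues of M M^T:

```python
import cmath, math

def moments(xs):
    """A few simple statistics of a sequence (an assumed choice)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return [mean, math.sqrt(var), min(xs), max(xs)]

def dft_magnitudes(xs):
    """Magnitude spectrum via the direct DFT (O(n^2), fine for a sketch)."""
    n = len(xs)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(xs))) for k in range(n)]

def singular_values_2xN(m):
    """Singular values of a 2-row matrix via eigenvalues of M M^T."""
    a = sum(v * v for v in m[0])
    d = sum(v * v for v in m[1])
    b = sum(u * v for u, v in zip(m[0], m[1]))
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return [math.sqrt(max((tr + disc) / 2, 0.0)),
            math.sqrt(max((tr - disc) / 2, 0.0))]

def tfm_svd_features(signal):
    """Moments of the time series and of its spectrum stacked into a
    2x4 matrix; its two singular values are the extracted features."""
    m = [moments(signal), moments(dft_magnitudes(signal))]
    return singular_values_2xN(m)
```

    Unlike applying SVD to the raw 1-by-n signal, this yields a fixed-length feature vector (here two singular values) regardless of the signal length.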

  12. Downstream processing of antibodies: single-stage versus multi-stage aqueous two-phase extraction.

    PubMed

    Rosa, P A J; Azevedo, A M; Ferreira, I F; Sommerfeld, S; Bäcker, W; Aires-Barros, M R

    2009-12-11

    Single-stage and multi-stage strategies have been evaluated and compared for the purification of human antibodies using liquid-liquid extraction in aqueous two-phase systems (ATPSs) composed of polyethylene glycol 3350 (PEG 3350), dextran, and triethylene glycol diglutaric acid (TEG-COOH). The performance of single-stage extraction systems was firstly investigated by studying the effect of pH, TEG-COOH concentration and volume ratio on the partitioning of the different components of a Chinese hamster ovary (CHO) cells supernatant. It was observed that lower pH values and high TEG-COOH concentrations favoured the selective extraction of human immunoglobulin G (IgG) to the PEG-rich phase. Higher recovery yields, purities and percentage of contaminants removal were always achieved in the presence of the ligand, TEG-COOH. The extraction of IgG could be enhanced using higher volume ratios, however with a significant decrease in both purity and percentage of contaminants removal. The best single-stage extraction conditions were achieved for an ATPS containing 1.3% (w/w) TEG-COOH with a volume ratio of 2.2, which allowed the recovery of 96% of IgG in the PEG-rich phase with a final IgG concentration of 0.21 mg/mL, a protein purity of 87% and a total purity of 43%. In order to enhance simultaneously both recovery yield and purity, a four stage cross-current operation was simulated and the corresponding liquid-liquid equilibrium (LLE) data determined. A predicted optimised scheme of a counter-current multi-stage aqueous two-phase extraction was hence described. IgG can be purified in the PEG-rich top phase with a final recovery yield of 95%, a final concentration of 1.04 mg/mL and a protein purity of 93%, if a PEG/dextran ATPS containing 1.3% (w/w) TEG-COOH, 5 stages and a volume ratio of 0.4 are used. Moreover, according to the LLE data of all CHO cells supernatant components, it was possible to observe that most of the cells supernatant contaminants can be removed during this extraction step, leading to a final total purity of about 85%.

  13. Alexnet Feature Extraction and Multi-Kernel Learning for Objectoriented Classification

    NASA Astrophysics Data System (ADS)

    Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.

    2018-04-01

    In view of the fact that deep convolutional neural networks have stronger ability for feature learning and feature expression, exploratory research was conducted on feature extraction and classification for high-resolution remote sensing images. Taking Google imagery with 0.3 m spatial resolution over the Ludian area of Yunnan Province as an example, image segmentation objects were taken as the basic units, and the pre-trained AlexNet deep convolutional neural network model was used for feature extraction. The spectral features, AlexNet features and GLCM texture features were then combined with multi-kernel learning and an SVM classifier, and finally the classification results were compared and analyzed. The results show that the deep convolutional neural network can extract more accurate remote sensing image features and significantly improve the overall classification accuracy, providing a reference for earthquake disaster investigation and remote sensing disaster evaluation.

  14. Data mining framework for identification of myocardial infarction stages in ultrasound: A hybrid feature extraction paradigm (PART 2).

    PubMed

    Sudarshan, Vidya K; Acharya, U Rajendra; Ng, E Y K; Tan, Ru San; Chou, Siaw Meng; Ghista, Dhanjoo N

    2016-04-01

    Early expansion of the infarcted zone after Acute Myocardial Infarction (AMI) has serious short- and long-term consequences and contributes to increased mortality. Thus, identification of the moderate and severe phases of AMI before they lead to other catastrophic post-MI medical conditions is most important for aggressive treatment and management. Advanced image processing techniques together with a robust classifier using two-dimensional (2D) echocardiograms may aid automated classification of the extent of infarcted myocardium. Therefore, this paper proposes algorithms, namely the Curvelet Transform (CT) and Local Configuration Pattern (LCP), for automated detection of normal, moderately infarcted and severely infarcted myocardium using 2D echocardiograms. The methodology extracts the LCP features from CT coefficients of echocardiograms. The obtained features are subjected to the Marginal Fisher Analysis (MFA) dimensionality reduction technique followed by a fuzzy entropy based ranking method. Different classifiers are used to differentiate the ranked features into three classes (normal, moderately infarcted and severely infarcted) based on the extent of damage to the myocardium. The developed algorithm achieved an accuracy of 98.99%, sensitivity of 98.48% and specificity of 100% for the Support Vector Machine (SVM) classifier using only six features. Furthermore, we have developed an integrated index called the Myocardial Infarction Risk Index (MIRI) to detect normal, moderately and severely infarcted myocardium using a single number. The proposed system may aid clinicians in faster identification and quantification of the extent of infarcted myocardium using 2D echocardiograms, and in identifying persons at risk of developing heart failure based on the extent of infarcted myocardium. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Performance evaluation for epileptic electroencephalogram (EEG) detection by using Neyman-Pearson criteria and a support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Chun-mei; Zhang, Chong-ming; Zou, Jun-zhong; Zhang, Jian

    2012-02-01

    The diagnosis of several neurological disorders is based on the detection of typical pathological patterns in electroencephalograms (EEGs). This is a time-consuming task requiring significant training and experience. A lot of effort has been devoted to developing automatic detection techniques which might help not only in accelerating this process but also in avoiding the disagreement among readers of the same record. In this work, Neyman-Pearson criteria and a support vector machine (SVM) are applied for detecting an epileptic EEG. Decision making is performed in two stages: feature extraction by computing the wavelet coefficients and the approximate entropy (ApEn) and detection by using Neyman-Pearson criteria and an SVM. Then the detection performance of the proposed method is evaluated. Simulation results demonstrate that the wavelet coefficients and the ApEn are features that represent the EEG signals well. By comparison with Neyman-Pearson criteria, an SVM applied on these features achieved higher detection accuracies.

  16. A deep learning framework for financial time series using stacked autoencoders and long-short term memory

    PubMed Central

    Bao, Wei; Rao, Yulei

    2017-01-01

    The application of deep learning approaches to finance has received a great deal of attention from both investors and researchers. This study presents a novel deep learning framework in which wavelet transforms (WT), stacked autoencoders (SAEs) and long-short term memory (LSTM) are combined for stock price forecasting. SAEs for hierarchically extracting deep features are introduced into stock price forecasting for the first time. The deep learning framework comprises three stages. First, the stock price time series is decomposed by WT to eliminate noise. Second, SAEs are applied to generate deep high-level features for predicting the stock price. Third, the high-level denoising features are fed into the LSTM to forecast the next day’s closing price. Six market indices and their corresponding index futures are chosen to examine the performance of the proposed model. Results show that the proposed model outperforms other similar models in both predictive accuracy and profitability performance. PMID:28708865
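
    The wavelet-denoising stage can be sketched with a minimal one-level Haar decomposition and soft thresholding of the detail coefficients. This is a stand-in only: the paper does not commit to this particular wavelet, level, or thresholding rule.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet denoising: decompose, soft-threshold the
    detail coefficients, reconstruct (hypothetical simplification of
    the framework's WT stage)."""
    x = np.asarray(x, dtype=float)
    assert len(x) % 2 == 0
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

# Noisy "price" series: smooth trend plus small high-frequency noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
trend = 100 + 10 * t
noisy = trend + 0.5 * rng.standard_normal(256)
denoised = haar_denoise(noisy, thresh=0.5)
```

    The denoised series, rather than the raw one, would then feed the SAE feature-learning stage.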

  17. A Hybrid Classification System for Heart Disease Diagnosis Based on the RFRS Method.

    PubMed

    Liu, Xiao; Wang, Xiaoli; Su, Qiang; Zhang, Mo; Zhu, Yanhong; Wang, Qiugen; Wang, Qian

    2017-01-01

    Heart disease is one of the most common diseases in the world. The objective of this study is to aid the diagnosis of heart disease using a hybrid classification system based on the ReliefF and Rough Set (RFRS) method. The proposed system contains two subsystems: the RFRS feature selection system and a classification system with an ensemble classifier. The first system includes three stages: (i) data discretization, (ii) feature extraction using the ReliefF algorithm, and (iii) feature reduction using the heuristic Rough Set reduction algorithm that we developed. In the second system, an ensemble classifier is proposed based on the C4.5 classifier. The Statlog (Heart) dataset, obtained from the UCI database, was used for experiments. A maximum classification accuracy of 92.59% was achieved according to a jackknife cross-validation scheme. The results demonstrate that the performance of the proposed system is superior to the performances of previously reported classification techniques.
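
    The ReliefF feature-extraction stage weights features by how well they separate each instance's nearest "hit" (same class) from its nearest "miss" (other class). The sketch below implements the simpler binary-class Relief; the paper's ReliefF generalizes this to multiple classes and k nearest neighbors.

```python
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    """Simplified binary-class Relief. For each sampled instance, a
    feature's weight grows if it separates the nearest miss and shrinks
    if it separates the nearest hit."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    span = X.max(axis=0) - X.min(axis=0)   # per-feature range for scaling
    w = np.zeros(p)
    for _ in range(n_iter):
        i = rng.integers(n)
        d = np.abs(X - X[i]).sum(axis=1)   # L1 distance to all points
        d[i] = np.inf                      # exclude self
        same = (y == y[i])
        hit = np.argmin(np.where(same, d, np.inf))
        miss = np.argmin(np.where(~same, d, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter

# Feature 0 separates the classes; feature 1 is pure noise.
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100)])
w = relief_weights(X, y)
```

    Features whose weights fall below a threshold would be dropped before the Rough Set reduction step.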

  18. Computerized breast cancer analysis system using three stage semi-supervised learning method.

    PubMed

    Sun, Wenqing; Tseng, Tzu-Liang Bill; Zhang, Jianying; Qian, Wei

    2016-10-01

    A large amount of labeled medical image data is usually required to train a well-performing computer-aided detection (CAD) system. But data labeling is time consuming, and potential ethical and logistical problems may also present complications. As a result, incorporating unlabeled data into a CAD system can be a feasible way to combat these obstacles. In this study we developed a three-stage semi-supervised learning (SSL) scheme that combines a small amount of labeled data and a larger amount of unlabeled data. The scheme modified our existing CAD system with the following three stages: data weighing, feature selection, and a newly proposed dividing co-training data labeling algorithm. Global density asymmetry features were added to the feature pool to reduce the false positive rate. Area under the curve (AUC) and accuracy were computed using 10-fold cross validation to evaluate the performance of our CAD system. The image dataset includes mammograms from 400 women who underwent routine screening examinations, and each pair contains either two cranio-caudal (CC) or two mediolateral-oblique (MLO) view mammograms from the right and the left breasts. From these mammograms 512 regions were extracted and used in this study; among them 90 regions were treated as labeled while the rest were treated as unlabeled. Using our proposed scheme, the highest AUC observed in our research was 0.841, obtained with the 90 labeled data and all the unlabeled data; this was 7.4% higher than using labeled data only. With an increasing amount of labeled data, the AUC difference between using mixed data and using labeled data only peaked when the amount of labeled data was around 60. This study demonstrated that our proposed three-stage semi-supervised learning can improve CAD performance by incorporating unlabeled data. Using unlabeled data is promising in computerized cancer research and may have a significant impact on future CAD system applications. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
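
    The general mechanism of folding unlabeled data into training can be sketched with a generic self-training loop: fit a classifier on the labeled pool, pseudo-label the most confident unlabeled points, and absorb them. This is deliberately much simpler than the paper's dividing co-training algorithm; the nearest-centroid classifier and confidence margin below are illustrative assumptions.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, rounds=5, top_k=20):
    """Generic self-training (a simplified stand-in for the paper's
    dividing co-training labeling algorithm)."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    X_un = X_unlab.copy()
    for _ in range(rounds):
        if len(X_un) == 0:
            break
        # nearest-centroid classifier on the current labeled pool
        cents = np.array([X_lab[y_lab == c].mean(axis=0)
                          for c in np.unique(y_lab)])
        d = np.linalg.norm(X_un[:, None, :] - cents[None, :, :], axis=2)
        pred = d.argmin(axis=1)
        # confidence = margin between the two closest centroids
        srt = np.sort(d, axis=1)
        margin = srt[:, 1] - srt[:, 0]
        take = np.argsort(-margin)[:top_k]          # most confident first
        X_lab = np.vstack([X_lab, X_un[take]])
        y_lab = np.concatenate([y_lab, pred[take]])
        X_un = np.delete(X_un, take, axis=0)
    return X_lab, y_lab

# Two well-separated blobs; only 4 points start out labeled.
rng = np.random.default_rng(3)
a = rng.standard_normal((50, 2)) + [0, 0]
b = rng.standard_normal((50, 2)) + [6, 6]
X_lab = np.array([[0., 0.], [0.5, 0.], [6., 6.], [6.5, 6.]])
y_lab = np.array([0, 0, 1, 1])
X_unlab = np.vstack([a, b])
Xf, yf = self_train(X_lab, y_lab, X_unlab)
```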

  19. Design of the ILC RTML Extraction Lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seletskiy, S.; Tenenbaum, P.; Walz, D.

    2011-10-17

    The ILC [1] Damping Ring to the Main Linac beamline (RTML) contains three extraction lines (EL). Each EL can be used both for emergency abort dumping of the beam and for continual tune-up train-by-train extraction. Two of the extraction lines are located downstream of the first and second stages of the RTML bunch compressor, and must accept both compressed and uncompressed beam with energy spreads of 2.5% and 0.15%, respectively. In this paper we report on an optics design that minimizes the length of the extraction lines while offsetting the beam dumps from the main line by the distance required for acceptable radiation levels in the service tunnel. The proposed extraction lines can accommodate beams with different energy spreads while providing a beam size acceptable for the aluminum dump window. The RTML incorporates three extraction lines, which can be used either for an emergency beam abort or for train-by-train extraction. The first EL is located downstream of the Damping Ring extraction arc. The other two extraction lines are located downstream of each stage of the two-stage bunch compressor. The first extraction line (EL1) receives 5 GeV beam with a 0.15% energy spread. The extraction line located downstream of the first stage of the bunch compressor (ELBC1) receives both compressed and uncompressed beam, and therefore must accept beam with both 5 and 4.88 GeV energy, and 0.15% and 2.5% energy spread, respectively. The extraction line located after the second stage of the bunch compressor (ELBC2) receives 15 GeV beam with either 0.15% or 1.8% energy spread. Each of the three extraction lines is equipped with a 220 kW aluminum ball dump, which corresponds to the power of a continuously dumped 5 GeV beam; i.e., beam trains must be delivered to the ELBC2 dump at a reduced repetition rate.

  20. Computer-aided detection of basal cell carcinoma through blood content analysis in dermoscopy images

    NASA Astrophysics Data System (ADS)

    Kharazmi, Pegah; Kalia, Sunil; Lui, Harvey; Wang, Z. Jane; Lee, Tim K.

    2018-02-01

    Basal cell carcinoma (BCC) is the most common type of skin cancer, which is highly damaging to the skin at its advanced stages and imposes substantial costs on the healthcare system. However, most types of BCC are easily curable if detected at an early stage. Due to limited access to dermatologists and expert physicians, non-invasive computer-aided diagnosis is a viable option for skin cancer screening. A clinical biomarker of cancerous tumors is increased vascularization and excess blood flow. In this paper, we present a computer-aided technique to differentiate cancerous skin tumors from benign lesions based on vascular characteristics of the lesions. The dermoscopy image of the lesion is first decomposed using independent component analysis of the RGB channels to derive melanin and hemoglobin maps. A novel set of clinically inspired features and ratiometric measurements is then extracted from each map to characterize the vascular properties and blood content of the lesion. The feature set is then fed into a random forest classifier. Over a dataset of 664 skin lesions, the proposed method achieved an area under the ROC curve of 0.832 in a 10-fold cross validation for differentiating basal cell carcinomas from benign lesions.

  1. An Intelligent Harmonic Synthesis Technique for Air-Gap Eccentricity Fault Diagnosis in Induction Motors

    NASA Astrophysics Data System (ADS)

    Li, De Z.; Wang, Wilson; Ismail, Fathy

    2017-11-01

    Induction motors (IMs) are commonly used in various industrial applications. To improve energy consumption efficiency, a reliable IM health condition monitoring system is very useful for detecting IM faults at the earliest stage, thereby preventing operational degradation and malfunction of IMs. An intelligent harmonic synthesis technique is proposed in this work to conduct incipient air-gap eccentricity fault detection in IMs. The fault harmonic series are synthesized to enhance fault features. Fault-related local spectra are processed to derive fault indicators for IM air-gap eccentricity diagnosis. The effectiveness of the proposed harmonic synthesis technique is examined experimentally on IMs with static and dynamic air-gap eccentricity states under different load conditions. Test results show that the developed harmonic synthesis technique can extract fault features effectively for initial IM air-gap eccentricity fault detection.

  2. Classification and pose estimation of objects using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.

  3. Finger vein recognition based on the hyperinformation feature

    NASA Astrophysics Data System (ADS)

    Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Yang, Lu

    2014-01-01

    The finger vein is a promising biometric pattern for personal identification due to its advantages over other existing biometrics. In finger vein recognition, feature extraction is a critical step, and many feature extraction methods have been proposed to extract the gray, texture, or shape of the finger vein. We treat these as low-level features and present a high-level feature extraction framework. Under this framework, a base attribute is first defined to represent the characteristics of a certain subcategory of a subject. Then, for an image, the correlation coefficient is used to construct the high-level feature, which reflects the correlation between this image and all base attributes. Since the high-level feature can reveal characteristics of more subcategories and contain more discriminative information, we call it the hyperinformation feature (HIF). Compared with low-level features, which only represent the characteristics of one subcategory, HIF is more powerful and robust. In order to demonstrate the potential of the proposed framework, we provide a case study to extract HIF. We conduct comprehensive experiments on our databases to show the generality of the proposed framework and the efficiency of HIF. Experimental results show that HIF significantly outperforms the low-level features.
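
    The HIF construction described above can be sketched in a few lines, under the simplifying assumption that each base attribute is a fixed feature vector for one subcategory (how base attributes are actually built is the paper's case study):

```python
import numpy as np

rng = np.random.default_rng(4)
n_subcats, dim = 8, 32
# one assumed base-attribute vector per subcategory
base_attrs = rng.standard_normal((n_subcats, dim))

def hif(low_level, base_attrs):
    """High-level (hyperinformation) feature: Pearson correlation of a
    low-level feature vector with every base attribute."""
    return np.array([np.corrcoef(low_level, b)[0, 1] for b in base_attrs])

# An image drawn near base attribute 3 correlates most with it.
img = base_attrs[3] + 0.3 * rng.standard_normal(dim)
h = hif(img, base_attrs)
```

    The resulting 8-dimensional correlation vector, not the raw 32-dimensional feature, is what a matcher would compare.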

  4. Accelerating Biomedical Signal Processing Using GPU: A Case Study of Snore Sound Feature Extraction.

    PubMed

    Guo, Jian; Qian, Kun; Zhang, Gongxuan; Xu, Huijie; Schuller, Björn

    2017-12-01

    The advent of 'Big Data' and 'Deep Learning' offers both a great challenge and a huge opportunity for personalised health-care. In machine learning-based biomedical data analysis, feature extraction is a key step for 'feeding' the subsequent classifiers. With increasing volumes of biomedical data, extracting features from these 'big' data is an intensive and time-consuming task. In this case study, we employ a Graphics Processing Unit (GPU) via Python to extract features from a large corpus of snore sound data. Those features can subsequently be imported into many well-known deep learning training frameworks without any format processing. The snore sound data were collected from several hospitals (20 subjects, with 770-990 MB per subject, 17.20 GB in total). Experimental results show that our GPU-based processing significantly speeds up the feature extraction phase, by up to seven times, compared to the previous CPU system.

  5. Prognostic value of cancer stem cell marker CD133 expression in pancreatic ductal adenocarcinoma (PDAC): a systematic review and meta-analysis.

    PubMed

    Li, Xiaoping; Zhao, Haojie; Gu, Jianchun; Zheng, Leizhen

    2015-01-01

    CD133 is one of the most commonly used markers of pancreatic cancer stem cells (CSCs), which are characterized by their capacity for self-renewal and tumorigenicity. Although the expression of CD133 has been reported to correlate with poor prognosis of PDAC in most of the literature, some controversy remains. In this study, we aimed to investigate the correlation between CD133 expression and prognosis and clinicopathological features in PDAC. A search of the Medline, EMBASE and Chinese CNKI (China National Knowledge Infrastructure) databases (up to 1 March 2015) was performed using the following keywords: pancreatic cancer, CD133, AC133, prominin-1, etc. Data from eligible studies were extracted and included in a meta-analysis using a random effects model. Outcomes included overall survival and various clinicopathological features. We performed a final analysis of 723 patients from 11 evaluable studies for prognostic value and 687 patients from 12 evaluable studies for clinicopathological features. Our study shows that the pooled hazard ratio (HR) of CD133 overexpression for overall survival in PDAC was 0.58 (95% confidence interval (CI): 0.49-0.67) by univariate analysis and 0.73 (95% CI: 0.52-1.03) by multivariate analysis. With respect to clinicopathological features, CD133 overexpression by the immunohistochemistry (IHC) method was closely correlated with clinical TNM stage (TNM stage III+IV, OR=0.32, 95% CI: 0.19-0.54), tumor differentiation (poor differentiation, OR=0.56, 95% CI: 0.37-0.83), and lymph node metastasis (N1, OR=3.15, 95% CI: 1.56-6.36) in patients with PDAC. Our meta-analysis results suggest that CD133 is an efficient prognostic factor in PDAC. Overexpression of CD133 was significantly associated with clinical TNM stage, tumor differentiation and lymph node metastasis.

  6. Human red blood cell recognition enhancement with three-dimensional morphological features obtained by digital holographic imaging

    NASA Astrophysics Data System (ADS)

    Jaferzadeh, Keyvan; Moon, Inkyu

    2016-12-01

    The classification of erythrocytes plays an important role in the field of hematological diagnosis, specifically blood disorders. Since the biconcave shape of the red blood cell (RBC) is altered during the different stages of hematological disorders, we believe that three-dimensional (3-D) morphological features of erythrocytes provide better classification results than conventional two-dimensional (2-D) features. Therefore, we introduce a set of 3-D features related to the morphological and chemical properties of the RBC profile and evaluate the discrimination power of these features against 2-D features with a neural network classifier. The 3-D features include erythrocyte surface area, volume, average cell thickness, sphericity index, sphericity coefficient and functionality factor, MCH and MCHSD, and two newly introduced features extracted from the ring section of the RBC at the single-cell level. In contrast, the 2-D features are RBC projected surface area, perimeter, radius, elongation, and projected surface area to perimeter ratio. All features are obtained from images visualized by off-axis digital holographic microscopy with a numerical reconstruction algorithm, and four categories of RBC are considered: biconcave (doughnut shape), flat-disc, stomatocyte, and echinospherocyte. Our experimental results demonstrate that the 3-D features can be more useful in RBC classification than the 2-D features. Finally, we choose the best feature set from the 2-D and 3-D features by a sequential forward feature selection technique, which yields better discrimination results. We believe that the final feature set evaluated with a neural network classification strategy can improve RBC classification accuracy.
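
    One of the 3-D shape features above, sphericity, has a classical closed form from surface area and volume; the sketch below uses that standard definition (the paper's exact "sphericity index" may be defined somewhat differently):

```python
import numpy as np

def sphericity(surface_area, volume):
    """Classical sphericity: surface area of the equal-volume sphere
    divided by the cell's actual surface area. Equals 1 for a sphere,
    less for flattened shapes."""
    return np.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area

# A sphere is perfectly spherical; a flattened disc is not.
r = 3.0
sphere = sphericity(4 * np.pi * r**2, 4 / 3 * np.pi * r**3)
# thin cylinder (disc) of radius 4, height 0.5
R, hgt = 4.0, 0.5
disc = sphericity(2 * np.pi * R * hgt + 2 * np.pi * R**2,
                  np.pi * R**2 * hgt)
```

    For RBCs, surface area and volume themselves come from the reconstructed holographic thickness map, so the feature is computed per cell.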

  7. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor’s pixel frequency and immediate use of each input pixel for the feature-construction process avoids dependence on memory-intensive conventional strategies such as integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving the expected high accuracy, comparable to previous work.

  8. Acousto-Optic Technology for Topographic Feature Extraction and Image Analysis.

    DTIC Science & Technology

    1981-03-01

    This report contains all findings of the acousto-optic technology study for feature extraction conducted by Deft Laboratories Inc. for the U.S. Army...topographic feature extraction and image analysis using acousto-optic (A-O) technology. A conclusion of this study was that A-O devices are potentially

  9. Prediction of occult invasive disease in ductal carcinoma in situ using computer-extracted mammographic features

    NASA Astrophysics Data System (ADS)

    Shi, Bibo; Grimm, Lars J.; Mazurowski, Maciej A.; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.

    2017-03-01

    Predicting the risk of occult invasive disease in ductal carcinoma in situ (DCIS) is an important task to help address the overdiagnosis and overtreatment problems associated with breast cancer. In this work, we investigated the feasibility of using computer-extracted mammographic features to predict occult invasive disease in patients with biopsy-proven DCIS. We proposed a computer-vision-based approach to extract mammographic features from magnification views of full field digital mammography (FFDM) for patients with DCIS. After an expert breast radiologist provided a region of interest (ROI) mask for the DCIS lesion, the proposed approach segments individual microcalcifications (MCs), detects the boundary of the MC cluster (MCC), and extracts 113 mammographic features from the MCs and MCC within the ROI. In this study, we extracted mammographic features from 99 patients with DCIS (74 pure DCIS; 25 DCIS plus invasive disease). The predictive power of the mammographic features was demonstrated through binary classification between pure DCIS and DCIS with invasive disease using linear discriminant analysis (LDA). Before classification, the minimum Redundancy Maximum Relevance (mRMR) feature selection method was first applied to choose subsets of useful features. Generalization performance was assessed using Leave-One-Out Cross-Validation and Receiver Operating Characteristic (ROC) curve analysis. Using the computer-extracted mammographic features, the proposed model was able to distinguish DCIS with invasive disease from pure DCIS, with an average classification performance of AUC = 0.61 +/- 0.05. Overall, the proposed computer-extracted mammographic features are promising for predicting occult invasive disease in DCIS.
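
    The mRMR selection step greedily picks features that are relevant to the label but not redundant with features already chosen. The original criterion scores with mutual information; the loose sketch below substitutes absolute Pearson correlation for both terms, which keeps the greedy structure visible in a few lines:

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy mRMR-style selection (correlation stand-in for the
    mutual-information scores of true mRMR)."""
    p = X.shape[1]
    # relevance: |corr(feature, label)|
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(p)])
    selected = [int(rel.argmax())]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(p):
            if j in selected:
                continue
            # redundancy: mean |corr| with already-selected features
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            score = rel[j] - red
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Feature 0 predicts y; feature 1 duplicates feature 0 (redundant);
# feature 2 is weakly relevant but independent.
rng = np.random.default_rng(5)
y = rng.integers(0, 2, 200).astype(float)
f0 = y + 0.2 * rng.standard_normal(200)
f1 = f0 + 0.05 * rng.standard_normal(200)
f2 = 0.4 * y + rng.standard_normal(200)
X = np.column_stack([f0, f1, f2])
picked = mrmr_select(X, y, k=2)
```

    The redundancy penalty is what steers the second pick away from a near-duplicate of the first, which plain relevance ranking would not do.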

  10. Region of interest extraction based on multiscale visual saliency analysis for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan

    2015-01-01

    Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior-knowledge-based and depend on classification, segmentation, and a global searching solution, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: the visual attention mechanism is used to extract the intensity feature using a difference of Gaussian template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method is proposed that accounts for the different contributions of each feature map, calculating the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model, compared with those of other models, show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
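
    A difference-of-Gaussians (DoG) template is a common way to implement the center-surround intensity channel in saliency models of this kind; the minimal sketch below shows the operator itself (the MVS model's exact kernel sizes and scales are assumptions here):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, k):
    """Naive 'same' convolution with zero padding (the kernel is
    symmetric, so convolution equals correlation here)."""
    r = k.shape[0] // 2
    pad = np.pad(img, r)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i + 2 * r + 1, j:j + 2 * r + 1] * k).sum()
    return out

def dog_saliency(img, s1=1.0, s2=3.0, radius=6):
    """Center-surround response: fine Gaussian blur minus coarse blur."""
    return convolve2d(img, gaussian_kernel(s1, radius)) - \
           convolve2d(img, gaussian_kernel(s2, radius))

# A bright blob on a dark background should peak near the blob center.
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0
sal = dog_saliency(img)
peak = np.unravel_index(np.abs(sal).argmax(), sal.shape)
```

    In a full pipeline, this map would be normalized and combined with the orientation and color channels by the feature-competition weighting.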

  11. PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.

  12. Biomedical image classification based on a cascade of an SVM with a reject option and subspace analysis.

    PubMed

    Lin, Dongyun; Sun, Lei; Toh, Kar-Ann; Zhang, Jing Bo; Lin, Zhiping

    2018-05-01

    In practice, automated biomedical image classification must confront challenges such as high levels of noise, image blur, illumination variation and complicated geometric correspondence among various categories of biomedical patterns. To handle these challenges, we propose a cascade method consisting of two stages for biomedical image classification. At stage 1, we propose a confidence-score-based classification rule with a reject option for a preliminary decision using the support vector machine (SVM). The testing images going through stage 1 are separated into two groups based on their confidence scores. Testing images with sufficiently high confidence scores are classified at stage 1, while the others, with low confidence scores, are rejected and fed to stage 2. At stage 2, the rejected images from stage 1 are first processed by a subspace analysis technique called eigenfeature regularization and extraction (ERE), and then classified by another SVM trained in the transformed subspace learned by ERE. At both stages, images are represented using two types of local features, SIFT and SURF, respectively. They are encoded using various bag-of-words (BoW) models to handle biomedical patterns with and without geometric correspondence, respectively. Extensive experiments are conducted to evaluate the proposed method on three benchmark real-world biomedical image datasets. The proposed method significantly outperforms several competing state-of-the-art methods in terms of classification accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
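
    The routing logic of such a reject-option cascade is simple to state in code. In the sketch below both stage classifiers are abstracted away (the paper's stage 1 is an SVM and stage 2 an SVM in the ERE subspace); only the accept/reject split on the confidence score is shown:

```python
import numpy as np

def cascade_predict(scores_stage1, labels_stage1, threshold, stage2_fn):
    """Two-stage cascade with a reject option: samples whose stage-1
    confidence clears the threshold keep the stage-1 label; the rest
    are rejected and routed to the second-stage classifier."""
    accept = scores_stage1 >= threshold
    out = np.empty(len(labels_stage1), dtype=int)
    out[accept] = labels_stage1[accept]
    out[~accept] = stage2_fn(np.where(~accept)[0])  # re-classify rejects
    return out, accept

# Toy run: 6 samples; the hypothetical stage 2 flips the stage-1 guess.
scores = np.array([0.9, 0.4, 0.8, 0.2, 0.95, 0.5])
labels = np.array([1, 0, 1, 1, 0, 0])
pred, accepted = cascade_predict(scores, labels, threshold=0.6,
                                 stage2_fn=lambda idx: 1 - labels[idx])
```

    The threshold trades stage-1 coverage against the cost of invoking the heavier second stage.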

  13. Engagement Assessment Using EEG Signals

    NASA Technical Reports Server (NTRS)

    Li, Feng; Li, Jiang; McKenzie, Frederic; Zhang, Guangfan; Wang, Wei; Pepe, Aaron; Xu, Roger; Schnell, Thomas; Anderson, Nick; Heitkamp, Dean

    2012-01-01

    In this paper, we present methods to analyze and improve an EEG-based engagement assessment approach, consisting of data preprocessing, feature extraction and engagement state classification. During data preprocessing, spikes, baseline drift and saturation caused by recording devices are identified and eliminated from the EEG signals, and a wavelet-based method is utilized to remove ocular and muscular artifacts in the EEG recordings. In feature extraction, power spectral densities in 1 Hz bins are calculated as features, and these features are analyzed using the Fisher score and the one-way ANOVA method. In the classification step, a committee classifier is trained on the extracted features to assess engagement status. Finally, experimental results showed significant differences in the extracted features among subjects; we implemented a feature normalization procedure to mitigate these differences and significantly improved engagement assessment performance.
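
    The Fisher score used in the feature-analysis step has a standard closed form: between-class scatter of the per-class means over the within-class variance, computed per feature. A NumPy sketch:

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: weighted between-class variance of the
    class means divided by the weighted within-class variance. Higher
    means the feature separates the classes better."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / den

# Feature 0 shifts with the class (e.g. engaged vs disengaged);
# feature 1 is identically distributed in both classes.
rng = np.random.default_rng(6)
y = np.repeat([0, 1], 100)
X = np.column_stack([y * 2.0 + rng.standard_normal(200),
                     rng.standard_normal(200)])
scores = fisher_score(X, y)
```

    Features with near-zero Fisher scores carry little class information and are candidates for removal before training the committee classifier.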

  14. The optional selection of micro-motion feature based on Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ren, Hongmei; Xiao, Zhi-he; Sheng, Jing

    2017-11-01

    Targets exhibit multiple forms of micro-motion, and different micro-motion forms are easily confounded after modulation, which makes feature extraction and recognition difficult. Aiming at feature extraction for cone-shaped objects with different micro-motion forms, this paper proposes an optimal micro-motion feature selection method based on the support vector machine (SVM). After computing the time-frequency distribution of the radar echoes, the time-frequency spectra of objects with different micro-motion forms are compared, and features are extracted from the differences between the instantaneous frequency variations of the different micro-motions. The features are then ranked using the SVM-based method, and the best features are selected. Finally, the results show that the method proposed in this paper is feasible under test conditions with a certain signal-to-noise ratio (SNR).

  15. Personal authentication using hand vein triangulation and knuckle shape.

    PubMed

    Kumar, Ajay; Prathyusha, K Venkata

    2009-09-01

    This paper presents a new approach to authenticating individuals using triangulation of hand vein images and simultaneous extraction of knuckle shape information. The proposed method is fully automated and employs palm dorsal hand vein images acquired from low-cost, near-infrared, contactless imaging. The knuckle tips are used as key points for image normalization and extraction of the region of interest. The matching scores are generated in two parallel stages: (i) a hierarchical matching score from the four topologies of triangulation in the binarized vein structures and (ii) a score from the geometrical features consisting of knuckle point perimeter distances in the acquired images. The weighted score-level combination of these two matching scores is used to authenticate individuals. The experimental results from the proposed system using contactless palm dorsal hand vein images are promising (equal error rate of 1.14%) and suggest a more user-friendly alternative for user identification.

  16. A Review of Feature Extraction Software for Microarray Gene Expression Data

    PubMed Central

    Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini

    2014-01-01

    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315
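
    Of the methods reviewed, PCA is the most common and has a compact implementation via the singular value decomposition of the mean-centered expression matrix. A minimal sketch (rows are samples or arrays, columns are genes):

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the mean-centered data matrix. Returns the
    projected data and the fraction of total variance retained."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T              # scores of top components
    var_kept = (s[:n_components] ** 2).sum() / (s ** 2).sum()
    return Z, var_kept

# 2000 simulated "genes" driven by 2 latent factors plus small noise.
rng = np.random.default_rng(7)
factors = rng.standard_normal((30, 2))        # 30 samples
loadings = rng.standard_normal((2, 2000))
X = factors @ loadings + 0.1 * rng.standard_normal((30, 2000))
Z, var_kept = pca(X, n_components=2)
```

    Downstream analysis then works on the low-dimensional scores Z instead of the full gene matrix, which is exactly the reduced-representation idea the review describes.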

  17. Fatigue crack detection by nonlinear spectral correlation with a wideband input

    NASA Astrophysics Data System (ADS)

    Liu, Peipei; Sohn, Hoon

    2017-04-01

    Due to crack-induced nonlinearity, ultrasonic waves can distort, create accompanying harmonics, mix waves of different frequencies, and, under resonance conditions, change resonance frequencies as a function of driving amplitude. All these nonlinear ultrasonic features have been widely studied and proven capable of detecting a fatigue crack at its very early stage. However, in a noisy environment, the nonlinear features might be drowned in the noise, making it difficult to extract them using a conventional spectral density function. In this study, nonlinear spectral correlation is defined as a new nonlinear feature, which considers not only nonlinear modulations in ultrasonic waves but also the spectral correlation between the nonlinear modulations. The proposed nonlinear feature has two advantages: (1) stationary noise in the ultrasonic waves has little effect on nonlinear spectral correlation; and (2) the contrast of nonlinear spectral correlation between damaged and intact conditions can be enhanced simply by using a wideband input. To validate the proposed nonlinear feature, micro fatigue cracks are introduced into aluminum plates by repeated tensile loading, and the experiment is conducted using surface-mounted piezoelectric transducers for ultrasonic wave generation and measurement. The experimental results confirm that nonlinear spectral correlation can successfully detect fatigue cracks with higher sensitivity than the classical nonlinear coefficient.

  18. Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals

    PubMed Central

    Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu

    2012-01-01

Bearings are not only the most important elements but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. In this context, fault feature extraction (FFE) from bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features, including time, frequency, and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction; it uses the least squares method to obtain the best projection direction rather than computing the density matrix of features, so it also has an advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally on vibration signal data acquired from bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structural information about different bearing faults and severities, and they demonstrate that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017
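The "original features" fed into such a subspace learner are standard signal statistics. The sketch below (assumed names, synthetic 50 Hz test signal; not the authors' code) computes a few typical time- and frequency-domain features from a vibration record:

```python
import numpy as np

def vibration_features(x, fs):
    """Common time- and frequency-domain features used as a raw
    feature pool before subspace learning such as spectral regression."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))                      # time domain
    peak = np.max(np.abs(x))
    kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2  # kurtosis
    spec = np.abs(np.fft.rfft(x))                       # frequency domain
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)      # spectral centroid
    return {"rms": rms, "crest": peak / rms,
            "kurtosis": kurt, "centroid_hz": centroid}

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)      # stand-in for a bearing vibration signal
f = vibration_features(x, fs)
```

In practice dozens of such statistics (per band, per envelope) form the high-dimensional feature vector that SR then compresses.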

  19. Rupture process of the 2009 L'Aquila, central Italy, earthquake, from the separate and joint inversion of Strong Motion, GPS and DInSAR data.

    NASA Astrophysics Data System (ADS)

    Cirella, A.; Piatanesi, A.; Tinti, E.; Chini, M.; Cocco, M.

    2012-04-01

In this study, we investigate the rupture history of the April 6th 2009 (Mw 6.1) L'Aquila normal faulting earthquake by using a nonlinear inversion of strong motion, GPS and DInSAR data. We use a two-stage non-linear inversion technique. During the first stage, an algorithm based on heat-bath simulated annealing generates an ensemble of models that efficiently samples the good data-fitting regions of parameter space. In the second stage, the algorithm performs a statistical analysis of the ensemble, providing the best-fitting model, the average model, and the associated standard deviation and coefficient of variation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. The application to the 2009 L'Aquila main-shock shows that both the separate and joint inversion solutions reveal a complex rupture process and a heterogeneous slip distribution. Slip is concentrated in two main asperities: a smaller shallow patch of slip located up-dip from the hypocenter and a second deeper and larger asperity located southeastward along the strike direction. The key feature of the source process emerging from our inverted models concerns the rupture history, which is characterized by two distinct stages. The first stage begins with rupture initiation and a modest moment release lasting nearly 0.9 seconds, followed by a sharp increase in slip velocity and rupture speed located 2 km up-dip from the nucleation. During this first stage the rupture front propagated up-dip from the hypocenter at a relatively high (˜4.0 km/s), but still sub-shear, rupture velocity. The second stage starts nearly 2 seconds after nucleation and is characterized by along-strike rupture propagation. The largest and deeper asperity fails during this stage of the rupture process. 
The rupture velocity is larger in the up-dip than in the along-strike direction. The up-dip and along-strike rupture propagation are separated in time and associated with a Mode II and a Mode III crack, respectively. Our results show that the 2009 L'Aquila earthquake featured a very complex rupture, with strong spatial and temporal heterogeneities suggesting a strong frictional and/or structural control of the rupture process.
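The second-stage ensemble statistics described above can be sketched as follows. This is a simplified, unweighted illustration with hypothetical names, not the authors' implementation (which operates on fit-weighted ensembles of rupture models):

```python
import numpy as np

def ensemble_summary(models, misfits):
    """Summarize an ensemble of sampled model parameter vectors.

    models:  (n_models, n_params) array of sampled parameter vectors.
    misfits: (n_models,) data misfit per model (lower is better).
    Returns best-fitting model, ensemble mean, standard deviation,
    and coefficient of variation per parameter.
    """
    models = np.asarray(models, dtype=float)
    misfits = np.asarray(misfits, dtype=float)
    best = models[np.argmin(misfits)]     # single best-fitting model
    mean = models.mean(axis=0)            # average model
    std = models.std(axis=0)
    cv = std / np.abs(mean)               # variability of each parameter
    return best, mean, std, cv

# toy ensemble: 3 models, 2 parameters each
best, mean, std, cv = ensemble_summary(
    [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [0.2, 0.1, 0.3])
```

Parameters with a low coefficient of variation across the ensemble are the "stable features" that are well constrained by the data.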

  20. Investigation of different ethylenediamine-N,N'-disuccinic acid-enhanced washing configurations for remediation of a Cu-contaminated soil: process kinetics and efficiency comparison between single-stage and multi-stage configurations.

    PubMed

    Ferraro, Alberto; Fabbricino, Massimiliano; van Hullebusch, Eric D; Esposito, Giovanni

    2017-09-01

A comparison of Cu extraction yields for three different ethylenediamine-N,N'-disuccinic acid (EDDS)-enhanced washing configurations was performed on a Cu-contaminated soil. Batch experiments were used to simulate a single-stage continuous stirred tank reactor (CSTR) and a multi-stage (side feeding and counter-current) reactor. Single-stage CSTR conditions were simulated for various EDDS:(Cu + Cd + Pb + Co + Ni + Zn) molar ratios (EDDS:M ratio, from 1 to 30) and liquid-to-soil (L/S) ratios (from 15 to 45). The highest Cu extraction yield (≃56%) was achieved with EDDS:M = 30. In contrast, the Cu extraction yield decreased with increasing L/S ratio, the highest extracted Cu (≃48%) being obtained at L/S = 15. The side feeding configuration was tested under four experimental conditions with different fractionation modes of the EDDS dose and treatment times at each washing step. All four tests showed enhanced Cu extraction (maximum values from ≃43 to ≃51%), achieved at lower treatment time and lower EDDS:M molar ratio compared to the CSTR configuration with L/S = 25 and EDDS:M = 10. The counter-current washing was carried out with two washing flows, achieving a process performance enhancement with a 27% increase in extracted Cu compared to the single-stage CSTR configuration. A higher Cu extraction percentage was observed in the first washing phase (36.8%) than in the second (24.7%).

  1. A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.

    PubMed

    Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun

    2017-07-01

Feature extraction of EEG signals plays a significant role in brain-computer interfaces (BCIs), as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals to improve classification performance and to reduce the time complexity. This study develops a robust feature extraction method combining principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from mental states based on EEG signals in BCI applications. We apply the correlation-based variable selection method with best-first search on the extracted features to identify the best feature set for characterizing the distribution of mental state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques, multilayer perceptron neural networks (MLP), least squares support vector machine (LS-SVM), and logistic regression (LR), are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets. Furthermore, we evaluate the performance of the proposed methods by comparing them with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) for the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance for the obtained features. The average sensitivity, specificity, and classification accuracy for these two classifiers are the same: 99.32%, 100%, and 99.66%, respectively, for BCI competition dataset IVa, and 100%, 100%, and 100% for BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least a 0.25% average accuracy improvement on dataset IVa. The execution time results show that the proposed method has lower time complexity after feature selection. The proposed feature extraction method is very effective for extracting representative information from mental-state EEG signals in BCI applications and for reducing the computational complexity of classifiers by reducing the number of extracted features. Copyright © 2017 Elsevier B.V. All rights reserved.
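One plausible reading of the cross-covariance (CCOV) step is a lagged covariance vector between a trial and a reference signal; the sketch below is a hypothetical single-channel illustration (function name, lag range, and the use of the signal itself as reference are all assumptions, not the paper's exact procedure):

```python
import numpy as np

def ccov_features(trial, reference, max_lag=5):
    """Cross-covariance between an EEG trial and a reference signal
    at lags -max_lag..max_lag, used here as a feature vector."""
    a = trial - trial.mean()
    b = reference - reference.mean()
    n = len(a)
    feats = []
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            feats.append(np.dot(a[k:], b[:n - k]) / n)
        else:
            feats.append(np.dot(a[:n + k], b[-k:]) / n)
    return np.array(feats)

rng = np.random.default_rng(1)
x = rng.normal(size=200)            # stand-in single-channel EEG trial
r = ccov_features(x, x, max_lag=5)  # lag-0 entry sits at index max_lag
```

In the paper these covariance features are combined with PCA components before variable selection and classification.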

  2. Andrographis paniculata extracts and major constituent diterpenoids inhibit growth of intrahepatic cholangiocarcinoma cells by inducing cell cycle arrest and apoptosis.

    PubMed

    Suriyo, Tawit; Pholphana, Nanthanit; Rangkadilok, Nuchanart; Thiantanawat, Apinya; Watcharasit, Piyajit; Satayavivad, Jutamaad

    2014-05-01

Andrographis paniculata is an important herbal medicine widely used in several Asian countries for the treatment of various diseases due to its broad range of pharmacological activities. The present study reports that A. paniculata extracts potently inhibit the growth of liver (HepG2 and SK-Hep1) and bile duct (HuCCA-1 and RMCCA-1) cancer cells. A. paniculata extracts with different contents of major diterpenoids, including andrographolide, 14-deoxy-11,12-didehydroandrographolide, neoandrographolide, and 14-deoxyandrographolide, exhibited different potencies of growth inhibition. The ethanolic extract of A. paniculata at the first true leaf stage, which contained a high amount of 14-deoxyandrographolide but a low amount of andrographolide, showed a cytotoxic effect on cancer cells about 4 times higher than the water extract of A. paniculata at the mature leaf stage, which contained a high amount of andrographolide but a low amount of 14-deoxyandrographolide. Andrographolide, but not 14-deoxy-11,12-didehydroandrographolide, neoandrographolide, or 14-deoxyandrographolide, possessed potent cytotoxic activity against the growth of liver and bile duct cancer cells. The cytotoxic effect of the water extract of A. paniculata at the mature leaf stage could be explained by the amount of andrographolide present, while the cytotoxic effect of the ethanolic extract of A. paniculata at the first true leaf stage could not. HuCCA-1 cells were more sensitive to A. paniculata extracts and andrographolide than RMCCA-1 cells. Furthermore, the ethanolic extract of A. paniculata at the first true leaf stage increased cell cycle arrest at the G0/G1 and G2/M phases, and induced apoptosis in both HuCCA-1 and RMCCA-1 cells. The expression of cyclin-D1, Bcl-2, and the inactive proenzyme form of caspase-3 was reduced by treatment with the ethanolic extract of A. paniculata at the first true leaf stage, while the proapoptotic protein Bax was increased. Cleavage of poly (ADP-ribose) polymerase was also observed with this treatment. This study suggests that A. paniculata could be a promising herbal plant for the alternative treatment of intrahepatic cholangiocarcinoma. Georg Thieme Verlag KG Stuttgart · New York.

  3. Do bodily expressions compete with facial expressions? Time course of integration of emotional signals from the face and the body.

    PubMed

    Gu, Yuanyuan; Mai, Xiaoqin; Luo, Yue-jia

    2013-01-01

    The decoding of social signals from nonverbal cues plays a vital role in the social interactions of socially gregarious animals such as humans. Because nonverbal emotional signals from the face and body are normally seen together, it is important to investigate the mechanism underlying the integration of emotional signals from these two sources. We conducted a study in which the time course of the integration of facial and bodily expressions was examined via analysis of event-related potentials (ERPs) while the focus of attention was manipulated. Distinctive integrating features were found during multiple stages of processing. In the first stage, threatening information from the body was extracted automatically and rapidly, as evidenced by enhanced P1 amplitudes when the subjects viewed compound face-body images with fearful bodies compared with happy bodies. In the second stage, incongruency between emotional information from the face and the body was detected and captured by N2. Incongruent compound images elicited larger N2s than did congruent compound images. The focus of attention modulated the third stage of integration. When the subjects' attention was focused on the face, images with congruent emotional signals elicited larger P3s than did images with incongruent signals, suggesting more sustained attention and elaboration of congruent emotional information extracted from the face and body. On the other hand, when the subjects' attention was focused on the body, images with fearful bodies elicited larger P3s than did images with happy bodies, indicating more sustained attention and elaboration of threatening information from the body during evaluative processes.

  5. Detection of Pigment Networks in Dermoscopy Images

    NASA Astrophysics Data System (ADS)

    Eltayef, Khalid; Li, Yongmin; Liu, Xiaohui

    2017-02-01

The pigment network is one of the most important structures in dermoscopy images, and its detection is one of the most challenging and fundamental tasks for dermatologists in the early detection of melanoma. This paper presents an automatic system to detect pigment networks in dermoscopy images. The proposed algorithm consists of four stages. First, a pre-processing algorithm is carried out in order to remove noise and improve the quality of the image. Second, a bank of directional filters and morphological connected component analysis are applied to detect the pigment networks. Third, features are extracted from the detected image, to be used in the subsequent stage. Fourth, the classification process is performed by applying a feed-forward neural network, in order to classify the region as either normal or abnormal skin. The method was tested on a dataset of 200 dermoscopy images from Hospital Pedro Hispano (Matosinhos), and produced better results than previous studies.
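The directional-filter stage can be illustrated with a small oriented filter bank that responds to line-like (network) structures. The kernel shape, size, and number of orientations below are assumptions for illustration, not the paper's actual filters:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def line_kernel(theta, size=7, sigma=1.0):
    """Zero-mean kernel responding to bright lines at angle theta."""
    r = np.arange(size) - size // 2
    y, x = np.meshgrid(r, r, indexing="ij")
    d = np.abs(x * np.sin(theta) - y * np.cos(theta))  # distance to the line
    k = np.exp(-d ** 2 / (2 * sigma ** 2))
    return k - k.mean()

def directional_response(img, n_orient=4):
    """Maximum response over a small bank of oriented filters."""
    size = 7
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = sliding_window_view(padded, (size, size))
    resp = None
    for i in range(n_orient):
        k = line_kernel(np.pi * i / n_orient)
        r = np.einsum("ijkl,kl->ij", windows, k)   # correlate with kernel
        resp = r if resp is None else np.maximum(resp, r)
    return resp

img = np.zeros((21, 21))
img[10, :] = 1.0                     # a horizontal bright line
resp = directional_response(img)     # strong response along the line
```

Pixels with a high maximum response across orientations are candidate network segments, which morphological connected-component analysis would then clean up.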

  6. Hydroxamic acid content and toxicity of rye at selected growth stages.

    PubMed

    Rice, Clifford P; Park, Yong Bong; Adam, Frédérick; Abdul-Baki, Aref A; Teasdale, John R

    2005-08-01

    Rye (Secale cereale L.) is an important cover crop that provides many benefits to cropping systems including weed and pest suppression resulting from allelopathic substances. Hydroxamic acids have been identified as allelopathic compounds in rye. This research was conducted to improve the methodology for quantifying hydroxamic acids and to determine the relationship between hydroxamic acid content and phytotoxicity of extracts of rye root and shoot tissue harvested at selected growth stages. Detection limits for an LC/MS-MS method for analysis of hydroxamic acids from crude aqueous extracts were better than have been reported previously. (2R)-2-beta-D-Glucopyranosyloxy-4-hydroxy-(2H)-1,4-benzoxazin-3(4H)-one (DIBOA-G), 2,4-dihydroxy-(2H)-1,4-benzoxazin-3(4H)-one (DIBOA), benzoxazolin-2(3H)-one (BOA), and the methoxy-substituted form of these compounds, (2R)-2-beta-D-glucopyranosyloxy-4-hydroxy-7-methoxy-(2H)-1,4-benzoxazin-3(4H)-one (DIMBOA glucose), 2,4-hydroxy-7-methoxy-(2H)-1,4-benzoxazin-3(4H)-one (DIMBOA), and 6-methoxy-benzoxazolin-2(3H)-one (MBOA), were all detected in rye tissue. DIBOA and BOA were prevalent in shoot tissue, whereas the methoxy-substituted compounds, DIMBOA glucose and MBOA, were prevalent in root tissue. Total hydroxamic acid concentration in rye tissue generally declined with age. Aqueous crude extracts of rye shoot tissue were more toxic than extracts of root tissue to lettuce (Lactuca sativa L.) and tomato (Lycopersicon esculentum Mill.) root length. Extracts of rye seedlings (Feekes growth stage 2) were most phytotoxic, but there was no pattern to the phytotoxicity of extracts of rye sampled at growth stages 4 to 10.5.4, and no correlation of hydroxamic acid content and phytotoxicity (I50 values). 
Analysis of dose-response model slope coefficients indicated a lack of parallelism among models for rye extracts from different growth stages, suggesting that phytotoxicity may be attributed to compounds with different modes of action at different stages. Hydroxamic acids may account for the phytotoxicity of extracts derived from rye at early growth stages, but other compounds are probably responsible in later growth stages.

  7. User-oriented summary extraction for soccer video based on multimodal analysis

    NASA Astrophysics Data System (ADS)

    Liu, Huayong; Jiang, Shanshan; He, Tingting

    2011-11-01

An advanced user-oriented summary extraction method for soccer video is proposed in this work. The approach integrates multimodal analysis: extraction and analysis of stadium features, moving object features, audio features, and text features. From these features, the semantics of the soccer video and the highlight model are obtained. The highlight positions can then be located and assembled by highlight degree to obtain the video summary. The experimental results on sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.

  8. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    NASA Astrophysics Data System (ADS)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

Multispectral and hyperspectral data acquired from satellite sensors can detect various objects on the earth, from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the model best suited to this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction emerge as promising techniques to be analyzed further in recent developments of feature extraction and classification.

  9. Mapping surface disturbance of energy-related infrastructure in southwest Wyoming--An assessment of methods

    USGS Publications Warehouse

    Germaine, Stephen S.; O'Donnell, Michael S.; Aldridge, Cameron L.; Baer, Lori; Fancher, Tammy; McBeth, Jamie; McDougal, Robert R.; Waltermire, Robert; Bowen, Zachary H.; Diffendorfer, James; Garman, Steven; Hanson, Leanne

    2012-01-01

    We evaluated how well three leading information-extraction software programs (eCognition, Feature Analyst, Feature Extraction) and manual hand digitization interpreted information from remotely sensed imagery of a visually complex gas field in Wyoming. Specifically, we compared how each mapped the area of and classified the disturbance features present on each of three remotely sensed images, including 30-meter-resolution Landsat, 10-meter-resolution SPOT (Satellite Pour l'Observation de la Terre), and 0.6-meter resolution pan-sharpened QuickBird scenes. Feature Extraction mapped the spatial area of disturbance features most accurately on the Landsat and QuickBird imagery, while hand digitization was most accurate on the SPOT imagery. Footprint non-overlap error was smallest on the Feature Analyst map of the Landsat imagery, the hand digitization map of the SPOT imagery, and the Feature Extraction map of the QuickBird imagery. When evaluating feature classification success against a set of ground-truthed control points, Feature Analyst, Feature Extraction, and hand digitization classified features with similar success on the QuickBird and SPOT imagery, while eCognition classified features poorly relative to the other methods. All maps derived from Landsat imagery classified disturbance features poorly. Using the hand digitized QuickBird data as a reference and making pixel-by-pixel comparisons, Feature Extraction classified features best overall on the QuickBird imagery, and Feature Analyst classified features best overall on the SPOT and Landsat imagery. Based on the entire suite of tasks we evaluated, Feature Extraction performed best overall on the Landsat and QuickBird imagery, while hand digitization performed best overall on the SPOT imagery, and eCognition performed worst overall on all three images. 
Error rates for both area measurements and feature classification were prohibitively high on Landsat imagery, while QuickBird was time and cost prohibitive for mapping large spatial extents. The SPOT imagery produced map products that were far more accurate than Landsat and did so at a far lower cost than QuickBird imagery. Consideration of degree of map accuracy required, costs associated with image acquisition, software, operator and computation time, and tradeoffs in the form of spatial extent versus resolution should all be considered when evaluating which combination of imagery and information-extraction method might best serve any given land use mapping project. When resources permit, attaining imagery that supports the highest classification and measurement accuracy possible is recommended.

  10. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant. While many feature extraction techniques have been employed, it is still not clear which feature extraction method should be preferred. To help improve the situation, we present the results of a study in which we evaluate the efficiency of different wavelet-transform feature extraction methods in brain MRI abnormality detection. Using T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three classifiers, Support Vector Machine, K-Nearest Neighbor, and Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM result in the highest classification accuracy, demonstrating that wavelet transform features are informative in this application.
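A minimal example of wavelet features of this kind, using a hand-rolled one-level 2D Haar transform (the studied methods use richer transforms such as DTCWT and CMWT; names and the energy-based feature choice here are illustrative assumptions):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar DWT; returns (LL, LH, HL, HH) subbands.
    Assumes even image dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0    # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0    # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def wavelet_energy_features(img):
    """Mean subband energies: a simple wavelet feature vector of the
    kind fed to SVM / kNN classifiers."""
    return np.array([np.mean(b ** 2) for b in haar2d(img)])

img = np.full((8, 8), 3.0)          # flat image: all detail energy is zero
f = wavelet_energy_features(img)
```

A tumor region with strong texture would raise the detail-band (LH, HL, HH) energies relative to normal tissue, which is what makes such features discriminative.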

  11. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

A novel method for detecting ships, which aims to make full use of both the spatial and spectral information from hyperspectral images, is proposed. First, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared spectrum is used to segment land and sea with the Otsu threshold segmentation method. Second, multiple features, including spectral and texture features, are extracted from the hyperspectral images: principal component analysis (PCA) is used to extract spectral features, while the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing a single feature with different combinations of multiple features. Compared with the traditional single-feature method and the Support Vector Machine (SVM) model, the proposed method stably achieves target detection of ships against complex backgrounds and effectively improves the detection accuracy of ships.
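The Otsu thresholding used in the land/sea step picks the cut that maximizes between-class variance of the band's histogram. A compact sketch (the bin count and function name are assumptions):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: return the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                     # class-0 probability per cut
    mu = np.cumsum(p * centers)           # cumulative mean per cut
    mu_t = mu[-1]                         # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(nbins)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# bimodal stand-in for a near-infrared band: dark sea vs. bright land
rng = np.random.default_rng(0)
vals = np.concatenate([rng.normal(1.0, 0.5, 500), rng.normal(9.0, 0.5, 500)])
t = otsu_threshold(vals)                  # lands between the two modes
```

Pixels below the threshold would be masked as sea before feature extraction and RF classification.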

  12. iFeature: a python package and web server for features extraction and selection from protein and peptide sequences.

    PubMed

    Chen, Zhen; Zhao, Pei; Li, Fuyi; Leier, André; Marquez-Lago, Tatiana T; Wang, Yanan; Webb, Geoffrey I; Smith, A Ian; Daly, Roger J; Chou, Kuo-Chen; Song, Jiangning

    2018-03-08

    Structural and physiochemical descriptors extracted from sequence data have been widely used to represent sequences and predict structural, functional, expression and interaction profiles of proteins and peptides as well as DNAs/RNAs. Here, we present iFeature, a versatile Python-based toolkit for generating various numerical feature representation schemes for both protein and peptide sequences. iFeature is capable of calculating and extracting a comprehensive spectrum of 18 major sequence encoding schemes that encompass 53 different types of feature descriptors. It also allows users to extract specific amino acid properties from the AAindex database. Furthermore, iFeature integrates 12 different types of commonly used feature clustering, selection, and dimensionality reduction algorithms, greatly facilitating training, analysis, and benchmarking of machine-learning models. The functionality of iFeature is made freely available via an online web server and a stand-alone toolkit. http://iFeature.erc.monash.edu/; https://github.com/Superzchen/iFeature/. jiangning.song@monash.edu; kcchou@gordonlifescience.org; roger.daly@monash.edu. Supplementary data are available at Bioinformatics online.
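For flavor, the amino acid composition (AAC) descriptor, one of the simplest of the descriptor types iFeature covers, can be re-implemented in a few lines. This is an independent sketch, not iFeature's own code or API:

```python
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids, fixed order

def aac(sequence):
    """Amino Acid Composition: frequency of each residue in the sequence,
    returned as a 20-dimensional feature vector."""
    counts = Counter(sequence.upper())
    n = len(sequence)
    return [counts.get(a, 0) / n for a in AA]
```

Usage: `aac("AACD")` returns a 20-element vector whose entries sum to 1, with 0.5 in the `A` slot; richer descriptors (dipeptide composition, pseudo amino acid composition, etc.) extend the same idea.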

  13. CT texture features of liver parenchyma for predicting development of metastatic disease and overall survival in patients with colorectal cancer.

    PubMed

    Lee, Scott J; Zea, Ryan; Kim, David H; Lubner, Meghan G; Deming, Dustin A; Pickhardt, Perry J

    2018-04-01

    To determine if identifiable hepatic textural features are present at abdominal CT in patients with colorectal cancer (CRC) prior to the development of CT-detectable hepatic metastases. Four filtration-histogram texture features (standard deviation, skewness, entropy and kurtosis) were extracted from the liver parenchyma on portal venous phase CT images at staging and post-treatment surveillance. Surveillance scans corresponded to the last scan prior to the development of CT-detectable CRC liver metastases in 29 patients (median time interval, 6 months), and these were compared with interval-matched surveillance scans in 60 CRC patients who did not develop liver metastases. Predictive models of liver metastasis-free survival and overall survival were built using regularised Cox proportional hazards regression. Texture features did not significantly differ between cases and controls. For Cox models using all features as predictors, all coefficients were shrunk to zero, suggesting no association between any CT texture features and outcomes. Prognostic indices derived from entropy features at surveillance CT incorrectly classified patients into risk groups for future liver metastases (p < 0.001). On surveillance CT scans immediately prior to the development of CRC liver metastases, we found no evidence suggesting that changes in identifiable hepatic texture features were predictive of their development. • No correlation between liver texture features and metastasis-free survival was observed. • Liver texture features incorrectly classified patients into risk groups for liver metastases. • Standardised texture analysis workflows need to be developed to improve research reproducibility.
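The four filtration-histogram features named above are first-order statistics of the (filtered) intensity histogram. A sketch with assumed parameter choices (bin count, excess-kurtosis convention), not the commercial texture-analysis software used in the study:

```python
import numpy as np

def histogram_texture_features(pixels, nbins=64):
    """Histogram-based texture features (std, skewness, kurtosis, entropy)
    from a 1-D array of, e.g., band-pass-filtered CT intensities."""
    x = np.asarray(pixels, dtype=float)
    mu, sd = x.mean(), x.std()
    skew = np.mean((x - mu) ** 3) / sd ** 3
    kurt = np.mean((x - mu) ** 4) / sd ** 4 - 3.0   # excess kurtosis
    p, _ = np.histogram(x, bins=nbins)
    p = p[p > 0] / x.size                           # nonzero bin probabilities
    entropy = -np.sum(p * np.log2(p))               # Shannon entropy in bits
    return {"std": sd, "skewness": skew, "kurtosis": kurt, "entropy": entropy}

rng = np.random.default_rng(0)
f = histogram_texture_features(rng.normal(0.0, 1.0, 100_000))
```

For the Gaussian stand-in data, skewness and excess kurtosis are near zero; in filtration-histogram analysis the same statistics are computed per filter scale over the liver ROI.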

  14. Robust digital image watermarking using distortion-compensated dither modulation

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Yuan, Xiaochen

    2018-04-01

In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies filtering based on entropy calculation. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
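DC-DM builds on plain dither modulation (quantization index modulation): a bit is embedded by quantizing a host feature value onto one of two interleaved lattices. The sketch below shows the basic DM embed/decode step without the distortion-compensation factor; the quantization step `delta` is an assumed parameter:

```python
import numpy as np

def dm_embed(x, bit, delta=1.0):
    """Embed one bit into value x by quantizing onto the lattice for
    that bit (offset 0 for bit 0, delta/2 for bit 1)."""
    d = 0.0 if bit == 0 else delta / 2.0
    return delta * np.round((x - d) / delta) + d

def dm_extract(y, delta=1.0):
    """Decode by choosing the nearer of the two quantizer lattices."""
    dist0 = np.abs(y - dm_embed(y, 0, delta))
    dist1 = np.abs(y - dm_embed(y, 1, delta))
    return 0 if dist0 <= dist1 else 1

y0 = dm_embed(0.37, 0)   # noise smaller than delta/4 is tolerated
y1 = dm_embed(0.37, 1)
```

DC-DM adds back a fraction (1 - alpha) of the quantization error to trade embedding distortion against robustness; in the paper the embedding is applied to values around the extracted DAISY feature points rather than globally.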

  15. Rapid Extraction of Lexical Tone Phonology in Chinese Characters: A Visual Mismatch Negativity Study

    PubMed Central

    Wang, Xiao-Dong; Liu, A-Ping; Wu, Yin-Yuan; Wang, Peng

    2013-01-01

Background In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows the rapid and automatic activation of phonological information in visual word recognition. In the mapping from orthography to phonology, unlike most alphabetic languages in which there is a natural correspondence between the visual and phonological forms, in logographic Chinese, the mapping between visual and phonological forms is rather arbitrary and depends on learning and experience. The issue of whether phonological information is rapidly and automatically extracted from Chinese characters by the brain has not yet been thoroughly addressed. Methodology/Principal Findings We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constantly varying visual stream. In the stream, most stimuli were homophones of Chinese characters: the phonological features embedded in these visual characters were the same, including consonants, vowels and the lexical tone. Occasionally, the rule of phonology was randomly violated by characters whose phonological features differed in the lexical tone. Conclusions/Significance We showed that the violation of the lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN), indicating the rapid extraction of phonological information embedded in Chinese characters. Source analysis revealed that the vMMN involved neural activations of the visual cortex, suggesting that the visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage. PMID:23437235

  16. Overexpression of the S100A2 protein as a prognostic marker for patients with stage II and III colorectal cancer

    PubMed Central

    MASUDA, TAIKI; ISHIKAWA, TOSHIAKI; MOGUSHI, KAORU; OKAZAKI, SATOSHI; ISHIGURO, MEGUMI; IIDA, SATORU; MIZUSHIMA, HIROSHI; TANAKA, HIROSHI; UETAKE, HIROYUKI; SUGIHARA, KENICHI

    2016-01-01

    We aimed to identify a novel prognostic biomarker related to recurrence in stage II and III colorectal cancer (CRC) patients. Stage II and III CRC tissue mRNA expression was profiled using an Affymetrix Gene Chip, and copy number profiles of 125 patients were generated using an Affymetrix 250K Sty array. Genes showing both upregulated expression and copy number gains in cases involving recurrence were extracted as candidate biomarkers. The protein expression of the candidate gene was assessed using immunohistochemical staining of tissue from 161 patients. The relationship between protein expression and clinicopathological features was also examined. We identified 9 candidate genes related to recurrence of stage II and III CRC, whose mRNA expression was significantly higher in CRC than in normal tissue. Of these proteins, the S100 calcium-binding protein A2 (S100A2) has been observed in several human cancers. S100A2 protein overexpression in CRC cells was associated with significantly worse overall survival and relapse-free survival, indicating that S100A2 is an independent risk factor for stage II and III CRC recurrence. S100A2 overexpression in cancer cells could be a biomarker of poor prognosis in stage II and III CRC recurrence and a target for treatment of this disease. PMID:26783118

  17. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
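
    The two stages described above can be sketched roughly as follows: QR decomposition of the class-centroid matrix, then classical LDA in the small reduced space. This is a hypothetical minimal rendering of the LDA/QR idea, not the authors' code; the regularization term is an added assumption for numerical safety.

```python
import numpy as np

def lda_qr(X, y):
    """LDA/QR sketch. X is (n_samples, n_features); y holds integer labels.
    Stage 1: project onto the QR basis of the d x k centroid matrix.
    Stage 2: classical LDA on the k-dimensional projected data."""
    classes = np.unique(y)
    C = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)  # d x k
    Q, _ = np.linalg.qr(C)          # stage 1: orthonormal basis, d x k
    Z = X @ Q                       # reduced data, n x k
    mu = Z.mean(axis=0)
    Sb = sum((y == c).sum() * np.outer(Z[y == c].mean(0) - mu,
                                       Z[y == c].mean(0) - mu)
             for c in classes)      # between-class scatter (k x k)
    Sw = sum((Z[y == c] - Z[y == c].mean(0)).T @ (Z[y == c] - Z[y == c].mean(0))
             for c in classes)      # within-class scatter (k x k)
    # stage 2: leading eigenvectors of Sw^-1 Sb (lightly regularized)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(len(mu)), Sb))
    order = np.argsort(-evals.real)
    return Q @ evecs.real[:, order[:len(classes) - 1]]  # d x (k-1) transform
```

    The small k x k eigenproblem is what makes the scheme scalable compared with an SVD of the full data matrix.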

  18. Difet: Distributed Feature Extraction Tool for High Spatial Resolution Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Eken, S.; Aydın, E.; Sayar, A.

    2017-11-01

    In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. The robustness of the tool in the task of feature extraction from Landsat-8 imagery is evaluated in terms of horizontal scalability.

  19. Tuner of a Second Harmonic Cavity of the Fermilab Booster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terechkine, I.; Duel, K.; Madrak, R.

    2015-05-17

    Introducing a second harmonic cavity in the accelerating system of the Fermilab Booster promises significant reduction of the particle beam loss during the injection, transition, and extraction stages. To follow the changing energy of the beam during acceleration cycles, the cavity is equipped with a tuner that employs perpendicularly biased AL800 garnet material as the frequency tuning media. The required tuning range of the cavity is from 75.73 MHz at injection to 105.64 MHz at extraction. This large range necessitates the use of a relatively low bias magnetic field at injection, which could lead to high RF loss power density in the garnet, or a strong bias magnetic field at extraction, which could result in high power consumption in the tuner's bias magnet. The required 15 Hz repetition rate of the device and high sensitivity of the local RF power loss to the level of the magnetic field added to the challenges of the bias system design. In this report, the main features of a proposed prototype of the second harmonic cavity tuner are presented.

  20. Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures

    NASA Astrophysics Data System (ADS)

    Li, Quanbao; Wei, Fajie; Zhou, Shenghan

    2017-05-01

    The linear discriminant analysis (LDA) is one of the most popular methods for linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used feature extraction approaches usually require linearity, independence, or large-sample conditions. However, in real-world applications, these assumptions are not always satisfied or cannot be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA combines the advantages of parametric and nonparametric algorithms and yields higher classification accuracy. A quartic unilateral kernel function may provide better robustness of prediction than other functions. LKNDA offers an alternative solution for discriminant cases with complex nonlinear feature extraction or unknown feature structure. Finally, the application of LKNDA to the complex feature extraction of financial market activities is proposed.

  1. Computer ranking of the sequence of appearance of 73 features of the brain and related structures in staged human embryos during the sixth week of development.

    PubMed

    O'Rahilly, R; Müller, F; Hutchins, G M; Moore, G W

    1987-09-01

    The sequence of events in the development of the brain in human embryos, already published for stages 8-15, is here continued for stages 16 and 17. With the aid of a computerized bubble-sort algorithm, 71 individual embryos were ranked in ascending order of the features present. Whereas these numbered 100 in the previous study, the increasing structural complexity gave 27 new features in the two stages now under investigation. The chief characteristics of stage 16 (approximately 37 postovulatory days) are protruding basal nuclei, the caudal olfactory elevation (olfactory tubercle), the tectobulbar tracts, and ascending fibers to the cerebellum. The main features of stage 17 (approximately 41 postovulatory days) are the cortical nucleus of the amygdaloid body, an intermediate layer in the tectum mesencephali, the posterior commissure, and the habenulo-interpeduncular tract. In addition, a typical feature at stage 17 is the crescentic shape of the lens cavity.

  2. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification

    PubMed Central

    Wen, Tingxi; Zhang, Zhongnan

    2017-01-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for two-class and three-class classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789

  3. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for two-class and three-class classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.
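
    The criterion used above to compare features, the ratio of interclass distance to intraclass distance, can be illustrated with a simple one-feature sketch (the paper's exact formula may differ; this version is an assumption, with larger values indicating better class separation):

```python
def distance_ratio(values, labels):
    """Mean interclass distance (between class means) divided by mean
    intraclass distance (samples to their own class mean) for a single
    scalar feature."""
    groups = {}
    for v, c in zip(values, labels):
        groups.setdefault(c, []).append(v)
    means = {c: sum(vs) / len(vs) for c, vs in groups.items()}
    intra = sum(abs(v - means[c])
                for c, vs in groups.items() for v in vs) / len(values)
    cs = list(means)
    pairs = [(a, b) for i, a in enumerate(cs) for b in cs[i + 1:]]
    inter = sum(abs(means[a] - means[b]) for a, b in pairs) / len(pairs)
    return inter / intra
```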

  4. SU-F-R-53: CT-Based Radiomics Analysis of Non-Small Cell Lung Cancer Patients Treated with Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huynh, E; Coroller, T; Narayan, V

    Purpose: Stereotactic body radiation therapy (SBRT) is the standard of care for medically inoperable non-small cell lung cancer (NSCLC) patients and has demonstrated excellent local control and survival. However, some patients still develop distant metastases and local recurrence, and therefore, there is a clinical need to identify patients at high risk of disease recurrence. The aim of the current study is to use a radiomics approach to identify imaging biomarkers, based on tumor phenotype, for clinical outcomes in SBRT patients. Methods: Radiomic features were extracted from free-breathing computed tomography (CT) images of 113 Stage I-II NSCLC patients treated with SBRT. Their association with and prognostic performance for distant metastasis (DM), locoregional recurrence (LRR) and survival was assessed and compared with conventional features (tumor volume and diameter) and clinical parameters (e.g. performance status, overall stage). The prognostic performance was evaluated using the concordance index (CI). Multivariate model performance was evaluated using cross validation. All p-values were corrected for multiple testing using the false discovery rate. Results: Radiomic features were associated with DM (one feature), LRR (one feature) and survival (four features). Conventional features were only associated with survival, and one clinical parameter was associated with LRR and survival. One radiomic feature was significantly prognostic for DM (CI=0.670, p<0.1 from random), while none of the conventional and clinical parameters were significant for DM. The multivariate radiomic model had a higher median CI (0.671) for DM than the conventional (0.618) and clinical models (0.617). Conclusion: Radiomic features have potential to be imaging biomarkers for clinical outcomes that conventional imaging metrics and clinical parameters cannot predict in SBRT patients, such as distant metastasis. Development of a radiomics biomarker that can identify patients at high risk of recurrence could facilitate personalization of their treatment regimen for an optimized clinical outcome. R.M. had consulting interest with Amgen (ended in 2015).

  5. Acoustic-Seismic Mixed Feature Extraction Based on Wavelet Transform for Vehicle Classification in Wireless Sensor Networks.

    PubMed

    Zhang, Heng; Pan, Zhongming; Zhang, Wenna

    2018-06-07

    An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer coefficient was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either acoustic signals or seismic signals alone.
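
    The wavelet coefficient energy ratio idea can be sketched as follows. This is a simplified à trous (stationary wavelet) variant with an assumed low-pass kernel and circular boundary handling, not the authors' implementation:

```python
def wcer(signal, levels=3, h=(0.25, 0.5, 0.25)):
    """Wavelet coefficient energy ratio sketch using the à trous scheme:
    at level j the low-pass kernel h is applied with holes (taps spaced
    2**j apart), the detail coefficients are the difference between
    successive approximations, and the result is the fraction of total
    detail energy carried by each layer."""
    approx = list(signal)
    energies = []
    for j in range(levels):
        step = 2 ** j
        n = len(approx)
        smooth = [sum(h[k] * approx[(i + (k - 1) * step) % n]
                      for k in range(3)) for i in range(n)]
        detail = [a - s for a, s in zip(approx, smooth)]
        energies.append(sum(d * d for d in detail))
        approx = smooth
    total = sum(energies) or 1.0
    return [e / total for e in energies]
```

    Computing this vector for the acoustic and the seismic channel and concatenating the two would give the mixed feature the abstract describes.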

  6. Extraction of ECG signal with adaptive filter for heart abnormalities detection

    NASA Astrophysics Data System (ADS)

    Turnip, Mardi; Saragih, Rijois. I. E.; Dharma, Abdi; Esti Kusumandari, Dwi; Turnip, Arjon; Sitanggang, Delima; Aisyah, Siti

    2018-04-01

    This paper demonstrates an adaptive filter method for extraction of electrocardiogram (ECG) features in heart abnormality detection. In particular, the electrocardiogram (ECG) is a recording of the heart's electrical activity, capturing a tracing of the cardiac electrical impulse as it moves from the atrium to the ventricles. The applied algorithm evaluates and analyzes ECG signals for abnormality detection based on the P, Q, R and S peaks. In the first phase, the real-time ECG data is acquired and pre-processed. In the second phase, the procured ECG signal is subjected to a feature extraction process. The extracted features detect abnormal peaks present in the waveform. Thus normal and abnormal ECG signals can be differentiated based on the features extracted.

  7. Recursive Feature Extraction in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a CSV file and the output is a CSV file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
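
    The recursive construction can be sketched as follows: starting from a local base feature (degree), each round appends the sums and means of neighbors' current features. This is an illustrative simplification of ReFeX, which additionally prunes correlated features; the adjacency-dict representation and function name are assumptions.

```python
def refex_features(adj, rounds=2):
    """ReFeX-style sketch. adj maps node -> list of neighbors; returns
    node -> feature vector. Each round concatenates, per node, the sum
    and mean of every current feature over its neighbors."""
    feats = {v: [float(len(ns))] for v, ns in adj.items()}  # base: degree
    for _ in range(rounds):
        cols = len(next(iter(feats.values())))
        new = {}
        for v, ns in adj.items():
            sums = [sum(feats[u][i] for u in ns) for i in range(cols)]
            means = [s / len(ns) if ns else 0.0 for s in sums]
            new[v] = feats[v] + sums + means
        feats = new
    return feats
```

    Feature vectors grow geometrically with the number of rounds (1, 3, 9, ... entries here), which is why the real tool prunes near-duplicate columns between rounds.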

  8. Robust image features: concentric contrasting circles and their image extraction

    NASA Astrophysics Data System (ADS)

    Gatrell, Lance B.; Hoff, William A.; Sklair, Cheryl W.

    1992-03-01

    Many computer vision tasks can be simplified if special image features are placed on the objects to be recognized. A review of special image features that have been used in the past is given, and then a new image feature, the concentric contrasting circle, is presented. The concentric contrasting circle has the advantages of being easily manufactured, easily and robustly extracted from the image (true targets are found while few false targets are reported), and passive; its centroid is completely invariant to the three translational and one rotational degrees of freedom and nearly invariant to the remaining two rotational degrees of freedom. There are several examples of existing parallel implementations which perform most of the extraction work. Extraction robustness was measured by recording the probability of correct detection and the false alarm rate in a set of images of scenes containing mockups of satellites, fluid couplings, and electrical components. A typical application of concentric contrasting circle features is to place them on modeled objects for monocular pose estimation or object identification. This feature is demonstrated on a visually challenging background of a specular but wrinkled surface similar to a multilayered insulation spacecraft thermal blanket.

  9. Preoperative regional cerebral blood flow and postoperative clinical improvement in patients with Parkinson's disease undergoing subthalamic nucleus deep brain stimulation.

    PubMed

    Nagai, Toshiya; Kajita, Yasukazu; Maesawa, Satoshi; Nakatsubo, Daisuke; Yoshida, Kota; Kato, Katsuhiko; Wakabayashi, Toshihiko

    2012-01-01

    Preoperative regional cerebral blood flow (rCBF) was measured in 92 patients with Parkinson's disease (PD) by iodine-123 N-isopropyl-p-iodoamphetamine single-photon emission computed tomography. Quantitative mapping of rCBF was performed using the stereotactic extraction estimation method. The clinical features of the patients were assessed according to the Unified Parkinson Disease Rating Scale (UPDRS). The correlation between rCBF and improvement in the UPDRS score following surgery was examined. rCBF in the fusiform gyrus, superior and inferior parietal gyri, middle occipital gyrus, superior frontal gyrus, and middle temporal gyrus of the Talairach Daemon Level 3 was significantly correlated with UPDRS part II (off stage) and III (on stage) scores (p < 0.05). rCBF in the middle temporal gyrus (p = 0.00147), medial frontal gyrus (p = 0.00713), and cerebellum (p = 0.048) of the Talairach Daemon Level 3 was significantly greater in 47 patients with >60% improvement of UPDRS part III (off stage) score than in 37 patients with 40-60% improvement. The cutoff value of rCBF, which indicated that >40% improvement in the surgical outcome could be expected, was 38.8 ± 6.2 ml/100 g/min in the frontal lobe. This study indicated that rCBF in patients with PD might be related to their clinical features, suggesting that quantitative mapping of rCBF may be useful for predicting surgical outcome.

  10. Analysis of framelets for breast cancer diagnosis.

    PubMed

    Thivya, K S; Sakthivel, P; Venkata Sai, P M

    2016-01-01

    Breast cancer is the second most threatening tumor among women. An effective way of reducing breast cancer mortality is early detection, which improves the diagnostic process. Digital mammography plays a significant role in mammogram screening at an early stage of breast carcinoma. However, it is very difficult for radiologists to identify abnormalities accurately in routine screening; precise breast cancer screening is aided by predicting the type of abnormality through Computer Aided Diagnosis (CAD) systems. The two most important indicators of breast malignancy are microcalcifications and masses. In this study, the framelet transform, a multiresolution analysis, is investigated for the classification of these two indicators. Statistical and co-occurrence features are extracted from the framelet-decomposed mammograms at different resolution levels, and a support vector machine is employed for classification with k-fold cross validation. This system achieves 94.82% and 100% accuracy in normal/abnormal classification (stage I) and benign/malignant classification (stage II) for the mass classification system, and 98.57% and 100% for the microcalcification system, when using the MIAS database.
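
    The co-occurrence features mentioned above are typically derived from a gray-level co-occurrence matrix (GLCM). A minimal sketch follows, with an assumed quantization level count and a single pixel offset; the study's exact feature set is not specified here.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Build a gray-level co-occurrence matrix for one (dx, dy) offset,
    normalize it, and derive three classic texture features."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1
    p = glcm / glcm.sum()
    ii, jj = np.indices((levels, levels))
    return {"contrast": float((p * (ii - jj) ** 2).sum()),
            "energy": float((p ** 2).sum()),
            "homogeneity": float((p / (1.0 + np.abs(ii - jj))).sum())}
```

    In the paper these statistics would be computed on each framelet subband rather than on the raw mammogram, giving one feature group per resolution level.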

  11. A method for real-time implementation of HOG feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, HOG feature extraction is unsuitable for direct hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on FPGA are proposed. The main principle is as follows: firstly, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Secondly, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be completed in one pixel period by these computing units.
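
    For reference, the software form of the HOG cell-histogram computation that such hardware accelerates can be sketched as follows. This is a simplified version (unsigned gradients, hard bin assignment, no block normalization); the function name and parameters are illustrative assumptions.

```python
import numpy as np

def hog_cell_histograms(img, cell=8, bins=9):
    """Per-pixel gradients via central differences, then an unsigned
    (0-180 degree) orientation histogram for each cell x cell block,
    with each pixel's vote weighted by its gradient magnitude."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    hists = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = np.minimum((a / (180.0 / bins)).astype(int), bins - 1)
            np.add.at(hists[i, j], idx, m)
    return hists
```

    The magnitude (`np.hypot`) and angle (`np.arctan2`) calls are exactly the square-root and arctangent operations the paper simplifies for FPGA implementation.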

  12. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.

    PubMed

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-09-13

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.

  13. Using input feature information to improve ultraviolet retrieval in neural networks

    NASA Astrophysics Data System (ADS)

    Sun, Zhibin; Chang, Ni-Bin; Gao, Wei; Chen, Maosi; Zempila, Melina

    2017-09-01

    In neural networks, the training/predicting accuracy and algorithm efficiency can be improved significantly via accurate input feature extraction. In this study, spatial features of several important factors in retrieving surface ultraviolet (UV) radiation are extracted. An extreme learning machine (ELM) is used to retrieve the surface UV for 2014 over the continental United States, using the extracted features. The results show that more input weights can improve the learning capacity of neural networks.
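
    An extreme learning machine like the one used above can be sketched in a few lines: random, untrained input weights plus a closed-form least-squares solve for the output weights. This is a generic ELM sketch, not the study's retrieval model; the hidden-layer size and sigmoid activation are assumptions.

```python
import numpy as np

def elm_train(X, T, hidden=50, seed=0):
    """X is (n, d) inputs; T is (n, k) targets. Input weights W and
    biases b are random and fixed; only the output weights beta are
    fit, via the pseudoinverse of the hidden activations."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ T             # closed-form least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta).argmax(axis=1)
```

    Because training reduces to one pseudoinverse, ELMs fit much faster than backpropagation-trained networks of the same size, which is their main appeal in retrieval tasks like this one.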

  14. Wire bonding quality monitoring via refining process of electrical signal from ultrasonic generator

    NASA Astrophysics Data System (ADS)

    Feng, Wuwei; Meng, Qingfeng; Xie, Youbo; Fan, Hong

    2011-04-01

    In this paper, a technique for on-line quality detection of ultrasonic wire bonding is developed. The electrical signals from the ultrasonic generator supply, namely voltage and current, are picked up by a measuring circuit and transformed into digital signals by a data acquisition system. A new feature extraction method is presented to characterize the transient property of the electrical signals and further evaluate the bond quality. The method includes three steps. First, the captured voltage and current are filtered by digital bandpass filter banks to obtain the corresponding subband signals, such as the fundamental signal, second harmonic, and third harmonic. Second, each subband envelope is obtained using the Hilbert transform for further feature extraction. Third, the subband envelopes are, respectively, separated into three phases, namely envelope rising, stable, and damping phases, to capture the tiny waveform changes. Different waveform features are extracted from each phase of these subband envelopes. The principal components analysis (PCA) method is used for feature selection in order to remove redundant information and reduce the dimension of the original feature variables. Using the selected features as inputs, an artificial neural network (ANN) is constructed to identify complex bond fault patterns. By analyzing experimental data with the proposed feature extraction method and neural network, the results demonstrate the advantages of the proposed feature extraction method and the constructed artificial neural network in detecting and identifying bond quality.
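
    The second step above, subband envelope extraction via the Hilbert transform, can be sketched with an FFT-based analytic signal. This is the standard construction; the function name is illustrative.

```python
import numpy as np

def hilbert_envelope(x):
    """Envelope via the analytic signal: zero the negative frequencies
    of the FFT, double the positive ones (keeping DC and Nyquist), and
    take the magnitude of the inverse transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))
```

    Applied to each bandpass-filtered subband, the envelope exposes the rising, stable, and damping phases from which the waveform features are then taken.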

  15. The heterotoxicity of Hordeum vulgare L. extracts in four growth stages on germination and seedlings growth of Avena ludoviciana.

    PubMed

    Kolahi, M; Peivastegan, B; Hadizade, I; Abdali, A

    2008-07-15

    The phytotoxicity of barley (Hordeum vulgare L.) extracts on wild oat (Avena ludoviciana Durieu) was investigated. Water extracts of five barley varieties were bioassayed on germination and seedling growth of wild oat to test the heterotoxicity of barley, to study the dynamics of allelopathic potential over four growth stages, and to identify the most allelopathic plant part of barley at each stage. Whole barley plants were extracted at growth stage 4 (stems not yet developed), whilst for the following growth stages roots, stems, panicles and leaves were extracted separately. Seedling growth bioassays demonstrated that wild oat responded differently to the allelopathic potential of barley: radicle growth and coleoptile growth were more depressed than germination. The allelopathic potential of barley plant parts was not stable over the plant's life cycle. Leaves and stems were the most phytotoxic barley plant parts for wild oat in all stages. Among the varieties, Eizeh showed the strongest toxicity to seed germination of wild oat at stages 4 and 8. The results suggested that the response of wild oat varied depending on the source of allelochemicals (plant part), the growth stage of the barley plant, and the variety. It was concluded that the Eizeh variety of barley is a good choice to grow, as it strongly suppressed seed germination of wild oat and also retarded the root and shoot growth of oat.

  16. Classifying depression patients and normal subjects using machine learning techniques and nonlinear features from EEG signal.

    PubMed

    Hosseinifard, Behshad; Moradi, Mohammad Hassan; Rostami, Reza

    2013-03-01

    Diagnosing depression in the early, curable stages is very important and may even save the life of a patient. In this paper, we study nonlinear analysis of the EEG signal for discriminating depression patients and normal controls. Forty-five unmedicated depressed patients and 45 normal subjects participated in this study. The power of four EEG bands and four nonlinear features, including detrended fluctuation analysis (DFA), Higuchi fractal dimension, correlation dimension and Lyapunov exponent, were extracted from the EEG signal. For discriminating the two groups, k-nearest neighbor, linear discriminant analysis and logistic regression were then used as classifiers. Among the individual nonlinear features, the highest classification accuracy of 83.3% was obtained with the correlation dimension and the LR classifier. For further improvement, all nonlinear features were combined and applied to the classifiers. A classification accuracy of 90% was achieved with all nonlinear features and the LR classifier. In all experiments, a genetic algorithm was employed to select the most important features. The proposed technique is compared and contrasted with other reported methods, and it is demonstrated that combining nonlinear features enhances performance. This study shows that nonlinear analysis of EEG can be a useful method for discriminating depressed patients and normal subjects. It is suggested that this analysis may be a complementary tool to help psychiatrists diagnose depressed patients. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
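
    One of the nonlinear features above, detrended fluctuation analysis, can be sketched as follows: integrate the centered signal, remove a linear trend in windows at several scales, and fit the log-log slope of fluctuation versus scale. The window scales are assumed values; white noise should give an exponent near 0.5.

```python
import numpy as np

def dfa_exponent(x, scales=(4, 8, 16, 32)):
    """DFA sketch: cumulative sum of the mean-removed signal, per-window
    linear detrending at each scale, RMS fluctuation F(s), then the
    scaling exponent as the slope of log F(s) against log s."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))
    flucts = []
    for s in scales:
        residuals = []
        t = np.arange(s)
        for w in range(len(y) // s):
            seg = y[w * s:(w + 1) * s]
            coef = np.polyfit(t, seg, 1)              # linear trend
            residuals.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(residuals)))    # F(s)
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope
```

    An exponent well above 0.5 indicates long-range correlation in the EEG, which is what makes DFA informative for discriminating the two groups.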

  17. Bladder cancer staging in CT urography: effect of stage labels on statistical modeling of a decision support system

    NASA Astrophysics Data System (ADS)

    Gandikota, Dhanuj; Hadjiiski, Lubomir; Cha, Kenny H.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.; Weizer, Alon; Alva, Ajjai; Paramagul, Chintana; Wei, Jun; Zhou, Chuan

    2018-02-01

    In bladder cancer, stage T2 is an important threshold in the decision of administering neoadjuvant chemotherapy. Our long-term goal is to develop a quantitative computerized decision support system (CDSS-S) to aid clinicians in accurate staging. In this study, we examined the effect of stage labels of the training samples on modeling such a system. We used a data set of 84 bladder cancers imaged with CT Urography (CTU). At clinical staging prior to treatment, 43 lesions were staged as below stage T2 and 41 were stage T2 or above. After cystectomy and pathological staging that is considered the gold standard, 10 of the lesions were upstaged to stage T2 or above. After correcting the stage labels, 33 lesions were below stage T2, and 51 were stage T2 or above. For the CDSS-S, the lesions were segmented using our AI-CALS method and radiomic features were extracted. We trained a linear discriminant analysis (LDA) classifier with leave-one-case-out cross validation to distinguish between bladder lesions of stage T2 or above and those below stage T2. The CDSS-S was trained and tested with the corrected post-cystectomy labels, and as a comparison, CDSS-S was also trained with understaged pre-treatment labels and tested on lesions with corrected labels. The test AUC for the CDSS-S trained with corrected labels was 0.89 +/- 0.04. For the CDSS-S trained with understaged pre-treatment labels and tested on the lesions with corrected labels, the test AUC was 0.86 +/- 0.04. The likelihood of stage T2 or above for 9 out of the 10 understaged lesions was correctly increased for the CDSS-S trained with corrected labels. The CDSS-S is sensitive to the accuracy of stage labeling. The CDSS-S trained with correct labels shows promise in prediction of the bladder cancer stage.

  18. A Hybrid Neural Network and Feature Extraction Technique for Target Recognition.

    DTIC Science & Technology

    target features are extracted, the extracted data being evaluated in an artificial neural network to identify a target at a location within the image scene from which the different viewing angles extend.

  19. A scene-analysis approach to remote sensing. [San Francisco, California

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M. (Principal Investigator); Fischler, M. A.; Wolf, H. C.

    1978-01-01

    The author has identified the following significant results. Geometric correspondence between a sensed image and a symbolic map is established in an initial stage of processing by adjusting the parameters of a sensor model so that the image features predicted from the map optimally match corresponding features extracted from the sensed image. Information in the map is then used to constrain where to look in an image, what to look for, and how to interpret what is seen. For simple monitoring tasks involving multispectral classification, these constraints significantly reduce computation, simplify interpretation, and improve the utility of the resulting information. Previously intractable tasks requiring spatial and textural analysis may become straightforward in the context established by the map knowledge. The use of map-guided image analysis is demonstrated in monitoring the volume of water in a reservoir, the number of boxcars in a railyard, and the number of ships in a harbor.

  20. Single-channel EEG-based mental fatigue detection based on deep belief network.

    PubMed

    Pinyi Li; Wenhui Jiang; Fei Su

    2016-08-01

    Mental fatigue has a pernicious influence on road and workplace safety and is a negative symptom of many acute and chronic illnesses, since the ability to concentrate, respond and judge quickly decreases during the fatigue or drowsiness stage. Electroencephalography (EEG) has been proven to be a robust physiological indicator of human cognitive state over the last few decades, but most existing EEG-based fatigue detection methods suffer from poor accuracy. This paper proposes a single-channel EEG-based mental fatigue detection method built on a Deep Belief Network (DBN). Fused nonlinear features from specified sub-bands and dynamic analysis, 21 features in total, are extracted as the input of the DBN to discriminate three classes of mental state: alert, slight fatigue and severe fatigue. Experimental results show the good performance of the proposed model compared with state-of-the-art methods.
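
    Single-channel feature extraction of the kind fed to such a classifier can be sketched as below: band powers of the standard EEG sub-bands plus one simple nonlinear feature (spectral entropy). The signal and sampling rate are synthetic assumptions; the paper's full 21-feature set and the DBN itself are not reproduced here.

```python
# Sub-band power and spectral-entropy features from one synthetic EEG channel.
import numpy as np
from scipy.signal import welch

fs = 128                                      # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # alpha-dominant

f, psd = welch(eeg, fs=fs, nperseg=256)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
powers = {name: psd[(f >= lo) & (f < hi)].sum() for name, (lo, hi) in bands.items()}

p = psd / psd.sum()                           # normalized PSD as a distribution
spec_entropy = -np.sum(p * np.log2(p + 1e-12))  # spectral entropy feature

print(max(powers, key=powers.get))            # dominant band of the test signal
```

    In a real pipeline, these per-epoch features would be stacked into a vector and passed to the classifier.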

  1. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish abnormal events from normal patterns. The experimental results demonstrate that the proposed method's performance is comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  2. A harmonic linear dynamical system for prominent ECG feature extraction.

    PubMed

    Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted by preprocessing analysis on multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology improve the accuracy and reliability of the clustering results. In particular, the empirical evaluation demonstrates improved clustering performance compared with the previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.

  3. Extraction of Rice Heavy Metal Stress Signal Features Based on Long Time Series Leaf Area Index Data Using Ensemble Empirical Mode Decomposition

    PubMed Central

    Liu, Xiangnan; Zhang, Biyao; Liu, Ming; Wu, Ling

    2017-01-01

    The use of remote sensing technology to diagnose heavy metal stress in crops is of great significance for environmental protection and food security. In the natural farmland ecosystem, however, various stressors can have a similar influence on crop growth, making heavy metal stress difficult to identify accurately; it therefore remains an unresolved scientific problem and a hot topic in the field of agricultural remote sensing. This study proposes a method that uses Ensemble Empirical Mode Decomposition (EEMD) to obtain heavy metal stress signal features on a long time scale. The method operates on the Leaf Area Index (LAI) simulated by the Enhanced World Food Studies (WOFOST) model, assimilated with remotely sensed data. The following results were obtained: (i) EEMD was effective in extracting heavy metal stress signals by eliminating the intra-annual and annual components; (ii) LAIdf (the first derivative of the sum of the interannual component and the residual) best reflects the stable feature responses to rice heavy metal stress. LAIdf was stable, with an R2 greater than 0.9 in three growing stages, and its stability was optimal in June. This study combines the spectral characteristics of the stress effect with temporal characteristics, and confirms the potential of long-term remotely sensed data for improving the accuracy of crop heavy metal stress identification. PMID:28878147
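
    The LAIdf idea, isolating the slow (interannual) component of an LAI series and taking its first derivative, can be illustrated as follows. A true EEMD would normally come from a dedicated package (e.g. PyEMD's EEMD class); here a one-year moving average stands in for the removal of the intra-annual and annual components, purely as a dependency-free sketch, and the simulated LAI series, its noise level, and the -0.1/year stress trend are all assumptions.

```python
# Sketch of LAIdf: low-pass a simulated LAI series, then differentiate.
# The moving average is a stand-in for summing EEMD's slow IMFs + residual.
import numpy as np

t = np.arange(0, 8, 1 / 36.5)                     # ~8 years, ~10-day steps
annual = 0.6 * np.sin(2 * np.pi * t)              # intra-annual cycle
trend = 3.0 - 0.1 * t                             # slow stress-induced decline
lai = trend + annual + np.random.default_rng(1).normal(0, 0.02, t.size)

window = 37                                       # ~1 year of samples
kernel = np.ones(window) / window
slow = np.convolve(lai, kernel, mode="same")      # interannual comp. + residual
laidf = np.gradient(slow, t)                      # LAIdf: first derivative

# in the interior, the derivative of the slow component tracks the trend slope
print(f"median LAIdf: {np.median(laidf[window:-window]):.3f}")
```

    The annual oscillation is almost exactly cancelled because the averaging window matches its period, so what remains of the derivative is the stress-related decline.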

  4. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    NASA Astrophysics Data System (ADS)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods are being rapidly incorporated in image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of the clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time point (yielding an AUC of 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
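
    The architecture described above, an LSTM consuming a sequence of per-time-point CNN feature vectors and emitting a malignancy score, can be sketched in PyTorch as below. The VGGNet feature extraction is replaced by random vectors, and the feature dimension, hidden size, and sequence length are illustrative assumptions, not the paper's settings.

```python
# Minimal LSTM-over-CNN-features sketch for temporal lesion classification.
import torch
import torch.nn as nn

class LesionLSTM(nn.Module):
    def __init__(self, feat_dim=512, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time_points, feat_dim)
        _, (h_n, _) = self.lstm(x)         # final hidden state summarizes
        return torch.sigmoid(self.head(h_n[-1]))  # the dynamic sequence

model = LesionLSTM()
dce_features = torch.randn(4, 5, 512)      # 4 lesions, 5 DCE time points
scores = model(dce_features)               # one malignancy score per lesion
print(scores.shape)
```

    The key design point is that the LSTM's final hidden state aggregates the contrast-enhancement dynamics across time points, which a single-time-point classifier cannot see.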

  5. Diagnosis of Tempromandibular Disorders Using Local Binary Patterns

    PubMed Central

    Haghnegahdar, A.A.; Kolahi, S.; Khojastepour, L.; Tajeripour, F.

    2018-01-01

    Background: Temporomandibular joint disorder (TMD) might be manifested as structural changes in bone through modification, adaptation or direct destruction. We propose to use Local Binary Pattern (LBP) characteristics and histograms of oriented gradients on the recorded images as a diagnostic tool in TMD assessment. Material and Methods: CBCT images of 66 patients (132 joints) with TMD and 66 normal cases (132 joints) were collected, and 2 coronal cuts were prepared from each condyle; images were limited to the head of the mandibular condyle. To extract features from the images, we first applied LBP and then the histogram of oriented gradients. To reduce dimensionality, Singular Value Decomposition (SVD) was applied to the feature-vector matrix of all images. For evaluation, we used K nearest neighbor (K-NN), Support Vector Machine, Naïve Bayesian and Random Forest classifiers, with Receiver Operating Characteristic (ROC) analysis to evaluate the hypothesis. Results: The K nearest neighbor classifier achieved very good accuracy (0.9242) along with desirable sensitivity (0.9470) and specificity (0.9015), whereas the other classifiers yielded lower accuracy, sensitivity and specificity. Conclusion: We proposed a fully automatic approach to detect TMD using image processing techniques based on local binary patterns and feature extraction. K-NN was the best classifier in our experiments for distinguishing patients from healthy individuals, with 92.42% accuracy, 94.70% sensitivity and 90.15% specificity. The proposed method can help automatically diagnose TMD at its initial stages. PMID:29732343
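
    The pipeline described, LBP, then HOG, then SVD for dimensionality reduction, then a k-NN classifier, can be sketched as follows. The images are random stand-ins for the condylar CBCT cuts, and the class difference (an added stripe texture), image size, and the 5-dimensional SVD projection are all assumptions for illustration.

```python
# LBP -> HOG -> SVD -> k-NN sketch on synthetic "condyle" images.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))          # 20 fake condyle images
labels = np.repeat([0, 1], 10)             # normal vs. TMD
yy, xx = np.mgrid[0:64, 0:64]
images[labels == 1] += 0.5 * np.sin(xx / 3.0)   # crude textural difference

feats = []
for img in images:
    img8 = (np.clip(img, 0, 1) * 255).astype(np.uint8)
    lbp = local_binary_pattern(img8, P=8, R=1, method="uniform")
    feats.append(hog(lbp, orientations=9, pixels_per_cell=(16, 16),
                     cells_per_block=(2, 2)))
X = np.array(feats)

# SVD: project the centered feature matrix onto its leading singular vectors
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
X_red = U[:, :5] * s[:5]                   # 5-dimensional representation

knn = KNeighborsClassifier(n_neighbors=3).fit(X_red, labels)
acc = knn.score(X_red, labels)             # training accuracy of the sketch
print(acc)
```

    In the study itself, evaluation would of course use held-out data rather than the training-set score printed here.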

  6. Learning machines and sleeping brains: Automatic sleep stage classification using decision-tree multi-class support vector machines.

    PubMed

    Lajnef, Tarek; Chaibi, Sahbi; Ruby, Perrine; Aguera, Pierre-Emmanuel; Eichenlaub, Jean-Baptiste; Samet, Mounir; Kachouri, Abdennaceur; Jerbi, Karim

    2015-07-30

    Sleep staging is a critical step in a range of electrophysiological signal processing pipelines used in clinical routine as well as in sleep research. Although the results currently achievable with automatic sleep staging methods are promising, there is a need for improvement, especially given the time-consuming and tedious nature of visual sleep scoring. Here we propose a sleep staging framework that consists of a multi-class support vector machine (SVM) classification based on a decision-tree approach. The performance of the method was evaluated using polysomnographic data from 15 subjects (electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) recordings). The decision tree, or dendrogram, was obtained using a hierarchical clustering technique, and a wide range of time- and frequency-domain features were extracted. Feature selection was carried out using forward sequential selection, and classification was evaluated using k-fold cross-validation. The dendrogram-based SVM (DSVM) achieved mean specificity, sensitivity and overall accuracy of 0.92, 0.74 and 0.88, respectively, compared to expert visual scoring. Restricting DSVM classification to data where both experts' scoring was consistent (76.73% of the data) led to a mean specificity, sensitivity and overall accuracy of 0.94, 0.82 and 0.92, respectively. The DSVM framework outperforms classification with the more standard multi-class "one-against-all" SVM and linear-discriminant analysis. The promising results of the proposed methodology suggest that it may be a valuable alternative to existing automatic methods and that it could accelerate visual scoring by providing a robust starting hypnogram that can be further fine-tuned by expert inspection. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.

    PubMed

    Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed

    2018-05-15

    Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks robustness in extracting palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the region of interest. Next, palmprint features are extracted by the hybrid HOG-SGF method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. The results reveal that the proposed approach outperforms existing state-of-the-art approaches even when a small number of training samples are used.

  8. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion was a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address this problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method based on this novel discriminant criterion is a new subspace-based method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database FERET show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
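
    The scatter-difference idea behind MSD/MMSD can be shown numerically: the discriminant directions maximize tr(Wᵀ(Sb − c·Sw)W), so they are the leading eigenvectors of Sb − c·Sw, and no inversion of Sw is needed, which is what avoids the small-sample-size singularity. The three-class toy data and the balance parameter c = 1 below are assumptions for illustration.

```python
# Discriminant vectors as top eigenvectors of the scatter difference Sb - c*Sw.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([[0.0, 0, 0], [3, 0, 0], [0, 3, 0]])
X = np.vstack([m + rng.normal(0, 0.5, (20, 3)) for m in means])
y = np.repeat(np.arange(3), 20)

mu = X.mean(axis=0)
Sb = sum(20 * np.outer(X[y == c].mean(0) - mu, X[y == c].mean(0) - mu)
         for c in range(3))                      # between-class scatter
Sw = sum((X[y == c] - X[y == c].mean(0)).T @ (X[y == c] - X[y == c].mean(0))
         for c in range(3))                      # within-class scatter

c_bal = 1.0
evals, evecs = np.linalg.eigh(Sb - c_bal * Sw)   # symmetric: eigh suffices
W = evecs[:, np.argsort(evals)[::-1][:2]]        # top-2 discriminant vectors
print(W.shape)
```

    Because the class means lie in the x-y plane, the two leading eigenvectors end up spanning (approximately) that plane, i.e. the directions along which the classes actually differ.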

  9. Automated recognition system for power quality disturbances

    NASA Astrophysics Data System (ADS)

    Abdelgalil, Tarek

    The application of deregulation policies in electric power systems has resulted in the necessity to quantify the quality of electric power. This fact highlights the need for a new monitoring strategy capable of tracking, detecting, and classifying power quality disturbances, and then identifying the source of the disturbance. The objective of this work is to design an efficient and reliable power quality monitoring strategy that uses advances in signal processing and pattern recognition to overcome the deficiencies of existing power quality monitoring devices. The proposed monitoring strategy has two stages. The first stage is to detect, track, and classify any power quality violation by the use of on-line measurements; in the second stage, the source of the classified power quality disturbance must be identified. In the first stage, an adaptive linear combiner is used to detect power quality disturbances. Then, the Teager Energy Operator and Hilbert Transform are utilized for power quality event tracking. After the Fourier, Wavelet, and Walsh Transforms are employed for feature extraction, two approaches are exploited to classify the different power quality disturbances. The first approach compares the disturbance to be classified with a stored set of signatures for different power quality disturbances, using Hidden Markov Models and Dynamic Time Warping. The second approach employs inductive inference to generate the classification rules directly from the data. In the second stage of the new monitoring strategy, only the problem of identifying the location of the switched capacitor that initiates the transients is investigated. The Total Least Square-Estimation of Signal Parameters via Rotational Invariance Technique is adopted to estimate the amplitudes and frequencies of the various modes contained in the voltage signal measured at the facility entrance.
After extracting the amplitudes and frequencies, an Artificial Neural Network is employed to identify the switched capacitor by using the amplitudes and frequencies extracted from the transient signal. The new algorithms for detecting, tracking, and classifying power quality disturbances demonstrate the potential for further development of a fully automated recognition system for the assessment of power quality. This is possible because implementing the proposed algorithms in a power quality monitoring device becomes a straightforward process by modifying the device software.

  10. Variation in chemical composition and allelopathic potential of mixoploid Trigonella foenum-graecum L. with developmental stages.

    PubMed

    Omezzine, Faten; Bouaziz, Mohamed; Simmonds, Monique S J; Haouala, Rabiaa

    2014-04-01

    This study was conducted to evaluate the influence of the developmental stage (vegetative, flowering and fruiting) of mixoploid fenugreek aerial parts on their chemical composition and allelopathic potential, assessed on lettuce germination and seedling growth. Aqueous and organic extracts significantly delayed germination, reduced its rate and affected seedling growth. Ethyl acetate and methanol extracts of aerial parts harvested at the vegetative stage were the most toxic for lettuce germination and seedling growth, respectively. LC-MS/MS analysis of the methanolic extract of T. foenum-graecum aerial parts showed nine different flavonol glycosides (quercetin and kaempferol glucosides). The chemical composition of aerial parts differed with the developmental stage; at the vegetative and fruiting stages, analysis revealed the presence of 9 compounds, as compared to only 6 compounds at the flowering stage. Thus, it is necessary to follow the qualitative changes in allelochemical production at different developmental stages to identify the most productive one. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase in resolution, remote sensing images carry a greater information load, more noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model to capture and enhance the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to the individual building is realized. Experiments show that this method can restrain the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it achieves better precision, accuracy and completeness in building extraction.

  12. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separate processing of 'what' (object features) and 'where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using 'where' information; (3) representation of 'what' information in an object-based frame of reference (OFR). However, most recent models of vision based on the OFR have demonstrated invariant recognition of only simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This provides our model with the ability to represent complex objects in gray-level images invariantly, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision, which extracts a set of primary features (edges) at each fixation, and a high-level subsystem consisting of 'what' (Sensory Memory) and 'where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. 
Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model shows the ability to recognize complex objects (such as faces) in gray-level images invariantly with respect to shift, rotation, and scale.

  13. Non-negative matrix factorization in texture feature for classification of dementia with MRI data

    NASA Astrophysics Data System (ADS)

    Sarwinda, D.; Bustamam, A.; Ardaneswari, G.

    2017-07-01

    This paper investigates the application of non-negative matrix factorization as a feature selection method to select features from the gray-level co-occurrence matrix. The proposed approach is used to classify dementia using MRI data. In this study, texture analysis using the gray-level co-occurrence matrix is performed for feature extraction. In the feature extraction process on MRI data, we obtained seven features from the gray-level co-occurrence matrix. Non-negative matrix factorization then selected the three most influential of the features produced by feature extraction. A Naïve Bayes classifier is adapted to classify dementia, i.e. Alzheimer's disease, Mild Cognitive Impairment (MCI) and normal control. The experimental results show that non-negative matrix factorization as a feature selection method is able to achieve an accuracy of 96.4% for classification of Alzheimer's disease versus normal control. The proposed method is also compared with other feature selection methods, i.e. Principal Component Analysis (PCA).
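
    One way NMF can act as a feature selector, sketched below, is to factor the non-negative feature matrix X ≈ WH and rank the original columns (here, seven hypothetical GLCM texture features) by their total weight in H. The feature values, names, the three-component factorization, and the weight-sum ranking rule are all assumptions of this sketch, not necessarily the paper's exact procedure.

```python
# NMF-based ranking of GLCM-style texture features (synthetic values).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
names = ["contrast", "dissimilarity", "homogeneity", "energy",
         "correlation", "ASM", "entropy"]
X = rng.random((50, 7))                      # 50 scans x 7 GLCM features
X[:, 0] *= 5.0                               # make some features dominate
X[:, 4] *= 3.0

nmf = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=500)
Wm = nmf.fit_transform(X)                    # X ~ Wm @ H, all non-negative
H = nmf.components_                          # (3 components x 7 features)

influence = H.sum(axis=0)                    # column weight across components
top3 = [names[i] for i in np.argsort(influence)[::-1][:3]]
print("selected features:", top3)
```

    Features that carry most of the matrix's structure receive the largest weights in H, so the top-ranked columns are the ones kept for the classifier.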

  14. Assessment of dual life stage antiplasmodial activity of british seaweeds.

    PubMed

    Spavieri, Jasmine; Allmendinger, Andrea; Kaiser, Marcel; Itoe, Maurice Ayamba; Blunden, Gerald; Mota, Maria M; Tasdemir, Deniz

    2013-10-22

    Terrestrial plants have proven to be a prolific producer of clinically effective antimalarial drugs, but the antimalarial potential of seaweeds has been little explored. The main aim of this study was to assess the in vitro chemotherapeutical and prophylactic potential of the extracts of twenty-three seaweeds collected from the south coast of England against blood stage (BS) and liver stage (LS) Plasmodium parasites. The majority (14) of the extracts were active against BS of P. falciparum, with brown seaweeds Cystoseira tamariscifolia, C. baccata and the green seaweed Ulva lactuca being the most active (IC50s around 3 μg/mL). The extracts generally had high selectivity indices (>10). Eight seaweed extracts inhibited the growth of LS parasites of P. berghei without any obvious effect on the viability of the human hepatoma (Huh7) cells, and the highest potential was exerted by U. lactuca and red seaweeds Ceramium virgatum and Halopitys incurvus (IC50 values 14.9 to 28.8 μg/mL). The LS-active extracts inhibited one or more key enzymes of the malarial type-II fatty acid biosynthesis (FAS-II) pathway, a drug target specific for LS. Except for the red seaweed Halopitys incurvus, all LS-active extracts showed dual activity versus both malarial intracellular stage parasites. This is the first report of LS antiplasmodial activity and dual stage inhibitory potential of seaweeds.

  16. A Statistical Texture Feature for Building Collapse Information Extraction from SAR Images

    NASA Astrophysics Data System (ADS)

    Li, L.; Yang, H.; Chen, Q.; Liu, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) has become one of the most important means of extracting post-disaster collapsed building information, due to its versatility and almost all-weather, day-and-night working capability. Because the inherent statistical distribution of speckle in SAR images is usually not exploited when extracting collapsed building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution of SAR images is used to reflect the uniformity of the target and thereby extract collapsed buildings. This feature not only takes the statistical distribution of SAR images into account, providing a more accurate description of object texture, but is also applicable to extracting collapsed building information from single-, dual- or full-polarization SAR data. RADARSAT-2 data of the Yushu earthquake acquired on April 21, 2010 are used to present and analyze the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is analysed, which provides decision support for data selection in collapsed building information extraction.

  17. A Study of the Effect of the Front-End Styling of Sport Utility Vehicles on Pedestrian Head Injuries

    PubMed Central

    Qin, Qin; Chen, Zheng; Bai, Zhonghao; Cao, Libo

    2018-01-01

    Background: The number of sport utility vehicles (SUVs) on the Chinese market is continuously increasing. It is necessary to investigate the relationships between the front-end styling features of SUVs and head injuries at the styling design stage in order to improve pedestrian protection performance and product development efficiency. Methods: Styling feature parameters were extracted from the SUV side contour line, and simplified finite element models were established based on the 78 SUV side contour lines. Pedestrian headform impact simulations were performed and validated, and the head injury criterion over 15 ms (HIC15) at four wrap-around distances was obtained. A multiple linear regression analysis was employed to describe the relationships between the styling feature parameters and the HIC15 at each impact point. Results: The relationships between the selected styling features and the HIC15 showed reasonable correlations, and the regression models and the selected independent variables were statistically significant. Conclusions: The regression equations obtained by multiple linear regression can be used to assess how well an SUV's styling protects pedestrians' heads and provide styling designers with technical guidance regarding their artistic creations.
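
    The regression step can be sketched as below: HIC15 at an impact point is modeled as a linear function of front-end styling parameters. The feature names (hood angle, bumper lead, hood height), their ranges, and the assumed ground-truth coefficients are hypothetical stand-ins for the study's 78-vehicle data, used only to show the fitting and recovery of coefficients.

```python
# Multiple linear regression of HIC15 on hypothetical styling parameters.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 78
hood_angle = rng.uniform(5, 25, n)            # hypothetical styling features
bumper_lead = rng.uniform(50, 150, n)
hood_height = rng.uniform(700, 1000, n)
X = np.column_stack([hood_angle, bumper_lead, hood_height])

# assumed ground-truth relation + noise, for illustration only
hic15 = 200 + 12 * hood_angle + 0.8 * hood_height + rng.normal(0, 40, n)

reg = LinearRegression().fit(X, hic15)
print("R^2:", round(reg.score(X, hic15), 2))
print("coefficients:", np.round(reg.coef_, 2))
```

    The fitted coefficients indicate how much each styling parameter drives the predicted head injury, which is exactly the guidance the regression equations give designers.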

  18. A comparison of a new centrifuge sugar flotation technique with the agar method for the extraction of immature Culicoides (Diptera: Ceratopogonidae) life stages from salt marsh soils.

    USDA-ARS?s Scientific Manuscript database

    Two sampling techniques, agar extraction (AE) and centrifuge sugar flotation extraction (CSFE) were compared to determine their relative efficacy to recover immature stages of Culicoides spp from salt marsh substrates. Three types of samples (seeded with known numbers of larvae, homogenized field s...

  19. Novel Features for Brain-Computer Interfaces

    PubMed Central

    Woon, W. L.; Cichocki, A.

    2007-01-01

While conventional approaches to BCI feature extraction are based on the power spectrum, we have tried using nonlinear features for classifying BCI data. In this paper, we report our test results and findings, which indicate that the proposed method is a potentially useful addition to current feature extraction techniques. PMID:18364991

  20. SU-F-R-45: The Prognostic Value of Radiotherapy Based On the Changes of Texture Features Between Pre-Treatment and Post-Treatment FDG PET Image for NSCLC Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, C; Yin, Y

Purpose: The purpose of this research is to investigate which texture features extracted from FDG-PET images by the gray-level co-occurrence matrix (GLCM) have a higher prognostic value than the others. Methods: 21 non-small cell lung cancer (NSCLC) patients were enrolled in the study. Patients underwent 18F-FDG PET/CT scans both pre-treatment and post-treatment. Firstly, the tumors were extracted by our in-house developed software. Secondly, the clinical features, including the maximum SUV and tumor volume, were extracted with MIM Vista software, and the texture features, including angular second moment, contrast, inverse difference moment, entropy and correlation, were extracted using MATLAB. The differences were calculated by subtracting the pre-treatment features from the post-treatment features. Finally, SPSS software was used to obtain the Pearson correlation coefficients and Spearman rank correlation coefficients between the change ratios of the texture features and the change ratios of the clinical features. Results: The Pearson and Spearman rank correlation coefficients between contrast and maximum SUV were 0.785 and 0.709. The Pearson and Spearman values between inverse difference moment and tumor volume were 0.953 and 0.942. Conclusion: This preliminary study showed that the relationships between different texture features and the same clinical feature differ. The prognostic value of contrast and inverse difference moment was found to be higher than that of the other three GLCM textures.
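Two of the GLCM features named above, contrast and inverse difference moment, can be illustrated with a minimal pure-NumPy sketch; the study itself used MATLAB, and the pixel offset and gray-level quantization below are illustrative assumptions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy).
    `img` holds integer gray levels in [0, levels)."""
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dy, c + dx
            if 0 <= rr < rows and 0 <= cc < cols:
                P[img[r, c], img[rr, cc]] += 1
    return P / P.sum()

def contrast(P):
    """GLCM contrast: sum of P(i, j) * (i - j)^2."""
    i, j = np.indices(P.shape)
    return (P * (i - j) ** 2).sum()

def inverse_difference_moment(P):
    """GLCM inverse difference moment: sum of P(i, j) / (1 + (i - j)^2)."""
    i, j = np.indices(P.shape)
    return (P / (1.0 + (i - j) ** 2)).sum()
```

In practice several offsets (angles and distances) are computed and the resulting feature values averaged before the correlation analysis.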

  1. [Image Feature Extraction and Discriminant Analysis of Xinjiang Uygur Medicine Based on Color Histogram].

    PubMed

    Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat

    2015-06-01

Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is one of the Chinese traditional medicines and is receiving growing attention from researchers, but large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted the image color histogram features of the herbal and zooid medicines of Xinjiang Uygur. First, we performed preprocessing, including image color enhancement, size normalization and color space transformation. Then we extracted the color histogram features and analyzed them with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy for Uygur medicine image classification was obtained by using color histogram features. This study may be of help for content-based medical image retrieval of Xinjiang Uygur medicine.
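A minimal sketch of the color histogram feature described above, assuming per-channel histograms that are normalized and concatenated (the paper's exact bin count and color space are not specified):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenated, normalized per-channel histograms of an (H, W, 3) uint8 image.
    A minimal stand-in for the paper's color histogram feature."""
    feats = []
    for ch in range(img.shape[2]):
        h, _ = np.histogram(img[:, :, ch], bins=bins, range=(0, 256))
        feats.append(h / h.sum())  # normalize so each channel's histogram sums to 1
    return np.concatenate(feats)
```

The resulting fixed-length vector (3 × bins values) is what a Bayes discriminant classifier would consume.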

  2. Research of facial feature extraction based on MMC

    NASA Astrophysics Data System (ADS)

    Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun

    2017-07-01

Based on the maximum margin criterion (MMC), a new algorithm of statistically uncorrelated optimal discriminant vectors and a new algorithm of orthogonal optimal discriminant vectors for feature extraction were proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and at improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new statistically uncorrelated maximum margin criterion (SUMMC) feature extraction method is better in terms of recognition rate and stability. Besides, the relations between the maximum margin criterion and the Fisher criterion for feature extraction were revealed.
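The core MMC projection can be sketched as the leading eigenvectors of Sb − Sw (between-class minus within-class scatter); this is the standard criterion only, not the paper's statistically uncorrelated or orthogonal variants.

```python
import numpy as np

def mmc_directions(X, y, n_components=1):
    """Maximum margin criterion: directions W maximizing tr(W^T (Sb - Sw) W),
    found as the top eigenvectors of the symmetric matrix Sb - Sw."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
        Sw += (Xc - mc).T @ (Xc - mc)
    vals, vecs = np.linalg.eigh(Sb - Sw)
    order = np.argsort(vals)[::-1]          # largest eigenvalues first
    return vecs[:, order[:n_components]]
```

Unlike Fisher's criterion, no inversion of Sw is needed, which is why MMC remains well defined when Sw is singular.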

  3. White blood cells identification system based on convolutional deep neural learning networks.

    PubMed

    Shahin, A I; Guo, Yanhui; Amin, K M; Sharawi, Amr A

    2017-11-16

White blood cells (WBCs) differential counting yields valuable information about human health and disease. Currently developed automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBCs identification systems can be increased. Classifying small limited datasets through deep learning systems is a major challenge and should be investigated. In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, a limited balanced WBCs dataset classification is performed through the WBCsNet as a pre-trained network. During our experiments, three different public WBCs datasets (2551 images) were used, containing 5 healthy WBCs types. The overall system accuracy achieved by the proposed WBCsNet is 96.1%, which is higher than that of the different transfer learning approaches or even the previous traditional identification systems. We also present feature visualizations of the WBCsNet activations, which reflect a higher response than those of the pre-trained networks. In conclusion, a novel WBCs identification system based on deep learning is proposed, and the high-performance WBCsNet can be employed as a pre-trained network. Copyright © 2017. Published by Elsevier B.V.
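The deep activation features idea, running a frozen network and taking an intermediate layer's activations as input to a traditional classifier, can be sketched with a toy dense network; the random weights below are stand-ins for the real pre-trained CNN weights the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained" weights; in the paper these come from real pre-trained CNNs.
W1 = rng.standard_normal((64, 32))   # input -> hidden layer
W2 = rng.standard_normal((32, 5))    # hidden -> 5 WBC classes (unused for features)

def deep_activation_features(x):
    """Run the frozen network up to the penultimate layer and return its
    ReLU activations as a feature vector for a traditional classifier."""
    return np.maximum(x @ W1, 0.0)
```

The extracted vectors would then feed a conventional classifier (e.g. an SVM), which is the "traditional identification system" half of the transfer learning pipeline.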

  4. Zone-size nonuniformity of 18F-FDG PET regional textural features predicts survival in patients with oropharyngeal cancer.

    PubMed

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Lee, Li-yu; Chang, Joseph Tung-Chieh; Tsan, Din-Li; Ng, Shu-Hang; Wang, Hung-Ming; Liao, Chun-Ta; Yang, Lan-Yan; Hsu, Ching-Han; Yen, Tzu-Chen

    2015-03-01

    The question as to whether the regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42 % of the maximum SUV (SUVmax 42 %) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment (18)F-FDG PET/CT images using the grey-level run length encoding method and grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUVmax 42 % and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone. ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification.

  5. Pattern recognition applied to seismic signals of Llaima volcano (Chile): An evaluation of station-dependent classifiers

    NASA Astrophysics Data System (ADS)

    Curilem, Millaray; Huenupan, Fernando; Beltrán, Daniel; San Martin, Cesar; Fuentealba, Gustavo; Franco, Luis; Cardona, Carlos; Acuña, Gonzalo; Chacón, Max; Khan, M. Salman; Becerra Yoma, Nestor

    2016-04-01

Automatic pattern recognition applied to seismic signals from volcanoes may assist seismic monitoring by reducing the workload of analysts, allowing them to focus on more challenging activities, such as producing reports, implementing models, and understanding volcanic behaviour. In a previous work, we proposed a structure for automatic classification of seismic events in Llaima volcano, one of the most active volcanoes in the Southern Andes, located in the Araucanía Region of Chile. A database of events taken from three monitoring stations on the volcano was used to create a classification structure, independent of which station provided the signal. The database included three types of volcanic events: tremor, long period, and volcano-tectonic, plus a contrast group containing other types of seismic signals. In the present work, we maintain the same classification scheme, but we consider the station information separately in order to assess whether the complementary information provided by different stations improves the performance of the classifier in recognising seismic patterns. This paper proposes two strategies for combining the information from the stations: i) combining the features extracted from the signals from each station and ii) combining the classifiers of each station. In the first case, the features extracted from the signals from each station are combined to form the input for a single classification structure. In the second, a decision stage combines the results of the classifiers for each station to give a unique output. The results confirm that the station-dependent strategies that combine the features and the classifiers from several stations improve the classification performance, and that the combination of the features provides the best performance. The results show an average improvement of 9% in the classification accuracy when compared with the station-independent method.
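The two fusion strategies can be sketched as feature concatenation versus a majority-vote decision stage; the vote is an assumed fusion rule, as the abstract does not specify the decision-stage combiner.

```python
import numpy as np
from collections import Counter

def combine_features(per_station_feats):
    """Strategy i): concatenate per-station feature vectors into one input
    for a single classification structure."""
    return np.concatenate(per_station_feats)

def combine_decisions(per_station_labels):
    """Strategy ii): decision stage fusing per-station classifier outputs,
    here by simple majority vote (assumed rule)."""
    return Counter(per_station_labels).most_common(1)[0][0]
```

With three stations, strategy i) triples the classifier's input dimension, while strategy ii) keeps three small classifiers and fuses only their outputs.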

  6. Deep neural networks for automatic detection of osteoporotic vertebral fractures on CT scans.

    PubMed

    Tomita, Naofumi; Cheung, Yvonne Y; Hassanpour, Saeed

    2018-07-01

    Osteoporotic vertebral fractures (OVFs) are prevalent in older adults and are associated with substantial personal suffering and socio-economic burden. Early diagnosis and treatment of OVFs are critical to prevent further fractures and morbidity. However, OVFs are often under-diagnosed and under-reported in computed tomography (CT) exams as they can be asymptomatic at an early stage. In this paper, we present and evaluate an automatic system that can detect incidental OVFs in chest, abdomen, and pelvis CT examinations at the level of practicing radiologists. Our OVF detection system leverages a deep convolutional neural network (CNN) to extract radiological features from each slice in a CT scan. These extracted features are processed through a feature aggregation module to make the final diagnosis for the full CT scan. In this work, we explored different methods for this feature aggregation, including the use of a long short-term memory (LSTM) network. We trained and evaluated our system on 1432 CT scans, comprised of 10,546 two-dimensional (2D) images in sagittal view. Our system achieved an accuracy of 89.2% and an F1 score of 90.8% based on our evaluation on a held-out test set of 129 CT scans, which were established as reference standards through standard semiquantitative and quantitative methods. The results of our system matched the performance of practicing radiologists on this test set in real-world clinical circumstances. We expect the proposed system will assist and improve OVF diagnosis in clinical settings by pre-screening routine CT examinations and flagging suspicious cases prior to review by radiologists. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Automatic crack detection method for loaded coal in vibration failure process

    PubMed Central

    Li, Chengwu

    2017-01-01

In the coal mining process, the destabilization of loaded coal mass is a prerequisite for coal and rock dynamic disaster, and surface cracks of the coal and rock mass are important indicators, reflecting the current state of the coal body. The detection of surface cracks in the coal body plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal during the vibration failure process is proposed based on the characteristics of the surface cracks of coal and a support vector machine (SVM). A large number of crack images were obtained by establishing a vibration-induced failure test system with an industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the cracks; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy over 95% was trained by inputting the labelled sample images into the SVM classifier. The experimental results show that the proposed algorithm has a higher accuracy than the conventional algorithm and can effectively identify cracks on the surface of the coal and rock mass automatically. PMID:28973032

  8. Automatic crack detection method for loaded coal in vibration failure process.

    PubMed

    Li, Chengwu; Ai, Dihao

    2017-01-01

In the coal mining process, the destabilization of loaded coal mass is a prerequisite for coal and rock dynamic disaster, and surface cracks of the coal and rock mass are important indicators, reflecting the current state of the coal body. The detection of surface cracks in the coal body plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal during the vibration failure process is proposed based on the characteristics of the surface cracks of coal and a support vector machine (SVM). A large number of crack images were obtained by establishing a vibration-induced failure test system with an industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the cracks; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy over 95% was trained by inputting the labelled sample images into the SVM classifier. The experimental results show that the proposed algorithm has a higher accuracy than the conventional algorithm and can effectively identify cracks on the surface of the coal and rock mass automatically.
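The hysteresis threshold step mentioned in both records above can be sketched as a flood fill that keeps weak pixels only when they connect to a strong pixel; the 4-connectivity and the threshold values are illustrative assumptions.

```python
import numpy as np
from collections import deque

def hysteresis_threshold(img, low, high):
    """Keep pixels above `low` that are 4-connected to some pixel above `high`."""
    strong = img > high
    weak = img > low
    out = np.zeros_like(strong)
    q = deque(zip(*np.nonzero(strong)))      # seed the fill with strong pixels
    while q:
        r, c = q.popleft()
        if out[r, c] or not weak[r, c]:
            continue
        out[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and weak[rr, cc] and not out[rr, cc]):
                q.append((rr, cc))
    return out
```

Isolated bright noise below the high threshold is discarded, while faint crack segments survive as long as they touch a confidently bright crack pixel.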

  9. Application of outlier analysis for baseline-free damage diagnosis

    NASA Astrophysics Data System (ADS)

    Kim, Seung Dae; In, Chi Won; Cronin, Kelly E.; Sohn, Hoon; Harries, Kent

    2006-03-01

As carbon fiber-reinforced polymer (CFRP) laminates have been widely accepted as valuable materials for retrofitting civil infrastructure systems, an appropriate assessment of bonding conditions between host structures and CFRP laminates becomes a critical issue to guarantee the performance of CFRP strengthened structures. This study attempts to develop a continuous performance monitoring system for CFRP strengthened structures by autonomously inspecting the bonding conditions between the CFRP layers and the host structure. The uniqueness of this study is to develop a new concept and theoretical framework of nondestructive testing (NDT), in which debonding is detected "without using past baseline data." The proposed baseline-free damage diagnosis is achieved in two stages. In the first stage, features sensitive to debonding of the CFRP layers but insensitive to loading conditions are extracted based on a concept referred to as a time reversal process. This time reversal process allows extracting damage-sensitive features without direct comparison with past baseline data. Then, a statistical damage classifier is developed in the second stage to make a decision regarding the bonding condition of the CFRP layers. The threshold necessary for decision making is adaptively determined without predetermined threshold values. Monotonic and fatigue load tests of full-scale CFRP strengthened RC beams are conducted to demonstrate the potential of the proposed reference-free debonding monitoring system.

  10. Joint volumetric extraction and enhancement of vasculature from low-SNR 3-D fluorescence microscopy images.

    PubMed

    Almasi, Sepideh; Ben-Zvi, Ayal; Lacoste, Baptiste; Gu, Chenghua; Miller, Eric L; Xu, Xiaoyin

    2017-03-01

To simultaneously overcome the challenges imposed by the nature of optical imaging, characterized by a range of artifacts including space-varying signal-to-noise ratio (SNR), scattered light, and non-uniform illumination, we developed a novel method that segments the 3-D vasculature directly from the original fluorescence microscopy images, eliminating the need for the pre- and post-processing steps, such as noise removal and segmentation refinement, used with the majority of segmentation techniques. Our method comprises two stages: initialization, and constrained recovery and enhancement. The initialization approach is fully automated using features derived from bi-scale statistical measures and produces seed points robust to non-uniform illumination, low SNR, and local structural variations. This algorithm achieves the goal of segmentation via the design of an iterative approach that extracts the structure through voting of feature vectors formed by distance, local intensity gradient, and median measures. Qualitative and quantitative analysis of the experimental results obtained from synthetic and real data prove the efficacy of this method in comparison to the state-of-the-art enhancing-segmenting methods. The algorithmic simplicity, freedom from requiring a priori probabilistic information about the noise, and structural definition give this algorithm a wide potential range of applications where, for example, structural complexity significantly complicates the segmentation problem.

  11. Computer vision and machine learning for robust phenotyping in genome-wide studies

    PubMed Central

    Zhang, Jiaoping; Naik, Hsiang Sing; Assefa, Teshale; Sarkar, Soumik; Reddy, R. V. Chowda; Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh K.

    2017-01-01

Traditional evaluation of crop biotic and abiotic stresses is time-consuming and labor-intensive, limiting the ability to dissect the genetic basis of quantitative traits. A machine learning (ML)-enabled image-phenotyping pipeline for the genetic studies of the abiotic stress iron deficiency chlorosis (IDC) of soybean is reported. IDC classification and severity for an association panel of 461 diverse plant-introduction accessions was evaluated using an end-to-end phenotyping workflow. The workflow consisted of a multi-stage procedure including: (1) optimized protocols for consistent image capture across plant canopies, (2) canopy identification and registration from cluttered backgrounds, (3) extraction of domain-expert-informed features from the processed images to accurately represent IDC expression, and (4) supervised ML-based classifiers that linked the automatically extracted features with expert-rating-equivalent IDC scores. ML-generated phenotypic data were subsequently utilized for the genome-wide association study and genomic prediction. The results illustrate the reliability and advantage of the ML-enabled image-phenotyping pipeline by identifying a previously reported locus and a novel locus harboring a gene homolog involved in iron acquisition. This study demonstrates a promising path for integrating the phenotyping pipeline into genomic prediction, and provides a systematic framework enabling robust and quicker phenotyping through ground-based systems. PMID:28272456

  12. Capability of geometric features to classify ships in SAR imagery

    NASA Astrophysics Data System (ADS)

    Lang, Haitao; Wu, Siwen; Lai, Quan; Ma, Li

    2016-10-01

Ship classification in synthetic aperture radar (SAR) imagery has become a new hotspot in the remote sensing community for its valuable potential in many maritime applications. Several kinds of ship features, such as geometric features, polarimetric features, and scattering features, have been widely applied to ship classification tasks. Compared with polarimetric and scattering features, which are subject to SAR parameters (e.g., sensor type, incidence angle, polarization) and environment factors (e.g., sea state, wind, wave, current), geometric features are relatively independent of SAR and environment factors and easy to extract stably from SAR imagery. In this paper, the capability of geometric features to classify ships in SAR imagery of various resolutions has been investigated. Firstly, the relationship between the geometric feature extraction accuracy and the SAR imagery resolution is analyzed. It shows that the minimum bounding rectangle (MBR) of a ship can be extracted exactly, in terms of absolute precision, by the proposed automatic ship-sea segmentation method. Next, six simple but effective geometric features are extracted to build a ship representation for the subsequent classification task. These six geometric features are length (f1), width (f2), area (f3), perimeter (f4), elongatedness (f5) and compactness (f6). Among them, the two basic features, length (f1) and width (f2), are directly extracted based on the MBR of the ship, while the other four are derived from those two basic features. The capability of the geometric features to classify ships is validated on two data sets with different image resolutions. The results show that the performance of ship classification by geometric features alone is close to that of state-of-the-art methods, which are obtained with a combination of multiple kinds of features, including scattering and geometric features, after a complex feature selection process.
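Deriving the four remaining features from the two basic MBR measurements might look as follows; the exact definitions of elongatedness and compactness are not given in the abstract, so common forms are assumed here.

```python
import math

def mbr_geometric_features(length, width):
    """Derive the six geometric features from the two basic MBR measurements.
    Exact definitions of f3-f6 are assumed, not taken from the paper."""
    area = length * width                                   # f3: MBR area
    perimeter = 2.0 * (length + width)                      # f4: MBR perimeter
    elongatedness = length / width                          # f5: aspect ratio
    compactness = perimeter ** 2 / (4.0 * math.pi * area)   # f6: 1.0 for a circle
    return {"f1": length, "f2": width, "f3": area,
            "f4": perimeter, "f5": elongatedness, "f6": compactness}
```

A long, narrow cargo vessel yields high elongatedness and compactness values, which is what lets these simple shape descriptors separate ship classes.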

  13. Computer ranking of the sequence of appearance of 100 features of the brain and related structures in staged human embryos during the first 5 weeks of development.

    PubMed

    O'Rahilly, R; Müller, F; Hutchins, G M; Moore, G W

    1984-11-01

    The sequence of events in the development of the brain in staged human embryos was investigated in much greater detail than in previous studies by listing 100 features in 165 embryos of the first 5 weeks. Using a computerized bubble-sort algorithm, individual embryos were ranked in ascending order of the features present. This procedure made feasible an appreciation of the slight variation found in the developmental features. The vast majority of features appeared during either one or two stages (about 2 or 3 days). In general, the soundness of the Carnegie system of embryonic staging was amply confirmed. The rhombencephalon was found to show increasing complexity around stage 13, and the postoptic portion of the diencephalon underwent considerable differentiation by stage 15. The need for similar investigations of other systems of the body is emphasized, and the importance of such studies in assessing the timing of congenital malformations and in clarifying syndromic clusters is suggested.
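The computerized bubble-sort ranking can be sketched as follows, with the comparison rule simplified to the count of features present in each embryo; the original study's ordering rule over the 100 listed features is richer than this stand-in.

```python
def bubble_sort_rank(embryos, key=len):
    """Rank embryos in ascending order of the number of features present.
    Each embryo is represented as a set of observed feature identifiers."""
    order = list(embryos)
    n = len(order)
    for i in range(n):                       # classic bubble sort: repeatedly
        for j in range(n - 1 - i):           # swap adjacent out-of-order pairs
            if key(order[j]) > key(order[j + 1]):
                order[j], order[j + 1] = order[j + 1], order[j]
    return order
```

Sorting embryos this way surfaces the developmental sequence: features that appear early occur in nearly all embryos, while late features cluster in the highest-ranked ones.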

  14. Question analysis for Indonesian comparative question

    NASA Astrophysics Data System (ADS)

    Saelan, A.; Purwarianti, A.; Widyantoro, D. H.

    2017-01-01

Information seeking is one of today's human needs. Comparing things using a search engine surely takes more time than searching for a single thing. In this paper, we analyzed comparative questions for a comparative question answering system. A comparative question is a question that compares two or more entities. We grouped comparative questions into 5 types: selection between mentioned entities, selection between unmentioned entities, selection between any entities, comparison, and yes-or-no questions. Then we extracted 4 types of information from comparative questions: entity, aspect, comparison, and constraint. We built classifiers for the classification task and the information extraction task. The feature used for the classification task is bag of words, whereas for information extraction we used the lexical form of the word, the lexical forms of the two previous and following words, and the previous label as features. We tried 2 scenarios: classification first and extraction first. In the classification-first scenario, we used the classification result as a feature for extraction; conversely, in the extraction-first scenario, we used the extraction results as features for classification. We found that the result is better if we perform extraction before classification. For the extraction task, SMO gave the best result (88.78%), while for classification it is better to use naïve Bayes (82.35%).

  15. Aqueous extract of lavender (Lavandula angustifolia) improves the spatial performance of a rat model of Alzheimer's disease.

    PubMed

    Kashani, Masoud Soheili; Tavirani, Mostafa Rezaei; Talaei, Sayyed Alireza; Salami, Mahmoud

    2011-04-01

Alzheimer's disease (AD) is one of the most important neurodegenerative disorders. It is characterized by dementia, including deficits in learning and memory. The present study aimed to evaluate the effects of an aqueous extract of lavender (Lavandula angustifolia) on the spatial performance of AD rats. Male Wistar rats were first divided into control and AD groups. A rat model of AD was established by intracerebroventricular injection of 10 μg Aβ1-42 20 d prior to administration of the lavender extract. Rats in both groups were then introduced to 2 stages of task learning (with an interval of 20 d) in the Morris water maze, each followed by one probe test. After the first stage of spatial learning, control and AD animals received different doses (50, 100 and 200 mg/kg) of the lavender extract. In the first stage of the experiment, the latency to locate the hidden platform in the AD group was significantly higher than that in the control group. However, in the second stage of the experiment, control and AD rats that received distilled water (vehicle) showed similar performance, indicating that the maze navigation itself could improve the spatial learning of AD animals. Besides, in the second stage of the experiment, control and AD rats that received lavender extract at different doses (50, 100, and 200 mg/kg) spent less time locating the platform (except for the AD rats with 50 mg/kg extract treatment), as compared with their counterparts with vehicle treatment, respectively. In addition, the lavender extract significantly improved the performance of control and AD rats in the probe test only at the dose of 200 mg/kg, as compared with their counterparts with vehicle treatment. The lavender extract can effectively reverse spatial learning deficits in AD rats.

  16. Kinetic modeling of ultrasound-assisted extraction of phenolic compounds from grape marc: influence of acoustic energy density and temperature.

    PubMed

    Tao, Yang; Zhang, Zhihang; Sun, Da-Wen

    2014-07-01

The effects of acoustic energy density (6.8-47.4 W/L) and temperature (20-50 °C) on the extraction yields of total phenolics and tartaric esters during ultrasound-assisted extraction from grape marc were investigated in this study. The ultrasound treatment was performed in a 25-kHz ultrasound bath system, and 50% aqueous ethanol was used as the solvent. The initial extraction rate and final extraction yield increased with increasing acoustic energy density and temperature. The two-site kinetic model was used to simulate the kinetics of the extraction process, and the diffusion model based on Fick's second law was employed to determine the effective diffusion coefficient of phenolics in grape marc. Both models fitted the data satisfactorily. The diffusion process was divided into one fast stage and one slow stage, and the diffusion coefficients in both stages were calculated. Within the current experimental range, the diffusion coefficients of total phenolics and tartaric esters for both diffusion stages increased with acoustic energy density. Meanwhile, the rise of temperature also increased the diffusion coefficients of phenolics, except the diffusion coefficient of total phenolics in the fast stage, which was highest at 40 °C. Moreover, an empirical equation was suggested to correlate the effective diffusion coefficient of phenolics in grape marc with acoustic energy density and temperature. In addition, a performance comparison of ultrasound-assisted extraction and conventional methods demonstrates that ultrasound is an effective and promising technology for extracting bioactive substances from grape marc. Copyright © 2014 Elsevier B.V. All rights reserved.
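The two-site kinetic model typically splits the extraction yield into a fast "washing" fraction and a slow "diffusion" fraction; a sketch under that common formulation (the paper's exact parameterization may differ):

```python
import math

def two_site_yield(t, y_inf, f, k1, k2):
    """Two-site kinetic model of extraction yield over time t:
    a fast 'washing' fraction f with rate k1 plus a slow 'diffusion'
    fraction (1 - f) with rate k2; y_inf is the equilibrium yield."""
    return y_inf * (f * (1.0 - math.exp(-k1 * t))
                    + (1.0 - f) * (1.0 - math.exp(-k2 * t)))
```

Fitting (y_inf, f, k1, k2) to measured yield-versus-time data (e.g. by nonlinear least squares) gives the two stage-wise rate constants reported in such studies.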

  17. "Communicate to vaccinate": the development of a taxonomy of communication interventions to improve routine childhood vaccination.

    PubMed

    Willis, Natalie; Hill, Sophie; Kaufman, Jessica; Lewin, Simon; Kis-Rigo, John; De Castro Freire, Sara Bensaude; Bosch-Capblanch, Xavier; Glenton, Claire; Lin, Vivian; Robinson, Priscilla; Wiysonge, Charles S

    2013-05-11

    Vaccination is a cost-effective public health measure and is central to the Millennium Development Goal of reducing child mortality. However, childhood vaccination coverage remains sub-optimal in many settings. While communication is a key feature of vaccination programmes, we are not aware of any comprehensive approach to organising the broad range of communication interventions that can be delivered to parents and communities to improve vaccination coverage. Developing a classification system (taxonomy) organised into conceptually similar categories will aid in: understanding the relationships between different types of communication interventions; facilitating conceptual mapping of these interventions; clarifying the key purposes and features of interventions to aid implementation and evaluation; and identifying areas where evidence is strong and where there are gaps. This paper reports on the development of the 'Communicate to vaccinate' taxonomy. The taxonomy was developed in two stages. Stage 1 included: 1) forming an advisory group; 2) searching for descriptions of interventions in trials (CENTRAL database) and general health literature (Medline); 3) developing a sampling strategy; 4) screening the search results; 5) developing a data extraction form; and 6) extracting intervention data. Stage 2 included: 1) grouping the interventions according to purpose; 2) holding deliberative forums in English and French with key vaccination stakeholders to gather feedback; 3) conducting a targeted search of grey literature to supplement the taxonomy; 4) finalising the taxonomy based on the input provided. The taxonomy includes seven main categories of communication interventions: inform or educate, remind or recall, teach skills, provide support, facilitate decision making, enable communication and enhance community ownership. 
These categories are broken down into 43 intervention types across three target groups: parents or soon-to-be-parents; communities, community members or volunteers; and health care providers. Our taxonomy illuminates and organises this field and identifies the range of available communication interventions to increase routine childhood vaccination uptake. We have utilised a variety of data sources, capturing information from rigorous evaluations such as randomised trials as well as experiences and knowledge of practitioners and vaccination stakeholders. The taxonomy reflects current public health practice and can guide the future development of vaccination programmes.

  18. “Communicate to vaccinate”: the development of a taxonomy of communication interventions to improve routine childhood vaccination

    PubMed Central

    2013-01-01

    Background Vaccination is a cost-effective public health measure and is central to the Millennium Development Goal of reducing child mortality. However, childhood vaccination coverage remains sub-optimal in many settings. While communication is a key feature of vaccination programmes, we are not aware of any comprehensive approach to organising the broad range of communication interventions that can be delivered to parents and communities to improve vaccination coverage. Developing a classification system (taxonomy) organised into conceptually similar categories will aid in: understanding the relationships between different types of communication interventions; facilitating conceptual mapping of these interventions; clarifying the key purposes and features of interventions to aid implementation and evaluation; and identifying areas where evidence is strong and where there are gaps. This paper reports on the development of the ‘Communicate to vaccinate’ taxonomy. Methods The taxonomy was developed in two stages. Stage 1 included: 1) forming an advisory group; 2) searching for descriptions of interventions in trials (CENTRAL database) and general health literature (Medline); 3) developing a sampling strategy; 4) screening the search results; 5) developing a data extraction form; and 6) extracting intervention data. Stage 2 included: 1) grouping the interventions according to purpose; 2) holding deliberative forums in English and French with key vaccination stakeholders to gather feedback; 3) conducting a targeted search of grey literature to supplement the taxonomy; 4) finalising the taxonomy based on the input provided. Results The taxonomy includes seven main categories of communication interventions: inform or educate, remind or recall, teach skills, provide support, facilitate decision making, enable communication and enhance community ownership. 
These categories are broken down into 43 intervention types across three target groups: parents or soon-to-be-parents; communities, community members or volunteers; and health care providers. Conclusions Our taxonomy illuminates and organises this field and identifies the range of available communication interventions to increase routine childhood vaccination uptake. We have utilised a variety of data sources, capturing information from rigorous evaluations such as randomised trials as well as experiences and knowledge of practitioners and vaccination stakeholders. The taxonomy reflects current public health practice and can guide the future development of vaccination programmes. PMID:23663327

  19. Information based universal feature extraction

    NASA Astrophysics Data System (ADS)

    Amiri, Mohammad; Brause, Rüdiger

    2015-02-01

    In many real-world, image-based pattern recognition tasks, the extraction and usage of task-relevant features is the most crucial part of the diagnosis. In the standard approach, they mostly remain task-specific, although humans who perform such a task always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This gives a good start and a performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features which are valid in all three kinds of tasks.

  20. Sensor-based auto-focusing system using multi-scale feature extraction and phase correlation matching.

    PubMed

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-03-10

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto-focusing than existing systems.
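    The phase correlation matching of step (iv) can be sketched in a few lines. The example below is a hypothetical 1-D illustration using numpy's FFT, not the authors' implementation: the shift between the "left" and "right" signals is recovered from the normalized cross-power spectrum.

```python
import numpy as np

def phase_correlation_shift(left, right):
    """Estimate the integer shift between two 1-D signals via
    phase correlation (normalized cross-power spectrum)."""
    F1 = np.fft.fft(left)
    F2 = np.fft.fft(right)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.real(np.fft.ifft(cross))      # peak location encodes the shift
    shift = np.argmax(corr)
    if shift > len(left) // 2:              # wrap around for negative shifts
        shift -= len(left)
    return shift

# A toy pulse and a copy shifted by 3 samples
base = np.zeros(64)
base[20:28] = 1.0
shifted = np.roll(base, 3)
print(phase_correlation_shift(shifted, base))  # 3
```

    In a real phase-difference AF system the same peak search would be applied to the two feature images, and the estimated shift mapped to a lens position.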

  1. Sensor-Based Auto-Focusing System Using Multi-Scale Feature Extraction and Phase Correlation Matching

    PubMed Central

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-01-01

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto-focusing than existing systems. PMID:25763645

  2. A new automated spectral feature extraction method and its application in spectral classification and defective spectra recovery

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Guo, Ping; Luo, A.-Li

    2017-03-01

    Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way, without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior in overall performance, and its computational cost is significantly lower than that of other methods. The proposed method can be regarded as a valid new general-purpose feature extraction method for various tasks in spectral data analysis.
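    The idea of training a layer analytically, with no iterative optimization, can be illustrated with a toy extreme-learning-machine-style autoencoder layer: a random projection followed by a single least-squares solve. This is an assumption-laden sketch of the general technique, not the paper's network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_layer(X, n_hidden):
    """One analytically trained layer: a random projection with tanh
    nonlinearity, then least-squares output weights that reconstruct
    the input (an ELM-autoencoder-style layer). The transposed output
    weights are reused as the feature mapping."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    H = np.tanh(X @ W)                            # random hidden responses
    beta, *_ = np.linalg.lstsq(H, X, rcond=None)  # one analytic solve
    return np.tanh(X @ beta.T)                    # extracted features

X = rng.standard_normal((100, 16))  # 100 toy "spectra" with 16 bins each
feats = elm_layer(X, 8)
print(feats.shape)  # (100, 8)
```

    Stacking several such layers yields increasingly abstract features, with each layer trained in closed form rather than by backpropagation.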

  3. A Study for Texture Feature Extraction of High-Resolution Satellite Images Based on a Direction Measure and Gray Level Co-Occurrence Matrix Fusion Algorithm

    PubMed Central

    Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao

    2017-01-01

    To address the problem of image texture feature extraction, a direction measure statistic based on the directionality of image texture is constructed, and a new method of texture feature extraction, based on a fusion of the direction measure and the gray level co-occurrence matrix (GLCM), is proposed in this paper. This method applies the GLCM to extract the texture feature value of an image and integrates the weight factor introduced by the direction measure to obtain the final texture feature of the image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and gray level co-occurrence matrix fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition, and the classification accuracy based on this method was significantly improved. PMID:28640181
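    The GLCM at the core of this method counts how often pairs of gray levels co-occur at a fixed displacement; scalar texture features such as contrast are then read off the normalized matrix. A minimal sketch in plain numpy (one displacement, no direction weighting, values assumed already quantized to a few gray levels):

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Gray level co-occurrence matrix for one displacement (dx, dy),
    normalized to a joint probability table."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """GLCM contrast: expected squared gray-level difference."""
    i, j = np.indices(g.shape)
    return np.sum((i - j) ** 2 * g)

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, dx=1, dy=0, levels=4)   # horizontal neighbor pairs
print(round(contrast(P), 3))  # 0.583
```

    The paper computes such features per direction and then weights them by the direction measure before fusion; the weighting step is omitted here.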

  4. An approach to predict Sudden Cardiac Death (SCD) using time domain and bispectrum features from HRV signal.

    PubMed

    Houshyarifar, Vahid; Chehel Amirani, Mehdi

    2016-08-12

    In this paper we present a method to predict Sudden Cardiac Arrest (SCA) using higher order spectral (HOS) and linear (time-domain) features extracted from the heart rate variability (HRV) signal. Predicting the occurrence of SCA is important in order to reduce the probability of Sudden Cardiac Death (SCD). This work attempts prediction five minutes before SCA onset. The method consists of four steps: pre-processing, feature extraction, feature reduction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and the HRV signal is extracted. In the second step, bispectrum features of the HRV signal and time-domain features are obtained: six features are extracted from the bispectrum and two from the time domain. In the next step, these features are reduced to one feature by the linear discriminant analysis (LDA) technique. Finally, KNN and support vector machine-based classifiers are used to classify the HRV signals. We used two databases: the MIT/BIH Sudden Cardiac Death (SCD) Database and the PhysioBank Normal Sinus Rhythm (NSR) database. In this work we achieved prediction of SCD occurrence six minutes before the SCA with an accuracy over 91%.
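    The LDA reduction step (many features down to one discriminant value) can be sketched with Fisher's two-class discriminant. The data below are synthetic stand-ins for the paper's eight HRV features, not real HRV measurements:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """1-D Fisher discriminant direction for two classes:
    w = Sw^{-1} (m1 - m0), with Sw the within-class scatter."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = (np.cov(X0.T, bias=True) * len(X0)
          + np.cov(X1.T, bias=True) * len(X1))
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, (50, 8))   # toy "normal rhythm" feature vectors
X1 = rng.normal(1.5, 1.0, (50, 8))   # toy "pre-SCA" feature vectors
w = fisher_lda_direction(X0, X1)
z0, z1 = X0 @ w, X1 @ w              # one scalar feature per record
print(z0.mean() < z1.mean())  # True
```

    The resulting scalar z is what a KNN or SVM classifier would then threshold, as in the paper's final stage.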

  5. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  6. Hierarchical clustering of EMD based interest points for road sign detection

    NASA Astrophysics Data System (ADS)

    Khan, Jesmin; Bhuiyan, Sharif; Adhami, Reza

    2014-04-01

    This paper presents an automatic road traffic sign detection and recognition system based on hierarchical clustering of interest points and joint transform correlation. The proposed algorithm consists of the following three stages: interest point detection, clustering of those points, and similarity search. At the first stage, highly discriminative, rotation- and scale-invariant interest points are selected from the image edges based on the 1-D empirical mode decomposition (EMD). We propose a two-step unsupervised clustering technique, which is adaptive and based on two criteria. In this context, the detected points are initially clustered based on stable local features related to brightness and color, which are extracted using a Gabor filter. Then points belonging to each partition are reclustered depending on the dispersion of the points in the initial cluster, using a position feature. This two-step hierarchical clustering yields the candidate road signs, or regions of interest (ROIs). Finally, a fringe-adjusted joint transform correlation (JTC) technique is used for matching the unknown signs with the known reference road signs stored in the database. The presented framework provides a novel way to detect road signs in natural scenes, and the results demonstrate the efficacy of the proposed technique, which yields a very low false hit rate.

  7. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2017-02-01

    Limited awareness of the deaf and hard-of-hearing community in India widens the communication gap between that community and the hearing population. Sign languages have been developed for deaf and hard-of-hearing people to convey messages through distinct sign patterns. The scale invariant feature transform (SIFT) was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results report the time taken by each phase and the number of features extracted for 26 ISL gestures.

  8. Automatic Extraction of Planetary Image Features

    NASA Technical Reports Server (NTRS)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

    With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination. In this paper, we propose a new method for the extraction of Lunar features (which can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.

  9. Waveform fitting and geometry analysis for full-waveform lidar feature extraction

    NASA Astrophysics Data System (ADS)

    Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu

    2016-10-01

    This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
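    The DTW similarity feature described above (sum of absolute amplitude differences remaining after optimal time alignment) can be sketched with the classic dynamic-programming recurrence. The waveforms below are toy Gaussian pulses standing in for LiDAR returns, not real data:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping: minimal sum of absolute amplitude
    differences over all monotone alignments of a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 50)
w1 = np.exp(-((t - 0.4) / 0.05) ** 2)   # toy single-return waveform
w2 = np.exp(-((t - 0.5) / 0.05) ** 2)   # same pulse, time-shifted

# warping absorbs the shift, so the DTW residual is no larger
# than the unaligned L1 difference
print(dtw_distance(w1, w2) <= np.abs(w1 - w2).sum())  # True
```

    In the paper this residual serves as one feature among several (FWHM, amplitude, peak count) fed to the land-cover classifier.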

  10. Prostate cancer detection using machine learning techniques by employing combination of features extracting strategies.

    PubMed

    Hussain, Lal; Ahmed, Adeel; Saeed, Sharjil; Rathore, Saima; Awan, Imtiaz Ahmed; Shah, Saeed Arif; Majid, Abdul; Idris, Adnan; Awan, Anees Ahmed

    2018-02-06

    Prostate cancer is the second leading cause of cancer deaths among men. Early detection can effectively reduce the mortality rate caused by prostate cancer. The high, multi-resolution nature of prostate MRI requires proper diagnostic systems and tools. In the past, researchers developed computer-aided diagnosis (CAD) systems that help the radiologist to detect abnormalities. In this research paper, we have employed machine learning techniques, namely a Bayesian approach, support vector machine (SVM) kernels (polynomial, radial basis function (RBF) and Gaussian) and a decision tree, for detecting prostate cancer. Moreover, different feature extraction strategies are proposed to improve the detection performance. The feature extraction strategies are based on texture, morphological, scale invariant feature transform (SIFT), and elliptic Fourier descriptor (EFD) features. The performance was evaluated on single features as well as combinations of features using the machine learning classification techniques. Cross-validation (jack-knife k-fold) was performed and performance was evaluated in terms of the receiver operating characteristic (ROC) curve, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV) and false positive rate (FPR). With single feature extraction strategies, the SVM Gaussian kernel gives the highest accuracy of 98.34% with an AUC of 0.999; with combined feature extraction strategies, the SVM Gaussian kernel with texture + morphological and EFDs + morphological features gives the highest accuracy of 99.71% and an AUC of 1.00.
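    The evaluation metrics listed in the abstract all derive from the four confusion-matrix counts. A small reference implementation (the counts below are made-up illustration values, not the paper's results):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard detection metrics from confusion-matrix counts:
    tp/fp/tn/fn = true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "PPV": tp / (tp + fp),                  # positive predictive value
        "NPV": tn / (tn + fn),                  # negative predictive value
        "FPR": fp / (fp + tn),                  # false positive rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

m = diagnostic_metrics(tp=95, fp=2, tn=98, fn=5)
print(m["sensitivity"], m["specificity"])  # 0.95 0.98
```

    Note that FPR is simply 1 - specificity, which is why ROC curves plot sensitivity against FPR.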

  11. Knee Joint Vibration Signal Analysis with Matching Pursuit Decomposition and Dynamic Weighted Classifier Fusion

    PubMed Central

    Cai, Suxian; Yang, Shanshan; Zheng, Fang; Lu, Meng; Wu, Yunfeng; Krishnan, Sridhar

    2013-01-01

    Analysis of knee joint vibration (VAG) signals can provide quantitative indices for detection of knee joint pathology at an early stage. In addition to the statistical features developed in the related previous studies, we extracted two separable features, that is, the number of atoms derived from the wavelet matching pursuit decomposition and the number of significant signal turns detected with the fixed threshold in the time domain. To perform a better classification over the data set of 89 VAG signals, we applied a novel classifier fusion system based on the dynamic weighted fusion (DWF) method to ameliorate the classification performance. For comparison, a single least-squares support vector machine (LS-SVM) and the Bagging ensemble were used for the classification task as well. The results in terms of overall accuracy in percentage and area under the receiver operating characteristic curve obtained with the DWF-based classifier fusion method reached 88.76% and 0.9515, respectively, which demonstrated the effectiveness and superiority of the DWF method with two distinct features for the VAG signal analysis. PMID:23573175

  12. View-invariant gait recognition method by three-dimensional convolutional neural network

    NASA Astrophysics Data System (ADS)

    Xing, Weiwei; Li, Ying; Zhang, Shunli

    2018-01-01

    Gait, as an important biometric feature, can identify a human at a long distance. View change is one of the most challenging factors for gait recognition. To address the cross-view issues in gait recognition, we propose a view-invariant gait recognition method based on a three-dimensional (3-D) convolutional neural network. First, the 3-D convolutional neural network (3DCNN) is introduced to learn view-invariant features, capturing spatial and temporal information simultaneously on normalized silhouette sequences. Second, a network training method based on cross-domain transfer learning is proposed to solve the problem of limited gait training samples. We choose C3D as the basic model, which is pretrained on Sports-1M, and then fine-tune the C3D model to adapt it to gait recognition. In the recognition stage, we use the fine-tuned model to extract gait features and use Euclidean distance to measure the similarity of gait sequences. Extensive experiments are carried out on the CASIA-B dataset, and the experimental results demonstrate that our method outperforms many other methods.
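    The recognition stage reduces to nearest-neighbour matching on the extracted feature vectors under Euclidean distance. A minimal sketch with made-up three-dimensional feature vectors (real 3DCNN features would be much higher-dimensional):

```python
import numpy as np

def match_gait(probe, gallery):
    """Return the index of the gallery feature vector closest to the
    probe under Euclidean distance (nearest-neighbour recognition)."""
    dists = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(dists))

gallery = np.array([[0.1, 0.9, 0.2],
                    [0.8, 0.1, 0.7],
                    [0.4, 0.4, 0.5]])    # toy per-subject feature vectors
probe = np.array([0.75, 0.15, 0.65])     # feature from a test sequence
print(match_gait(probe, gallery))  # 1
```

    The identity assigned to the probe sequence is that of the nearest gallery subject.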

  13. Defining the Optimal Region of Interest for Hyperemia Grading in the Bulbar Conjunctiva

    PubMed Central

    Sánchez Brea, María Luisa; Mosquera González, Antonio; Evans, Katharine; Pena-Verdeal, Hugo

    2016-01-01

    Conjunctival hyperemia or conjunctival redness is a symptom that can be associated with a broad group of ocular diseases. Its levels of severity are represented by standard photographic charts that are visually compared with the patient's eye. This way, the hyperemia diagnosis becomes a nonrepeatable task that depends on the experience of the grader. To solve this problem, we have proposed a computer-aided methodology that comprises three main stages: the segmentation of the conjunctiva, the extraction of features in this region based on colour and the presence of blood vessels, and, finally, the transformation of these features into grading scale values by means of regression techniques. However, the conjunctival segmentation can be slightly inaccurate mainly due to illumination issues. In this work, we analyse the relevance of different features with respect to their location within the conjunctiva in order to delimit a reliable region of interest for the grading. The results show that the automatic procedure behaves like an expert using only a limited region of interest within the conjunctiva. PMID:28096890

  14. Built-up Areas Extraction in High Resolution SAR Imagery based on the method of Multiple Feature Weighted Fusion

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.

    2015-06-01

    Because of its day-and-night, all-weather operation, synthetic aperture radar (SAR) is used more and more widely in remote sensing, and feature extraction from high-resolution SAR images has become a topic of intense research. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First of all, statistical texture features and structural features are extracted by the classical gray level co-occurrence matrix method and the variogram function method, respectively, with direction information considered in this process. Next, feature weights are calculated according to the Bhattacharyya distance. Then all features are fused using these weights. At last, the fused image is classified with the K-means classification method and the built-up areas are extracted after a post-classification process. The proposed method has been tested on domestic airborne P-band polarimetric SAR images; at the same time, two groups of experiments based on the statistical texture method and the structural texture method alone were carried out respectively. On the basis of qualitative analysis, quantitative analysis based on manually selected built-up areas shows that in the relatively simple experimental area the detection rate is more than 90%, and in the relatively complex experimental area the detection rate is also higher than that of the other two methods. The results show that this method can effectively and accurately extract built-up areas in high-resolution airborne SAR imagery.
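    The Bhattacharyya-distance weighting step can be sketched for the common Gaussian case: each feature's class-separation distance is computed from per-class means and variances, then normalized into fusion weights. The class statistics below are invented illustration values, not measurements from the paper:

```python
import math

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Bhattacharyya distance between two 1-D Gaussians
    N(m1, v1) and N(m2, v2)."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))

# toy per-feature class statistics (mean, variance) for built-up
# vs. non-built-up samples
d1 = bhattacharyya_gauss(0.0, 1.0, 2.0, 1.0)   # well-separating feature
d2 = bhattacharyya_gauss(0.0, 1.0, 0.5, 1.0)   # weakly separating feature
w1, w2 = d1 / (d1 + d2), d2 / (d1 + d2)        # normalized fusion weights
print(w1 > w2)  # True
```

    Features that separate the two classes better receive proportionally larger weights in the fused image, which is the intended effect of the weighting.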

  15. Biomimetic postcapillary expansions for enhancing rare blood cell separation on a microfluidic chip†

    PubMed Central

    Jain, Abhishek

    2013-01-01

    Blood cells naturally auto-segregate in postcapillary venules, with the erythrocytes (red blood cells, RBCs) aggregating near the axis of flow and the nucleated cells (NCs)—which include leukocytes, progenitor cells and, in cancer patients, circulating tumor cells—marginating toward the vessel wall. We have used this principle to design a microfluidic device that extracts nucleated cells (NCs) from whole blood. Fabricated using polydimethylsiloxane (PDMS) soft lithography, the biomimetic cell extraction device consists of rectangular microchannels that are 20–400 μm wide, 11 μm deep and up to 2 cm long. The key design feature is the use of repeated expansions/contractions of triangular geometry mimicking postcapillary venules, which enhance margination and optimize the extraction. The device operates on unprocessed whole blood and is able to extract 94 ± 4.5% of NCs with 45.75 ± 2.5-fold enrichment in concentration at a rate of 5 nl s−1. The device eliminates the need to preprocess blood via centrifugation or RBC lysis, and is ready to be implemented as the initial stage of lab-on-a-chip devices that require enriched nucleated cells. The potential downstream applications are numerous, encompassing all preclinical and clinical assays that operate on enriched NC populations and include on-chip flow cytometry. PMID:21773633

  16. A method for automatic feature points extraction of human vertebrae three-dimensional model

    NASA Astrophysics Data System (ADS)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  17. Extraction of linear features on SAR imagery

    NASA Astrophysics Data System (ADS)

    Liu, Junyi; Li, Deren; Mei, Xin

    2006-10-01

    Linear features are usually extracted from SAR imagery by edge detectors derived from the contrast-ratio edge detector with a constant false alarm probability. On the other hand, the Hough Transform (HT) is an elegant way of extracting global features such as curve segments from binary edge images. The Randomized Hough Transform can reduce the computation time and memory usage of the HT drastically, but it invalidates a great number of accumulator cells during random sampling. In this paper, we propose a new approach to extracting linear features from SAR imagery: an almost automatic algorithm based on edge detection and the Randomized Hough Transform. The improved method makes full use of the directional information of each candidate edge point to solve the invalid accumulation problem. The applied results are in good agreement with the theoretical study, and the main linear features in the SAR imagery have been extracted automatically. The method saves storage space and computational time, which shows its effectiveness and applicability.

  18. Feature extraction based on semi-supervised kernel Marginal Fisher analysis and its application in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Xuan, Jianping; Shi, Tielin

    2013-12-01

    Generally, the vibration signals of faulty machinery are non-stationary and nonlinear under complicated operating conditions. Therefore, it is a big challenge for machinery fault diagnosis to extract optimal features for improving classification accuracy. This paper proposes semi-supervised kernel Marginal Fisher analysis (SSKMFA) for feature extraction, which can discover the intrinsic manifold structure of dataset, and simultaneously consider the intra-class compactness and the inter-class separability. Based on SSKMFA, a novel approach to fault diagnosis is put forward and applied to fault recognition of rolling bearings. SSKMFA directly extracts the low-dimensional characteristics from the raw high-dimensional vibration signals, by exploiting the inherent manifold structure of both labeled and unlabeled samples. Subsequently, the optimal low-dimensional features are fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories and severities of bearings. The experimental results demonstrate that the proposed approach improves the fault recognition performance and outperforms the other four feature extraction methods.

  19. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram

    PubMed Central

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-01-01

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient for time-frequency analysis, and kurtosis is deficient in detecting cyclic transients; these factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is a more effective measure for detecting cyclic transients, while the Redundant Second Generation Wavelet Packet Transform (RSGWPT) captures a more detailed local time-frequency description of the signal and restricts frequency-aliasing components in the analysis results. Combining CK with the RSGWPT, this manuscript proposes an improved kurtogram to extract weak fault features from bearing vibration signals. Analysis of simulated signals and real application cases demonstrates that the proposed method is more accurate and effective in extracting weak fault features. PMID:27649171
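    Correlated Kurtosis quantifies how strongly transients repeat with a given period T, which is why it suits cyclic bearing faults better than ordinary kurtosis. The sketch below follows one common published definition, CK_M(T) = Σ_n (Π_{m=0..M} y[n − mT])² / (Σ_n y[n]²)^(M+1); this record does not spell out its exact formula, so treat the normalization and edge handling here as assumptions.

    ```python
    def correlated_kurtosis(y, T, M=1):
        """Correlated Kurtosis of order M for period T (in samples).
        A product of M+1 samples spaced T apart is large only when
        impulses recur every T samples, so CK peaks at the true period."""
        num = 0.0
        for i in range(M * T, len(y)):
            p = 1.0
            for m in range(M + 1):
                p *= y[i - m * T]
            num += p * p
        den = sum(v * v for v in y) ** (M + 1)
        return num / den
    ```

    Scanning T over candidate fault periods and picking the maximizer is the usual way CK replaces kurtosis inside a kurtogram-style band search.
    
    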

  20. [A novel method of multi-channel feature extraction combining multivariate autoregression and multiple-linear principal component analysis].

    PubMed

    Wang, Jinjia; Zhang, Yanna

    2015-02-01

    Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of autoregressive-model feature extraction and of traditional principal component analysis in handling multichannel signals, this paper presents a multichannel feature extraction method that combines a multivariate autoregressive (MVAR) model with multiple-linear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. First, the MVAR model coefficient matrix of the MEG/EEG signals is calculated; then MPCA reduces it to a lower dimension; finally, a Bayes classifier recognizes the brain signals. The key innovation of this investigation is extending the traditional single-channel feature extraction method to the multichannel case. Experiments on the data sets IV-III and IV-I showed that the proposed method is feasible.
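    The first step of this record's pipeline — estimating MVAR coefficient matrices from multichannel recordings — can be sketched with ordinary least squares. The function name, model order, and least-squares fitting route are illustrative assumptions; the paper does not state its estimator, and the MPCA reduction and Bayes classification stages are not shown.

    ```python
    import numpy as np

    def fit_mvar(X, p=2):
        """Least-squares fit of a p-th order multivariate AR model
        x_t = A_1 x_{t-1} + ... + A_p x_{t-p} + e_t.
        X has shape (T, C): T time samples of C channels.  Returns the
        coefficient matrices stacked as an array of shape (p, C, C); in the
        paper's pipeline this tensor would then be reduced by MPCA."""
        T, C = X.shape
        # regressor matrix: each row concatenates the p preceding samples
        Z = np.hstack([X[p - k - 1:T - k - 1] for k in range(p)])  # (T-p, p*C)
        Y = X[p:]                                                  # (T-p, C)
        B, *_ = np.linalg.lstsq(Z, Y, rcond=None)                  # (p*C, C)
        # reorder to [lag, output channel, input channel]
        return B.T.reshape(C, p, C).transpose(1, 0, 2)
    ```

    Keeping the coefficients as a (lags x channels x channels) tensor rather than a flat vector is exactly what motivates a multilinear reducer such as MPCA over ordinary PCA.
    
    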
