Soltaninejad, Mohammadreza; Yang, Guang; Lambrou, Tryphon; Allinson, Nigel; Jones, Timothy L; Barrick, Thomas R; Howe, Franklyn A; Ye, Xujiong
2018-04-01
Accurate segmentation of brain tumours in magnetic resonance images (MRI) is a difficult task due to the variety of tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may result in a more accurate analysis of brain images. We propose a novel 3D supervoxel-based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features are extracted, including histograms of a texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first-order intensity statistical features. These features are fed into a random forest (RF) classifier to classify each supervoxel as tumour core, oedema or healthy brain tissue. The method is evaluated on two datasets: 1) our clinical dataset of 11 multimodal images of patients and 2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity of tumour (including tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, while the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results for the BRATS 2013 dataset are 96%, 2% and 0.89, respectively. The method demonstrates promising results in the segmentation of brain tumour. Adding features from multimodal MRI images can substantially increase the segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. Copyright © 2018 Elsevier B.V. All rights reserved.
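A minimal sketch of the classification stage described above: per-supervoxel feature vectors passed to a random forest. The feature values here are synthetic stand-ins (the paper uses texton histograms from a Gabor filter bank plus first-order intensity statistics), and the feature count and class layout are assumptions for illustration.

```python
# Hedged sketch: classifying per-supervoxel feature vectors with a random
# forest, in the spirit of the pipeline above. Features are synthetic
# stand-ins for texton histograms and first-order intensity statistics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_supervoxels, n_features = 2000, 48          # assumed feature-vector length
X = rng.normal(size=(n_supervoxels, n_features))
y = rng.integers(0, 3, size=n_supervoxels)    # 0: healthy, 1: oedema, 2: core

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("per-supervoxel accuracy:", rf.score(X_te, y_te))
```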
Discriminative segmentation of microscopic cellular images.
Cheng, Li; Ye, Ning; Yu, Weimiao; Cheah, Andre
2011-01-01
Segmentation of microscopic cellular images has become an important routine procedure in modern biological research, owing to the rapid advancement of fluorescence probes and robotic microscopes in recent years. In this paper we advocate a discriminative learning approach to cellular image segmentation. In particular, three new features are proposed to capture appearance, shape and context information, respectively. Experiments are conducted on three different cellular image datasets. Despite the significant disparity among these datasets, the proposed approach is demonstrated to perform reasonably well. As expected, for a particular dataset, some features turn out to be more suitable than others. Interestingly, we observe that a further gain can often be obtained on top of using the "good" features by also retaining those features that perform poorly. This might be due to the complementary nature of these features, as well as the capacity of our approach to better integrate and exploit different sources of information.
Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai
2015-06-30
Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. Automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and, finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, consisted of normal peripheral blood slides under a light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format at a size of 960 × 1,280 pixels and a resolution of 15 pixels per μm. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com; they are in JPG format at a size of 360 × 363 pixels, with an estimated resolution of 10 pixels per μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The segmentation algorithm exploits the white blood cells' morphological properties and the calibrated size of a real cell relative to the image resolution, combining thresholding, morphological operations and ellipse curve fitting. Several features were then extracted from the segmented nucleus and cytoplasm regions, and prominent features were chosen by a greedy search algorithm called sequential forward selection. Finally, with the set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. The system was tested on normal peripheral blood smear slide images from the two datasets, with two sets of comparisons: segmentation and classification. The automatically segmented results were compared to those obtained manually by a haematologist. The proposed method was found to be consistent and coherent across both datasets, with Dice similarities of 98.9% and 91.6% for the average segmented nucleus and cell regions, respectively. Furthermore, the overall correct classification rates are about 98% and 94% for the linear and naïve Bayes models, respectively. The proposed system, based on normal white blood cell morphology and its characteristics, was applied to two different datasets. The results of the calibrated segmentation process on both datasets are fast, robust, efficient and coherent, while the classification of normal white blood cells into five types shows high sensitivity in both the linear and naïve Bayes models, with slightly better results for the linear classifier.
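The greedy sequential forward selection step followed by the linear versus naïve Bayes comparison can be sketched with scikit-learn. This uses a synthetic five-class dataset rather than blood cell features, and SequentialFeatureSelector (scikit-learn >= 0.24) stands in for the paper's selection routine.

```python
# Hedged sketch: sequential forward selection, then a linear vs. naive
# Bayes comparison on the selected features. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
sfs = SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                                n_features_to_select=8, direction="forward")
X_sel = sfs.fit_transform(X, y)
for clf in (LinearDiscriminantAnalysis(), GaussianNB()):
    print(type(clf).__name__, cross_val_score(clf, X_sel, y).mean())
```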
Representation learning: a unified deep learning framework for automatic prostate MR segmentation.
Liao, Shu; Gao, Yaozong; Oto, Aytekin; Shen, Dinggang
2013-01-01
Image representation plays an important role in medical image analysis. The success of different medical image analysis algorithms depends heavily on how we represent the input data, namely the features used to characterize the input image. In the literature, feature engineering remains an active research topic, and many novel hand-crafted features have been designed, such as Haar wavelets, histograms of oriented gradients, and local binary patterns. However, such features are not designed with the guidance of the underlying dataset at hand. To this end, we argue that the most effective features should be designed in a learning-based manner, namely representation learning, which can be adapted to different patient datasets at hand. In this paper, we introduce a deep learning framework to achieve this goal. Specifically, a stacked independent subspace analysis (ISA) network is adopted to learn the most effective features in a hierarchical and unsupervised manner. The learnt features are adapted to the dataset at hand and encode high-level semantic anatomical information. The proposed method is evaluated on the application of automatic prostate MR segmentation. Experimental results show that significant segmentation accuracy improvement can be achieved by the proposed deep learning method compared to other state-of-the-art segmentation approaches.
Automated pixel-wise brain tissue segmentation of diffusion-weighted images via machine learning.
Ciritsis, Alexander; Boss, Andreas; Rossi, Cristina
2018-04-26
The diffusion-weighted (DW) MR signal sampled over a wide range of b-values potentially allows for tissue differentiation in terms of cellularity, microstructure, perfusion, and T2 relaxivity. This study aimed to implement a machine learning algorithm for automatic brain tissue segmentation from DW-MRI datasets, and to determine the optimal subset of features for accurate segmentation. DWI was performed at 3 T in eight healthy volunteers using 15 b-values and 20 diffusion-encoding directions. The pixel-wise signal attenuation, as well as the trace and fractional anisotropy (FA) of the diffusion tensor, were used as features to train a support vector machine classifier for the gray matter, white matter, and cerebrospinal fluid classes. The datasets of two volunteers were used for validation. For each subject, tissue classification was also performed on 3D T1-weighted datasets with a probabilistic framework. Confusion matrices were generated for quantitative assessment of image classification accuracy in comparison with the reference method. DWI-based tissue segmentation resulted in an accuracy of 82.1% on the validation dataset and of 82.2% on the training dataset, ruling out relevant model over-fitting. A mean Dice coefficient (DSC) of 0.79 ± 0.08 was found. About 50% of the classification performance was attributable to five features (i.e. the signal measured at b-values of 5/10/500/1200 s/mm² and the FA). This reduced set of features led to almost identical performances for the validation (82.2%) and training (81.4%) datasets (DSC = 0.79 ± 0.08). Machine learning techniques applied to DWI data allow for accurate brain tissue segmentation based on both morphological and functional information. Copyright © 2018 John Wiley & Sons, Ltd.
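A hedged sketch of the classification step: an RBF support vector machine over per-pixel feature vectors for the three tissue classes, assessed with a confusion matrix. The 17-column feature layout (15 b-value attenuations plus trace and FA) is an assumption, and the data are synthetic.

```python
# Hedged sketch: SVM classification of per-pixel DWI feature vectors into
# GM/WM/CSF, with a confusion matrix for accuracy assessment.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 17))       # assumed: 15 b-values + trace + FA
y = rng.integers(0, 3, size=3000)     # 0: GM, 1: WM, 2: CSF
split = 2400
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X[:split], y[:split])
pred = clf.predict(X[split:])
print(confusion_matrix(y[split:], pred))
print("accuracy:", accuracy_score(y[split:], pred))
```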
Somasundaram, Elanchezhian; Kaufman, Robert; Brady, Samuel
2017-03-01
The development of a random forests machine learning technique is presented for fully-automated neck, chest, abdomen, and pelvis tissue segmentation of CT images using the Trainable WEKA (Waikato Environment for Knowledge Analysis) Segmentation (TWS) plugin of FIJI (ImageJ, NIH). The use of a single classifier model to segment six tissue classes (lung, fat, muscle, solid organ, blood/contrast agent, bone) in the CT images is studied. An automated, unbiased scheme to sample pixels from the training images and generate a balanced training dataset over the classes is also developed. Two independent training datasets are generated from a pool of 4 adult (>55 kg) and 3 pediatric patients (<=55 kg), with 7 manually contoured slices for each patient. Classifier training investigated 28 image filters comprising a total of 272 features. Highly correlated and insignificant features are eliminated using Correlated Feature Subset (CFS) selection with the Best First Search (BFS) algorithm in WEKA. The 2 training models (from the 2 training datasets) had 74 and 71 input training features, respectively. The study also investigated the effect of varying the number of trees (25, 50, 100, and 200) in the random forest algorithm. The performance of the 2 classifier models is evaluated on inter-patient intra-slice, intra-patient inter-slice and inter-patient inter-slice test datasets. Dice similarity coefficients (DSC) and confusion matrices are used to understand the performance of the classifiers across the tissue segments. The effect of the number of input training features on the performance of the classifiers for tissue classes with less than optimal DSC values is also studied. The average DSC values for the two training models on the inter-patient intra-slice test data are 0.98, 0.89, 0.87, 0.79, 0.68, and 0.84 for lung, fat, muscle, solid organ, blood/contrast agent, and bone, respectively. The study demonstrated robust segmentation accuracy for the lung, muscle and fat tissue classes. For solid organ, blood/contrast agent and bone, the performance of the segmentation pipeline improved significantly by using the advanced capabilities of WEKA. However, further improvements are needed to reduce the noise in the segmentation.
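The per-class Dice similarity coefficient used for evaluation above is simple to compute; a small sketch on synthetic label images:

```python
# Hedged sketch of the per-class Dice similarity coefficient, computed
# between a predicted and a reference label image (synthetic here).
import numpy as np

def dice_per_class(pred, ref, labels):
    scores = {}
    for c in labels:
        p, r = (pred == c), (ref == c)
        denom = p.sum() + r.sum()
        scores[c] = 2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0
    return scores

rng = np.random.default_rng(2)
pred = rng.integers(0, 6, size=(64, 64))                 # 6 tissue labels
ref = pred.copy()
ref[rng.random(ref.shape) < 0.1] = rng.integers(0, 6)    # perturb 10% of pixels
print(dice_per_class(pred, ref, labels=range(6)))
```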
Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.
2018-05-01
In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data from both modalities leads to an improvement of the classification results. However, the derived results also indicate that using all defined features is less favorable than using selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.
Paul, Subir; Nagesh Kumar, D.
2018-04-01
Hyperspectral (HS) data comprise continuous spectral responses of hundreds of narrow spectral bands with very fine spectral resolution or bandwidth, which offer feature identification and classification with high accuracy. In the present study, a Mutual Information (MI) based Segmented Stacked Autoencoder (S-SAE) approach for spectral-spatial classification of HS data is proposed to reduce the complexity and computational time compared to Stacked Autoencoder (SAE) based feature extraction. A non-parametric dependency measure (MI) is proposed for spectral segmentation of the HS bands, instead of a linear and parametric dependency measure, to take care of both linear and nonlinear inter-band dependency. Morphological profiles are then created from the segmented spectral features to assimilate spatial information into the spectral-spatial classification approach. Two non-parametric classifiers, Support Vector Machine (SVM) with Gaussian kernel and Random Forest (RF), are used for classification of the three most popularly used HS datasets. Results of the numerical experiments carried out in this study show that SVM with a Gaussian kernel provides better results for the Pavia University and Botswana datasets, whereas RF performs better for the Indian Pines dataset. The experiments performed with the proposed methodology provide encouraging results compared to numerous existing approaches.
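The mutual information measure used for spectral segmentation can be estimated between pairs of discretized bands; a sketch with synthetic band vectors, where the histogram-based discretization is an implementation choice rather than the paper's exact estimator:

```python
# Hedged sketch: mutual information between pairs of (discretized) spectral
# bands, the non-parametric dependency measure used to group HS bands.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(3)
base = rng.normal(size=10000)
bands = [base + rng.normal(scale=s, size=10000) for s in (0.1, 0.2, 2.0)]

def band_mi(a, b, bins=32):
    # discretize continuous reflectances, then estimate MI from the histogram
    da = np.digitize(a, np.histogram_bin_edges(a, bins))
    db = np.digitize(b, np.histogram_bin_edges(b, bins))
    return mutual_info_score(da, db)

print("similar bands:   ", band_mi(bands[0], bands[1]))
print("dissimilar bands:", band_mi(bands[0], bands[2]))
```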
Multi-Sectional Views Textural Based SVM for MS Lesion Segmentation in Multi-Channels MRIs
Abdullah, Bassem A; Younis, Akmal A; John, Nigel M
2012-01-01
In this paper, a new technique is proposed for automatic segmentation of multiple sclerosis (MS) lesions from brain magnetic resonance imaging (MRI) data. The technique uses a trained support vector machine (SVM) to discriminate between blocks in regions of MS lesions and blocks in non-MS lesion regions, mainly based on textural features with the aid of other features. The classification is done on each of the axial, sagittal and coronal sectional brain views independently, and the resultant segmentations are aggregated to provide a more accurate output segmentation. The main contribution of the proposed technique is the use of textural features to detect MS lesions in a fully automated approach that does not rely on manually delineating the MS lesions. In addition, the technique introduces the concept of multi-sectional view segmentation to produce verified segmentation. The proposed textural-based SVM technique was evaluated using three simulated datasets and more than fifty real MRI datasets, and the results were compared with state-of-the-art methods. The obtained results indicate that the proposed method would be viable for use in clinical practice for the detection of MS lesions in MRI.
Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E.
2016-01-01
Over the past two decades a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings. In addition, there does not exist an appropriate OCT dataset with ground truth that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, the validation of segmentation algorithms has usually been performed by comparison with manual labelings from each study, and there has been a lack of common ground truth. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of the retinal tissue on OCT images. It also evaluates and compares the performance of these software tools with a common ground truth.
Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y
2018-03-08
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings, independent of the aforementioned stages, to improve computational efficiency. The applicability of the proposed method is evaluated on eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
Zhao, Guangjun; Wang, Xuchu; Niu, Yanmin; Tan, Liwen; Zhang, Shao-Xiang
2016-01-01
Cryosection brain images in the Chinese Visible Human (CVH) dataset contain rich anatomical structure information because of their high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of the human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable to cryosection images due to the imaging differences. In this paper, we propose a supervised learning-based CVH brain tissue segmentation method that uses a stacked autoencoder (SAE) to automatically learn deep feature representations. Specifically, our model includes two successive parts, where two three-layer SAEs take image patches as input to learn the complex anatomical feature representation, and these features are then sent to a Softmax classifier for inferring the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four other classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, which shows their potential for exploring the high-resolution anatomical structures of the human brain.
Recognition of skin melanoma through dermoscopic image analysis
Gómez, Catalina; Herrera, Diana Sofia
2017-11-01
Melanoma skin cancer diagnosis can be challenging due to the similarity of early-stage symptoms to those of regular moles. Standardized visual parameters can be determined and characterized to suspect a melanoma cancer type. Automating this diagnosis could have an impact on the medical field by providing a high-accuracy tool to support specialists. The objective of this study is to develop an algorithm trained to distinguish a highly probable melanoma from a non-dangerous mole through the segmentation and classification of dermoscopic mole images. We evaluate our approach on the dataset provided by the International Skin Imaging Collaboration, used in the International Challenge on Skin Lesion Analysis Towards Melanoma Detection. For the segmentation task, we apply a preprocessing algorithm and use Otsu's thresholding in the best-performing color space; the average Jaccard index on the test dataset is 70.05%. For the subsequent classification stage, we use joint histograms in the YCbCr color space, an SVM with a Gaussian RBF kernel trained on five features describing the circularity and irregularity of the segmented lesion, and gray-level co-occurrence matrix features for texture analysis. These features are combined to obtain an average classification accuracy of 63.3% on the test dataset.
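A hedged sketch of the segmentation stage: Otsu's threshold on a single channel plus the Jaccard index against a reference mask, with a synthetic blob standing in for a dermoscopic lesion (the paper additionally selects the best-performing color space and applies preprocessing):

```python
# Hedged sketch: Otsu segmentation of a synthetic lesion and its Jaccard
# index against the reference mask.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(4)
img = rng.normal(loc=0.3, scale=0.05, size=(128, 128))
rr, cc = np.ogrid[:128, :128]
lesion = (rr - 64) ** 2 + (cc - 64) ** 2 < 30 ** 2
img[lesion] += 0.4                        # lesion region differs in intensity

mask = img > threshold_otsu(img)
jaccard = np.logical_and(mask, lesion).sum() / np.logical_or(mask, lesion).sum()
print("Jaccard index:", jaccard)
```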
Multi-fractal texture features for brain tumor and edema segmentation
Reza, S.; Iftekharuddin, K. M.
2014-03-01
In this work, we propose a fully automatic brain tumor and edema segmentation technique for brain magnetic resonance (MR) images. Different brain tissues are characterized using novel texture features such as piece-wise triangular prism surface area (PTPSA), multi-fractional Brownian motion (mBm) and Gabor-like textons, along with regular intensity and intensity difference features. A classical random forest (RF) classifier is used to formulate the segmentation task as classification of these features in multi-modal MRIs. The segmentation performance is compared with other state-of-the-art works using the publicly available Brain Tumor Segmentation (BRATS) 2012 dataset [1]. Quantitative evaluation is done using the online evaluation tool from the Kitware/MIDAS website [2]. The results show that our segmentation performance is more consistent and, on average, outperforms other state-of-the-art works in both the training and challenge cases of the BRATS competition.
A robust dataset-agnostic heart disease classifier from Phonocardiogram.
Banerjee, Rohan; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan; Mandana, K M
2017-07-01
Automatic classification of normal and abnormal heart sounds is a popular area of research. However, building a robust algorithm unaffected by signal quality and patient demography is a challenge. In this paper we have analysed a wide range of phonocardiogram (PCG) features in the time and frequency domains, along with morphological and statistical features, to construct a robust and discriminative feature set for dataset-agnostic classification of normal and cardiac patients. The large, open-access database made available in the PhysioNet 2016 challenge was used for feature selection, internal validation and creation of training models. A second dataset of 41 PCG segments, collected using our in-house smartphone-based digital stethoscope at an Indian hospital, was used for performance evaluation. Our proposed methodology yielded sensitivity and specificity scores of 0.76 and 0.75, respectively, on the test dataset in classifying cardiovascular diseases. The methodology also outperformed three popular prior-art approaches when applied to the same dataset.
Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets
Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul
2011-01-01
Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment, DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR dataset (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at the lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.
A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation
Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava
2015-12-01
In this paper, we address the issue of the over-segmented regions produced by the watershed transform by merging regions using global features. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further to this, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion for merging the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a simulated digital brain phantom dataset to segment the white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and yields an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.
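The merging idea can be sketched as follows: watershed over the image gradient yields over-segmented regions whose mean intensities are clustered globally, and regions sharing a cluster are merged. Here k-means stands in for the paper's FCM clustering and simulated annealing step, and the image is synthetic:

```python
# Hedged sketch: watershed over-segmentation followed by region merging via
# global clustering of region mean intensities (k-means as a stand-in for
# the paper's FCM + simulated annealing).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
img = rng.normal(scale=0.05, size=(128, 128))
img[32:96, 32:96] += 1.0                 # bright square on a dark background

regions = watershed(sobel(img))          # typically heavily over-segmented
means = ndi.mean(img, labels=regions, index=np.arange(1, regions.max() + 1))
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    means.reshape(-1, 1))
merged = clusters[regions - 1]           # map each region to its cluster
print("regions before/after merging:", regions.max(), np.unique(merged).size)
```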
Weakly supervised image semantic segmentation based on clustering superpixels
Yan, Xiong; Liu, Xiaohua
2018-04-01
In this paper, we propose an image semantic segmentation model that is trained from image-level labeled images. The proposed model starts with superpixel segmentation, and features of the superpixels are extracted by a trained CNN. We introduce a superpixel-based graph and apply a graph partition method to group correlated superpixels into clusters. To acquire the inter-label correlations between the image-level labels in the dataset, we utilize not only label co-occurrence statistics but also visual contextual cues. Finally, we formulate the task of mapping appropriate image-level labels to the detected clusters as a convex minimization problem. Experimental results on the MSRC-21 and LabelMe datasets show that the proposed method performs better than most weakly supervised methods and is even comparable to fully supervised methods.
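A hedged sketch of the superpixel grouping stage: SLIC superpixels whose mean-color features are grouped by agglomerative clustering, a simplified stand-in for the paper's CNN features and graph partitioning; the bundled astronaut image and the cluster count are illustrative choices:

```python
# Hedged sketch: SLIC superpixels grouped by clustering their mean colours.
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic
from sklearn.cluster import AgglomerativeClustering

img = astronaut() / 255.0
sp = slic(img, n_segments=200, compactness=10, start_label=0)
ids = np.unique(sp)
feats = np.stack([img[sp == i].mean(axis=0) for i in ids])  # mean RGB per superpixel
labels = AgglomerativeClustering(n_clusters=8).fit_predict(feats)

lut = np.zeros(sp.max() + 1, dtype=int)   # superpixel label -> cluster label
lut[ids] = labels
clustered = lut[sp]                        # per-pixel cluster map
print("superpixels:", ids.size, "-> clusters:", np.unique(clustered).size)
```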
Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin
2018-02-01
The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation, combined with an image feature extraction framework, to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high- and low-risk groups for having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove useless areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to the frequency characteristics of the ROIs was computed from the discrete cosine transform and the spatial domain of the images. Third, a support vector machine (SVM) based machine learning classifier was used to optimally classify the selected optimal image features to build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70 ± 0.04, which was significantly higher than extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63 ± 0.04). This study demonstrated that the proposed approach can improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer screening paradigm.
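A hedged sketch of the feature and classification steps: low-frequency 2-D DCT coefficients per ROI and an RBF SVM scored with leave-one-case-out cross-validation. The 8x8 coefficient block and the synthetic ROIs are assumptions, not the paper's exact 43-feature set:

```python
# Hedged sketch: 2-D DCT coefficients as ROI features, SVM scored by
# leave-one-case-out cross-validation. ROIs are synthetic patches.
import numpy as np
from scipy.fft import dctn
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_cases = 60
rois = rng.normal(size=(n_cases, 32, 32))
y = rng.integers(0, 2, size=n_cases)         # high- vs. low-risk label

# keep an 8x8 block of low-frequency DCT coefficients per ROI
X = np.stack([dctn(r, norm="ortho")[:8, :8].ravel() for r in rois])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("LOO accuracy:", cross_val_score(clf, X, y, cv=LeaveOneOut()).mean())
```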
Hayat, Maqsood; Tahir, Muhammad
2015-08-01
Membrane proteins are central components of the cell that manage intra- and extracellular processes, and they execute a diversity of functions that are vital for the survival of organisms. The topology of a transmembrane protein describes the number of transmembrane (TM) helix segments and their orientation. However, owing to the lack of recognized structures, identifying TM helices and their topology through experimental methods is laborious, with low throughput. In order to identify TM helix segments reliably, accurately, and effectively from topogenic sequences, we propose the PSOFuzzySVM-TMH model. In this model, evolutionary information (the position-specific scoring matrix) and discrete information (6-letter exchange groups) are used to formulate transmembrane protein sequences. Noisy and extraneous attributes are eradicated from both feature spaces using an optimization selection technique, particle swarm optimization. Finally, the selected feature spaces are combined to form an ensemble feature space, and a fuzzy support vector machine is utilized as the classification algorithm. Two benchmark datasets, including low- and high-resolution datasets, are used, and the performance of the PSOFuzzySVM-TMH model is assessed at various levels through a 10-fold cross-validation test. The empirical results reveal that the proposed PSOFuzzySVM-TMH framework outperforms in terms of classification performance on the examined datasets. The proposed model might thus be a useful, high-throughput tool for the academic and research community for further structural and functional studies on transmembrane proteins.
Maier, Oskar; Wilms, Matthias; von der Gablentz, Janina; Krämer, Ulrike; Handels, Heinz
2014-03-01
Automatic segmentation of ischemic stroke lesions in magnetic resonance (MR) images is important in clinical practice and for neuroscientific trials. The key problem is to detect largely inhomogeneous regions of varying sizes, shapes and locations. We present a stroke lesion segmentation method based on local features extracted from multi-spectral MR data that are selected to model a human observer's discrimination criteria. A support vector machine classifier is trained on expert-segmented examples and then used to classify formerly unseen images. Leave-one-out cross validation on eight datasets with lesions of varying appearances is performed, showing our method to compare favourably with other published approaches in terms of accuracy and robustness. Furthermore, we compare a number of feature selectors and closely examine each feature's and MR sequence's contribution.
Zheng, Yalin; Kwong, Man Ting; MacCormick, Ian J. C.; Beare, Nicholas A. V.; Harding, Simon P.
2014-01-01
Capillary non-perfusion (CNP) in the retina is a characteristic feature used in the management of a wide range of retinal diseases, yet there is no well-established computational tool for assessing its extent. We propose a novel texture segmentation framework to address this problem. The framework comprises three major steps: pre-processing, unsupervised total variation texture segmentation, and supervised segmentation. It employs a state-of-the-art multiphase total variation texture segmentation model, which is enhanced by new kernel-based region terms and can be applied to texture- and intensity-based multiphase problems. A supervised segmentation step allows the framework to take expert knowledge into account; an AdaBoost classifier with weighted cost coefficients is chosen to tackle the imbalanced-data classification problem. To demonstrate its effectiveness, we applied this framework to 48 images from malarial retinopathy and 10 images from ischemic diabetic maculopathy. The performance of segmentation is satisfactory when compared to a reference standard of manual delineations: accuracy, sensitivity and specificity are 89.0%, 73.0%, and 90.8%, respectively, for the malarial retinopathy dataset and 80.8%, 70.6%, and 82.1%, respectively, for the diabetic maculopathy dataset. In terms of region-wise analysis, the method achieved an accuracy of 76.3% (45 out of 59 regions) for the malarial retinopathy dataset and 73.9% (17 out of 26 regions) for the diabetic maculopathy dataset. This comprehensive segmentation framework can quantify capillary non-perfusion in retinopathy of two distinct etiologies and has the potential to be adopted for wider applications.
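The weighted-cost boosting idea for imbalanced classes can be sketched by giving minority-class samples proportionally larger fitting weights; this uses synthetic imbalanced data and scikit-learn's AdaBoostClassifier rather than the authors' implementation:

```python
# Hedged sketch: cost-weighted AdaBoost on imbalanced data, with minority-
# class samples up-weighted by the class ratio.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

w = np.where(y_tr == 1, (y_tr == 0).sum() / (y_tr == 1).sum(), 1.0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr, sample_weight=w)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```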
DeepPap: Deep Convolutional Networks for Cervical Cell Classification.
Zhang, Ling; Le Lu; Nogues, Isabella; Summers, Ronald M; Liu, Shaoxiong; Yao, Jianhua
2017-11-01
Automation-assisted cervical screening via Pap smear or liquid-based cytology (LBC) is a highly effective cell-imaging-based cancer detection tool, in which cells are partitioned into "abnormal" and "normal" categories. However, the success of most traditional classification methods relies on the presence of accurate cell segmentations. Despite sixty years of research in this field, accurate segmentation remains a challenge in the presence of cell clusters and pathologies. Moreover, previous classification methods are built only upon the extraction of hand-crafted features, such as morphology and texture. This paper addresses these limitations by proposing a method to directly classify cervical cells, without prior segmentation, based on deep features, using convolutional neural networks (ConvNets). First, the ConvNet is pretrained on a natural image dataset. It is subsequently fine-tuned on a cervical cell dataset consisting of adaptively resampled image patches coarsely centered on the nuclei. In the testing phase, aggregation is used to average the prediction scores of a similar set of image patches. The proposed method is evaluated on both Pap smear and LBC datasets. Results show that our method outperforms previous algorithms in classification accuracy (98.3%), area under the curve (0.99), and especially specificity (98.3%) when applied to the Herlev benchmark Pap smear dataset and evaluated using five-fold cross-validation. Similarly superior performance is also achieved on the HEMLBC (H&E stained manual LBC) dataset. Our method is promising for the development of automation-assisted reading systems in primary cervical screening.
DCS-SVM: a novel semi-automated method for human brain MR image segmentation.
Ahmadvand, Ali; Daliri, Mohammad Reza; Hajiali, Mohammadtaghi
2017-11-27
In this paper, a novel method is proposed that appropriately segments magnetic resonance (MR) brain images into three main tissues. It extends our previous work, in which we suggested a combination of multiple classifiers (CMC) based method named dynamic classifier selection-dynamic local training local Tanimoto index (DCS-DLTLTI) for MR brain image segmentation into the three main cerebral tissues. That idea is used here, and a novel method is developed that tries to use more complex and accurate classifiers, such as the support vector machine (SVM), in the ensemble. This is challenging because CMC-based methods are time consuming, especially on huge datasets such as three-dimensional (3D) brain MR images. Moreover, SVM is a powerful method for modeling datasets with complex feature spaces, but it also has a huge computational cost for big datasets, especially those with strong interclass variability and more than two classes, such as 3D brain images; therefore, SVM cannot be used directly in DCS-DLTLTI. We therefore propose a novel approach named "DCS-SVM" that incorporates SVM into DCS-DLTLTI to improve the accuracy of the segmentation results. The proposed method is applied to the well-known datasets of the Internet Brain Segmentation Repository (IBSR), and promising results are obtained.
Kushibar, Kaisar; Valverde, Sergi; González-Villà, Sandra; Bernal, Jose; Cabezas, Mariano; Oliver, Arnau; Lladó, Xavier
2018-06-15
Sub-cortical brain structure segmentation in Magnetic Resonance Images (MRI) has attracted the interest of the research community for a long time, as morphological changes in these structures are related to different neurodegenerative disorders. However, manual segmentation of these structures can be tedious and prone to variability, highlighting the need for robust automated segmentation methods. In this paper, we present a novel convolutional neural network based approach for accurate segmentation of the sub-cortical brain structures that combines both convolutional and prior spatial features to improve segmentation accuracy. To increase the accuracy of the automated segmentation further, we propose to train the network using a restricted sample selection that forces the network to learn the most difficult parts of the structures. We evaluate the accuracy of the proposed method on the public MICCAI 2012 challenge and IBSR 18 datasets, comparing it with different traditional and deep learning state-of-the-art methods. On the MICCAI 2012 dataset, our method shows excellent performance, comparable to the best participant strategy in the challenge, while performing significantly better than state-of-the-art techniques such as FreeSurfer and FIRST. On the IBSR 18 dataset, our method also exhibits a significant increase in performance with respect to FreeSurfer and FIRST, and achieves comparable or better results than other recent deep learning approaches. Moreover, our experiments show that both the addition of the spatial priors and the restricted sampling strategy have a significant effect on the accuracy of the proposed method. To encourage reproducibility and the use of the proposed method, a public version of our approach is available to download for the neuroimaging community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L
2015-11-18
Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features, and prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (range (r): 0.4 - 0.86). The automatic and manual volumes also showed similar correlations with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to the manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential for high-throughput medical imaging research.
Segmentation of mitochondria in electron microscopy images using algebraic curves.
Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga
2013-01-01
High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
Kashyap, Kanchan L; Bajpai, Manish K; Khanna, Pritee; Giakos, George
2018-01-01
Automatic segmentation of abnormal regions is a crucial task in computer-aided detection systems using mammograms. In this work, an automatic abnormality detection algorithm using mammographic images is proposed. In the preprocessing step, a partial differential equation-based variational level set method is used for breast region extraction, with the evolution of the level set performed by a mesh-free radial basis function (RBF) method; this removes the limitations of the mesh-based approach. For comparison, the evolution of the variational level set function is also performed by the mesh-based finite difference method. Unsharp masking and median filtering are used for mammogram enhancement. Suspicious abnormal regions are segmented by applying fuzzy c-means clustering, and texture features are extracted from the segmented suspicious regions by computing local binary patterns and dominant rotated local binary patterns (DRLBP). Finally, suspicious regions are classified as normal or abnormal by means of a support vector machine with linear, multilayer perceptron, radial basis, and polynomial kernel functions. The algorithm is validated on 322 sample mammograms from the Mammographic Image Analysis Society (MIAS) dataset and 500 mammograms from the Digital Database for Screening Mammography (DDSM). Proficiency is quantified using sensitivity, specificity, and accuracy. The highest sensitivity, specificity, and accuracy of 93.96%, 95.01%, and 94.48%, respectively, are obtained on the MIAS dataset using the DRLBP feature with the RBF kernel function, whereas the highest values of 92.31%, 98.45%, and 96.21%, respectively, are achieved on the DDSM dataset using the same feature and kernel. Copyright © 2017 John Wiley & Sons, Ltd.
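A hedged sketch of the texture stage: local binary pattern histograms per region, classified by SVMs with several of the kernels compared above. Smooth versus rough synthetic patches stand in for mammographic regions, and plain uniform LBP stands in for the DRLBP variant:

```python
# Hedged sketch: uniform LBP histograms per patch, classified by SVMs with
# different kernels. Patches are synthetic textures, not mammographic ROIs.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(7)

def lbp_hist(patch, P=8, R=1.0):
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

smooth = [gaussian_filter(rng.normal(size=(32, 32)), 2.0) for _ in range(50)]
rough = [rng.normal(size=(32, 32)) for _ in range(50)]
X = np.stack([lbp_hist(p) for p in smooth + rough])
y = np.array([0] * 50 + [1] * 50)
for kernel in ("linear", "rbf", "poly"):
    print(kernel, cross_val_score(SVC(kernel=kernel), X, y).mean())
```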
Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.
2016-03-01
Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods for assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorize the automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either successes or failures. We then design three groups of features from the image data of the nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers, namely linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC), were trained using the designed features and evaluated using leave-one-out cross-validation. Results show that LR performs worst among the four classifiers while the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
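The univariate box-whisker rule amounts to flagging feature values outside the 1.5×IQR whiskers; a minimal sketch, with the per-scan feature and the injected failures being synthetic:

```python
# Hedged sketch of the box-whisker rule: flag a scan as a likely
# segmentation failure when its feature value falls outside the whiskers.
import numpy as np

def whisker_outliers(x, k=1.5):
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

rng = np.random.default_rng(8)
feature = rng.normal(loc=0.8, scale=0.05, size=48)   # e.g. a per-scan volume ratio
feature[[3, 17]] = [0.2, 1.6]                        # two injected failures
print("flagged scans:", np.flatnonzero(whisker_outliers(feature)))
```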
Semantic image segmentation with fused CNN features
Geng, Hui-qiang; Zhang, Hua; Xue, Yan-bing; Zhou, Mian; Xu, Guang-ping; Gao, Zan
2017-09-01
Semantic image segmentation is the task of predicting a category label for every image pixel. Its key challenge is to design a strong feature representation. In this paper, we fuse hierarchical convolutional neural network (CNN) features and region-based features as the feature representation: the hierarchical features contain more global information, while the region-based features contain more local information, so their combination significantly enhances the representation. The fused features are then used to train a softmax classifier to produce per-pixel label assignment probabilities, and a fully connected conditional random field (CRF) is used as a post-processing method to improve labeling consistency. We conduct experiments on the SIFT Flow dataset; the pixel accuracy and class accuracy are 84.4% and 34.86%, respectively.
Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching
Guo, Yanrong; Gao, Yaozong
2016-01-01
Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications, such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method that unifies deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images with a stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method has been extensively evaluated on a dataset of 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation, and that our method outperforms other state-of-the-art segmentation methods.
Wang, Shuo; Zhou, Mu; Liu, Zaiyi; Liu, Zhenyu; Gu, Dongsheng; Zang, Yali; Dong, Di; Gevaert, Olivier; Tian, Jie
2017-08-01
Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Network (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighboring voxels can vary according to their spatial locations. We describe this phenomenon with a novel central pooling layer that retains much of the information at the voxel patch center, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling strategy to facilitate model training, in which training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset, including 893 nodules, and an independent dataset of 74 nodules from Guangdong General Hospital (GDGH). CF-CNN achieved superior segmentation performance, with average Dice scores of 82.15% and 80.02% for the two datasets, respectively. Moreover, comparing our results with the inter-radiologist consistency on the LIDC dataset shows a difference in average Dice score of only 1.98%. Copyright © 2017. Published by Elsevier B.V.
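The difficulty-weighted sampling idea can be sketched in a few lines: samples are drawn with probability proportional to an assumed per-sample difficulty score (how that score is derived is specific to the paper):

```python
# Hedged sketch: draw training samples with probability proportional to an
# assumed per-sample difficulty score.
import numpy as np

rng = np.random.default_rng(9)
n = 1000
difficulty = rng.random(n) ** 2          # assumed difficulty scores in [0, 1]
p = difficulty / difficulty.sum()
batch = rng.choice(n, size=64, replace=False, p=p)
print("mean difficulty of batch vs population:",
      difficulty[batch].mean(), difficulty.mean())
```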
Random forest feature selection approach for image segmentation
Lefkovits, László; Lefkovits, Szidónia; Emerich, Simina; Vaida, Mircea Florin
2017-03-01
In the field of image segmentation, discriminative models have shown promising performance. Generally, every such model begins with the extraction of numerous features from annotated images. Most authors build their discriminative models using many features without any selection criteria. A more reliable model can be built with a framework that selects the variables important for classification and eliminates the unimportant ones. In this article we present a framework for feature selection and data dimensionality reduction built around the random forest (RF) algorithm and its variable importance evaluation. To deal with datasets so large as to be practically unmanageable, we propose an RF-based algorithm that reduces the dimension of the database by eliminating irrelevant features. Furthermore, this framework is applied to optimize our discriminative model for brain tumor segmentation.
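A minimal sketch of RF-importance-driven selection: rank features by impurity importance and keep those above the mean, via scikit-learn's SelectFromModel; the synthetic dataset and the mean threshold are illustrative choices:

```python
# Hedged sketch: feature selection driven by random-forest variable
# importance, keeping features whose importance exceeds the mean.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=1000, n_features=60, n_informative=10,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
selector = SelectFromModel(rf, threshold="mean", prefit=True)
X_reduced = selector.transform(X)
print("features kept:", X_reduced.shape[1], "of", X.shape[1])
```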
Brain tumor segmentation with symmetric texture and symmetric intensity-based decision forests.
Bianchi, Anthony; Miller, James V; Tan, Ek Tsoon; Montillo, Albert
2013-04-01
Accurate automated segmentation of brain tumors in MR images is challenging due to overlapping tissue intensity distributions and amorphous tumor shape. However, a clinically viable solution providing precise quantification of tumor and edema volume would enable better pre-operative planning, treatment monitoring and drug development. Our contributions are threefold. First, we design efficient gradient and LBP-TOP based texture features which improve classification accuracy over standard intensity features. Second, we extend our texture and intensity features to symmetric texture and symmetric intensity features, which further improve the accuracy for all tissue classes. Third, we demonstrate further accuracy enhancement by extending our long-range features from 100mm to a full 200mm. We assess our brain segmentation technique on 20 patients in the BraTS 2012 dataset. The impact of each contribution is measured, and the combination of all the features is shown to yield state-of-the-art accuracy and speed.
Image segmentation via foreground and background semantic descriptors
Yuan, Ding; Qiang, Jingjing; Yin, Jihao
2017-09-01
In the field of image processing, obtaining a complete foreground that is not uniform in color or texture has been a challenging task. Unlike other methods, which segment the image using only low-level features, we present a segmentation framework in which high-level visual features, such as semantic information, are used. First, initial semantic labels are obtained using a nonparametric method. Then, a subset of the training images with a foreground similar to that of the input image is selected, and the semantic labels are further refined according to this subset. Finally, the input image is segmented by integrating the object affinity and the refined semantic labels. State-of-the-art performance was achieved in experiments on the challenging MSRC-21 dataset.
Hayat, Maqsood; Khan, Asifullah
2013-05-01
Membrane proteins are prime constituents of a cell, acting as mediators between intra- and extracellular processes. The prediction of a transmembrane (TM) helix and its topology provides essential information regarding the function and structure of membrane proteins. However, prediction of TM helices and their topology is a challenging issue in bioinformatics and computational biology due to experimental complexities and the lack of established structures. Therefore, the location and orientation of TM helix segments are predicted from topogenic sequences. In this regard, we propose the WRF-TMH model for effectively predicting TM helix segments. In this model, information is extracted from membrane protein sequences using a compositional index and physicochemical properties. Redundant and irrelevant features are eliminated through singular value decomposition. The selected features provided by these feature extraction strategies are then fused to develop a hybrid model. A weighted random forest is adopted as the classification approach. We have used two benchmark datasets, including low- and high-resolution datasets. Tenfold cross-validation is employed to assess the performance of the WRF-TMH model at different levels, including per protein, per segment, and per residue. The success rates of the WRF-TMH model are quite promising and are the best reported so far on the same datasets. It is expected that the WRF-TMH model might play a substantial role, and will provide essential information, for further structural and functional studies on membrane proteins. The accompanying web predictor is accessible at http://111.68.99.218/WRF-TMH/.
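The pipeline of SVD-based feature reduction followed by a weighted random forest can be approximated in scikit-learn; here class weighting stands in for the paper's weighted RF, and the data are synthetic stand-ins:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for fused compositional-index + physicochemical residue features.
X, y = make_classification(n_samples=1000, n_features=120, weights=[0.8, 0.2],
                           random_state=0)

model = make_pipeline(
    TruncatedSVD(n_components=40, random_state=0),   # drop redundant/irrelevant directions
    RandomForestClassifier(n_estimators=300,
                           class_weight="balanced",  # simple stand-in for RF weighting
                           random_state=0),
)
model.fit(X, y)
```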
Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion
NASA Astrophysics Data System (ADS)
Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei
2017-02-01
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method that combines deep learning with multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to exclude irrelevant regions. Then, we use a convolutional neural network (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels and obtain preliminary segmentation results. Unlike handcrafted features, the CNN automatically learns deep features adapted to the data. Finally, we select similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to the manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
Multifractal texture estimation for detection and segmentation of brain tumors.
Islam, Atiq; Reza, Syed M S; Iftekharuddin, Khan M
2013-11-01
A stochastic model for characterizing tumor texture in brain magnetic resonance (MR) images is proposed. The efficacy of the model is demonstrated in patient-independent brain tumor texture feature extraction and tumor segmentation in magnetic resonance images (MRIs). Due to its complex appearance in MRI, brain tumor texture is formulated using a multiresolution-fractal model known as multifractional Brownian motion (mBm). A detailed mathematical derivation of the mBm model and a corresponding novel algorithm to extract spatially varying multifractal features are proposed. A multifractal feature-based brain tumor segmentation method is developed next. To evaluate efficacy, tumor segmentation performance using the proposed multifractal feature is compared with that using a Gabor-like multiscale texton feature. Furthermore, a novel patient-independent tumor segmentation scheme is proposed by extending the well-known AdaBoost algorithm. The modification of the AdaBoost algorithm involves assigning weights to component classifiers based on their ability to classify difficult samples and their confidence in such classification. Experimental results for 14 patients with over 300 MRIs show the efficacy of the proposed technique in automatic segmentation of tumors in brain MRIs. Finally, comparison with other state-of-the-art brain tumor segmentation works on the publicly available low-grade glioma BRATS2012 dataset shows that our segmentation results are more consistent and on average outperform these methods for the patients where ground truth is made available.
A Higher-Order Neural Network Design for Improving Segmentation Performance in Medical Image Series
NASA Astrophysics Data System (ADS)
Selvi, Eşref; Selver, M. Alper; Güzeliş, Cüneyt; Dicle, Oǧuz
2014-03-01
Segmentation of anatomical structures from medical image series is an ongoing field of research. Although organs of interest are three-dimensional in nature, slice-by-slice approaches are widely used in clinical applications because of their ease of integration with the current manual segmentation scheme. To use slice-by-slice techniques effectively, adjacent slice information, which represents the likelihood that a region belongs to the structure of interest, plays a critical role. Recent studies focus on using the distance transform directly as a feature or to increase the feature values in the vicinity of the search area. This study presents a novel approach: a higher-order neural network whose input layer receives features together with their products with the distance transform. This allows higher-order interactions between features through the non-linearity introduced by the multiplication. The application of the proposed method to 9 CT datasets for segmentation of the liver shows higher performance than well-known higher-order classification neural networks.
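The higher-order input construction, features multiplied by the adjacent-slice distance transform, can be sketched as follows; the feature count, network size and toy mask are illustrative assumptions, and the MLP is a stand-in classifier:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from sklearn.neural_network import MLPClassifier

# Toy setup: per-pixel features F and the previous slice's liver mask.
F = np.random.rand(64 * 64, 6)                       # 6 features per pixel (illustrative)
mask_prev = np.zeros((64, 64), bool)
mask_prev[20:40, 20:40] = True

# Distance transform of the adjacent-slice segmentation, one value per pixel.
dist = distance_transform_edt(~mask_prev).ravel()

# Higher-order inputs: raw features plus feature x distance products.
F_aug = np.hstack([F, F * dist[:, None]])

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=300)
clf.fit(F_aug, mask_prev.ravel())                    # toy target for illustration
```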
NASA Astrophysics Data System (ADS)
Ye, Xujiong; Siddique, Musib; Douiri, Abdel; Beddoe, Gareth; Slabaugh, Greg
2009-02-01
Automatic segmentation of medical images is a challenging problem due to the complexity and variability of human anatomy, poor contrast of the object being segmented, and noise resulting from the image acquisition process. This paper presents a novel feature-guided method for the segmentation of 3D medical lesions. The proposed algorithm combines 1) a volumetric shape feature (shape index) based on high-order partial derivatives; 2) mean shift clustering in a joint spatial-intensity-shape (JSIS) feature space; and 3) a modified expectation-maximization (MEM) algorithm on the mean shift mode map to merge neighboring regions (modes). In this way, the volumetric shape feature is integrated directly into the segmentation process. The joint spatial-intensity-shape features provide rich information for the segmentation of anatomic structures or lesions (tumors). The proposed method has been evaluated on a clinical dataset of thoracic CT scans that contains 68 nodules. A volume overlap ratio between each segmented nodule and the ground truth annotation is calculated. Using the proposed method, the mean overlap ratio over all the nodules is 0.80. On visual inspection and using a quantitative evaluation, the experimental results demonstrate the potential of the proposed method. It can properly segment a variety of nodules, including juxta-vascular and juxta-pleural nodules, which are challenging for conventional methods due to the high similarity of intensities between the nodules and their adjacent tissues. This approach could also be applied to lesion segmentation in other anatomies, such as polyps in the colon.
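A sketch of the JSIS idea: compute a shape index from principal curvatures (Koenderink's definition is assumed here) and run mean shift in the joint spatial-intensity-shape space; the curvatures below are random stand-ins for derivative-based estimates, and the bandwidth is illustrative:

```python
import numpy as np
from sklearn.cluster import MeanShift

# Assume principal curvatures k1 >= k2 per voxel were computed from second
# derivatives of a (toy) volume I; here they are random stand-ins.
I = np.random.rand(16, 16, 8)
k1, k2 = np.random.rand(*I.shape), -np.random.rand(*I.shape)

# Koenderink shape index in [-1, 1]: cup, rut, saddle, ridge, cap.
shape_index = (2.0 / np.pi) * np.arctan2(k2 + k1, k1 - k2)

# Joint spatial-intensity-shape (JSIS) feature space, one row per voxel.
zz, yy, xx = np.indices(I.shape)
X = np.column_stack([a.ravel() for a in (xx, yy, zz, I, shape_index)])
modes = MeanShift(bandwidth=2.0).fit_predict(X)      # bandwidth is illustrative
```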
Serag, Ahmed; Wilkinson, Alastair G.; Telford, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Anblagan, Devasuda; Macnaught, Gillian; Semple, Scott I.; Boardman, James P.
2017-01-01
Quantitative volumes from brain magnetic resonance imaging (MRI) acquired across the life course may be useful for investigating long term effects of risk and resilience factors for brain development and healthy aging, and for understanding early life determinants of adult brain structure. Therefore, there is an increasing need for automated segmentation tools that can be applied to images acquired at different life stages. We developed an automatic segmentation method for human brain MRI, where a sliding window approach and a multi-class random forest classifier were applied to high-dimensional feature vectors for accurate segmentation. The method performed well on brain MRI data acquired from 179 individuals, analyzed in three age groups: newborns (38–42 weeks gestational age), children and adolescents (4–17 years) and adults (35–71 years). As the method can learn from partially labeled datasets, it can be used to segment large-scale datasets efficiently. It could also be applied to different populations and imaging modalities across the life course. PMID:28163680
Valcarcel, Alessandra M; Linn, Kristin A; Vandekar, Simon N; Satterthwaite, Theodore D; Muschelli, John; Calabresi, Peter A; Pham, Dzung L; Martin, Melissa Lynne; Shinohara, Russell T
2018-03-08
Magnetic resonance imaging (MRI) is crucial for in vivo detection and characterization of white matter lesions (WMLs) in multiple sclerosis. While WMLs have been studied for over two decades using MRI, automated segmentation remains challenging. Although the majority of statistical techniques for the automated segmentation of WMLs are based on single imaging modalities, recent advances have used multimodal techniques for identifying WMLs. Complementary modalities emphasize different tissue properties, which help identify interrelated features of lesions. We propose the Method for Inter-Modal Segmentation Analysis (MIMoSA), a fully automatic lesion segmentation algorithm that utilizes novel covariance features from intermodal coupling regression, in addition to mean structure, to model the probability that a lesion is contained in each voxel. MIMoSA was validated by comparison with both expert manual and other automated segmentation methods in two datasets. The first included 98 subjects imaged at Johns Hopkins Hospital, in which bootstrap cross-validation was used to compare the performance of MIMoSA against OASIS and LesionTOADS, two popular automatic segmentation approaches. For a secondary validation, publicly available data from a segmentation challenge were used for performance benchmarking. In the Johns Hopkins study, MIMoSA yielded an average Sørensen-Dice coefficient (DSC) of 0.57 and a partial AUC of 0.68, calculated with false positive rates up to 1%. This was superior to the performance of OASIS and LesionTOADS. The proposed method also performed competitively on the segmentation challenge dataset. MIMoSA resulted in statistically significant improvements in lesion segmentation performance compared with LesionTOADS and OASIS, and performed competitively in an additional validation study. Copyright © 2018 by the American Society of Neuroimaging.
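Intermodal coupling regression can be sketched with closed-form local linear regression: within each neighborhood, regress modality B on modality A and keep the slope and intercept maps as coupling features. The neighborhood size and stabilizing epsilon are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coupling_maps(A, B, size=5):
    """Voxel-wise local regression of modality B on modality A.

    Slope and intercept maps summarize how the two modalities covary in a
    small neighborhood; `size` and the epsilon are illustrative.
    """
    mA, mB = uniform_filter(A, size), uniform_filter(B, size)
    cov = uniform_filter(A * B, size) - mA * mB
    var = uniform_filter(A * A, size) - mA * mA
    slope = cov / (var + 1e-8)
    return slope, mB - slope * mA

slope, intercept = coupling_maps(np.random.rand(64, 64, 32),
                                 np.random.rand(64, 64, 32))
```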
Unsupervised Deep Learning Applied to Breast Density Segmentation and Mammographic Risk Scoring.
Kallenberg, Michiel; Petersen, Kersten; Nielsen, Mads; Ng, Andrew Y; Pengfei Diao; Igel, Christian; Vachon, Celine M; Holland, Katharina; Winkel, Rikke Rass; Karssemeijer, Nico; Lillholm, Martin
2016-05-01
Mammographic risk scoring has commonly been automated by extracting a set of handcrafted features from mammograms and relating the responses directly or indirectly to breast cancer risk. We present a method that learns a feature hierarchy from unlabeled data. When the learned features are used as the input to a simple classifier, two different tasks can be addressed: i) breast density segmentation, and ii) scoring of mammographic texture. The proposed model learns features at multiple scales. To control the model's capacity, a novel sparsity regularizer is introduced that incorporates both lifetime and population sparsity. We evaluated our method on three different clinical datasets. Our state-of-the-art results show that the learned breast density scores have a very strong positive relationship with manual ones, and that the learned texture scores are predictive of breast cancer. The model is easy to apply and generalizes to many other segmentation and scoring problems.
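A minimal stand-in for a regularizer combining lifetime and population sparsity, in PyTorch; the quadratic form and target rate rho are assumptions, not the paper's exact penalty:

```python
import torch

def sparsity_penalty(h, rho=0.05):
    """h: (batch, units) activations in [0, 1].

    Lifetime sparsity: each unit should fire rarely across the batch.
    Population sparsity: each sample should activate few units.
    rho is an illustrative target activation level.
    """
    lifetime = ((h.mean(dim=0) - rho) ** 2).mean()
    population = ((h.mean(dim=1) - rho) ** 2).mean()
    return lifetime + population

loss_reg = sparsity_penalty(torch.sigmoid(torch.randn(32, 128)))
```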
A Generic Deep-Learning-Based Approach for Automated Surface Inspection.
Ren, Ruoxu; Hung, Terence; Tan, Kay Chen
2018-03-01
Automated surface inspection (ASI) is a challenging task in industry, as collecting a training dataset is usually costly and related methods are highly dataset-dependent. In this paper, a generic approach that requires little training data for ASI is proposed. First, the approach builds a classifier on the features of image patches, where the features are transferred from a pretrained deep learning network. Next, pixel-wise prediction is obtained by convolving the trained classifier over the input image. An experiment on three public datasets and one industrial dataset is carried out. The experiment involves two tasks: 1) image classification and 2) defect segmentation. The results of the proposed algorithm are compared against several of the best benchmarks in the literature. In the classification tasks, the proposed method improves accuracy by 0.66%-25.50%. In the segmentation tasks, the proposed method reduces error escape rates by 6.00%-19.00% in three defect types and improves accuracies by 2.29%-9.86% in all seven defect types. In addition, the proposed method achieves a 0.0% error escape rate in the segmentation task on the industrial data.
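The train-then-convolve pattern can be sketched as follows; the descriptor function here is a crude stand-in for transferred deep features, and the patch/stride sizes and toy data are illustrative:

```python
import numpy as np
from skimage.util import view_as_windows
from sklearn.linear_model import LogisticRegression

# `extract` stands in for features transferred from a pretrained network.
def extract(patch):
    return np.array([patch.mean(), patch.std()])

# Train a patch classifier on labeled descriptors (toy data).
train_patches = np.random.rand(200, 32, 32)
train_labels = np.random.randint(0, 2, 200)
clf = LogisticRegression().fit(np.array([extract(p) for p in train_patches]),
                               train_labels)

def defect_map(img, patch=32, step=8):
    """Slide the trained classifier over the image (the 'convolution' step)."""
    wins = view_as_windows(img, (patch, patch), step=step)
    r, c = wins.shape[:2]
    d = np.array([extract(wins[i, j]) for i in range(r) for j in range(c)])
    return clf.predict_proba(d)[:, 1].reshape(r, c)   # coarse defect-probability map

prob = defect_map(np.random.rand(128, 128))
```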
3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.
Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail
2018-04-15
This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have generally been avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully convolutional networks. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
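A minimal PyTorch sketch of the small-kernel idea: stacked 3x3x3 convolutions give a deep, fully convolutional model with dense volumetric output; channel counts and class count are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Stacked 3x3x3 convolutions: a large receptive field built from small
    kernels, with fewer parameters and more non-linearities than one big kernel."""
    def __init__(self, in_ch=1, n_classes=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.PReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.PReLU(),
            nn.Conv3d(32, 64, 3, padding=1), nn.PReLU(),
        )
        self.classify = nn.Conv3d(64, n_classes, 1)   # fully convolutional -> dense inference

    def forward(self, x):
        return self.classify(self.features(x))

logits = Small3DCNN()(torch.randn(1, 1, 32, 32, 32))   # per-voxel class scores
```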
NASA Astrophysics Data System (ADS)
Delgado, Juan A.; Altuve, Miguel; Nabhan Homsi, Masun
2015-12-01
This paper introduces a robust method based on the Support Vector Machine (SVM) algorithm to detect the presence of fetal QRS (fQRS) complexes in electrocardiogram (ECG) recordings provided by the PhysioNet/CinC Challenge 2013. ECG signals are first segmented into contiguous frames of 250 ms duration and then labeled into six classes. Fetal segments are tagged according to the position of the fQRS complex within each one. Next, segment feature extraction and dimensionality reduction are performed by applying principal component analysis to the Haar-wavelet transform. After that, two sub-datasets are generated to separate representative segments from atypical ones. The class imbalance problem is dealt with by applying sampling without replacement on each sub-dataset. Finally, two SVMs are trained and cross-validated using the two balanced sub-datasets separately. Experimental results show that the proposed approach achieves high performance in fetal heartbeat detection, reaching up to 90.95% accuracy, 92.16% sensitivity, 88.51% specificity, 94.13% positive predictive value and 84.96% negative predictive value. A comparative study is also carried out to show the performance of two other machine learning algorithms for fQRS complex estimation, namely K-nearest neighbors and a Bayesian network.
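The front end of such a pipeline, 250 ms framing, Haar-wavelet coefficients, PCA, then an SVM, can be sketched as follows with synthetic stand-in data; the sampling rate, wavelet level and component counts are assumptions:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC

fs = 1000                                   # assumed sampling rate (Hz)
frame = int(0.250 * fs)                     # 250 ms frames, as in the abstract
ecg = np.random.randn(60 * fs)              # toy recording stand-in
segments = ecg[: len(ecg) // frame * frame].reshape(-1, frame)

# Haar-wavelet coefficients per segment, then PCA for dimensionality reduction.
coeffs = np.array([np.concatenate(pywt.wavedec(s, "haar", level=4))
                   for s in segments])
X = PCA(n_components=20).fit_transform(coeffs)

labels = np.random.randint(0, 6, len(X))    # stand-in for the six segment classes
clf = SVC(kernel="rbf").fit(X, labels)
```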
A Manual Segmentation Tool for Three-Dimensional Neuron Datasets.
Magliaro, Chiara; Callara, Alejandro L; Vanello, Nicola; Ahluwalia, Arti
2017-01-01
To date, automated or semi-automated software and algorithms for segmentation of neurons from three-dimensional imaging datasets have had limited success. The gold standard for neural segmentation is considered to be the manual isolation performed by an expert. To facilitate the manual isolation of complex objects from image stacks, such as neurons in their native arrangement within the brain, a new Manual Segmentation Tool (ManSegTool) has been developed. ManSegTool allows the user to load an image stack, scroll through the images and manually draw the structures of interest stack-by-stack. Users can eliminate unwanted regions or split structures (i.e., branches from different neurons that are too close to each other but, to the experienced eye, clearly belong to distinct cells), view the object in 3D and save the results obtained. The tool can be used for testing the performance of a single-neuron segmentation algorithm or to extract complex objects where the available automated methods still fail. Here we describe the software's main features and then show an example of how ManSegTool can be used to segment neuron images acquired using a confocal microscope. In particular, expert neuroscientists were asked to segment different neurons, from which morphometric variables were subsequently extracted as a benchmark for precision. In addition, a literature-defined index for evaluating the goodness of segmentation was used as a benchmark for accuracy. Neocortical layer axons from a DIADEM challenge dataset were also segmented with ManSegTool and compared with the manual "gold standard" generated for the competition.
Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.
Zhao, Liya; Jia, Kebin
2016-01-01
Early brain tumor detection and diagnosis are critical in clinical practice. Segmentation of the tumor area therefore needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, both of which are important for pixel classification and recognition. Moreover, brain tumors can appear anywhere in the brain and vary in size and shape across patients. We design a three-stream framework, named multiscale CNNs, which automatically detects the optimal top three scales of the image size and combines information from the regions around each pixel at those different scales. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized by MICCAI 2013 are utilized for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. In comparison with traditional CNNs and the best two methods in BRATS 2012 and 2013, our framework shows improved brain tumor segmentation accuracy and robustness.
A Patch-Based Approach for the Segmentation of Pathologies: Application to Glioma Labelling.
Cordier, Nicolas; Delingette, Herve; Ayache, Nicholas
2016-04-01
In this paper, we describe a novel and generic approach to address fully-automatic segmentation of brain tumors by using multi-atlas patch-based voting techniques. In addition to avoiding the local search window assumption, the conventional patch-based framework is enhanced through several simple procedures: an improvement of the training dataset in terms of both label purity and intensity statistics, augmented features to implicitly guide the nearest-neighbor search, multi-scale patches, invariance to cube isometries, and stratification of the votes with respect to cases and labels. A probabilistic model automatically delineates regions of interest enclosing high-probability tumor volumes, which allows the algorithm to achieve a highly competitive running time despite minimal processing power and resources. This method was evaluated on the Multimodal Brain Tumor Image Segmentation challenge datasets. State-of-the-art results are achieved with a limited learning stage, thus restricting the risk of overfitting. Moreover, segmentation smoothness does not involve any post-processing.
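The core of patch-based voting can be sketched as a nearest-neighbor majority vote over an atlas of labeled patches; patch size, neighbor count and label count are illustrative assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy training atlas: flattened multimodal patches with center-voxel labels.
train_patches = np.random.rand(5000, 3 * 5 * 5 * 5)   # 5x5x5 patches, 3 modalities
train_labels = np.random.randint(0, 5, 5000)          # 5 tissue/tumor labels

index = NearestNeighbors(n_neighbors=5).fit(train_patches)

def vote(query, n_labels=5):
    """Label each query patch by a majority vote of its nearest atlas patches."""
    _, idx = index.kneighbors(query)
    votes = train_labels[idx]                                        # (m, 5)
    counts = np.apply_along_axis(np.bincount, 1, votes, minlength=n_labels)
    return counts.argmax(axis=1)

pred = vote(np.random.rand(10, 3 * 5 * 5 * 5))
```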
Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing
2018-06-01
Feature selection plays an important role in motor imagery pattern classification based on EEG signals. It is a process that aims to select an optimal feature subset from the original set. It offers two significant advantages: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model that selects the optimal feature subset based on the Kullback-Leibler divergence measure and automatically selects the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by autoregressive modeling and log-variance; Kullback-Leibler divergence based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that it yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
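One way to realize a KL-divergence feature score, assuming class-conditional Gaussian feature distributions (our assumption, not necessarily the paper's exact estimator):

```python
import numpy as np

def gaussian_kl(m1, s1, m2, s2):
    """Closed-form KL divergence between two univariate Gaussians."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

# X: (trials, features) log-variance features; y: binary motor-imagery labels.
X = np.random.randn(200, 30)
y = np.random.randint(0, 2, 200)

A, B = X[y == 0], X[y == 1]
scores = gaussian_kl(A.mean(0), A.std(0) + 1e-8, B.mean(0), B.std(0) + 1e-8)
top = np.argsort(scores)[::-1][:10]        # the 10 most class-separating features
```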
Efficient Lane Boundary Detection with Spatial-Temporal Knowledge Filtering
Nan, Zhixiong; Wei, Ping; Xu, Linhai; Zheng, Nanning
2016-01-01
Lane boundary detection technology has progressed rapidly over the past few decades. However, many challenges that often lead to lane detection unavailability remain to be solved. In this paper, we propose a spatial-temporal knowledge filtering model to detect lane boundaries in videos. To address the challenges of structure variation, large noise and complex illumination, this model incorporates prior spatial-temporal knowledge with lane appearance features to jointly identify lane boundaries. The model first extracts line segments in video frames. Two novel filters, the Crossing Point Filter (CPF) and the Structure Triangle Filter (STF), are proposed to filter out noisy line segments. The two filters introduce spatial structure constraints and temporal location constraints into lane detection, which represent the spatial-temporal knowledge about lanes. A straight line or curve model determined by a state machine is used to fit the line segments and finally output the lane boundaries. We collected a challenging realistic traffic scene dataset. The experimental results on this dataset and another standard dataset demonstrate the strength of our method. The proposed method has been successfully applied to our autonomous experimental vehicle. PMID:27529248
Myocardial scar segmentation from magnetic resonance images using convolutional neural network
NASA Astrophysics Data System (ADS)
Zabihollahy, Fatemeh; White, James A.; Ukwatta, Eranga
2018-02-01
Accurate segmentation of myocardial fibrosis or scar may provide important advancements for the prediction and management of malignant ventricular arrhythmias in patients with cardiovascular disease. In this paper, we propose a semi-automated method for segmentation of myocardial scar from late gadolinium enhancement magnetic resonance images (LGE-MRI) using a convolutional neural network (CNN). In contrast to image intensity-based methods, CNN-based algorithms have the potential to improve the accuracy of scar segmentation through the creation of high-level features from a combination of convolutional, detection and pooling layers. Our algorithm was trained using 2,336,703 image patches extracted from 420 slices of five 3D LGE-MR datasets, then validated on 2,204,178 patches from a testing dataset of seven 3D LGE-MR images comprising 624 slices, all obtained from patients with chronic myocardial infarction. For evaluation, we compared the algorithm-generated segmentations to manual delineations by experts. Our CNN-based method achieved an average Dice similarity coefficient (DSC), precision, and recall of 94.50 ± 3.62%, 96.08 ± 3.10%, and 93.96 ± 3.75%, respectively. Compared to several intensity threshold-based methods for scar segmentation, the results of our method show greater agreement with manual expert segmentation.
A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.
Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip
2014-11-01
This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve a more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter, and used as inputs to the MFMK-SVM model. This provides multiple complementary features per sample while keeping the MFMK-SVM model easy to implement and efficient to compute. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion into the clustering optimization process to improve the robustness and reliability of the clustering results through iterative optimization. Furthermore, the clustering validity is employed to select the training samples for the learning of the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the power of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of our proposed method.
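A multiple-kernel SVM can be sketched with a precomputed weighted sum of per-view kernels; the kernel weights and toy feature views are illustrative assumptions:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# Three feature views per pixel: intensity, gradient, C1 SMF (toy stand-ins).
n = 300
views = [np.random.rand(n, d) for d in (3, 8, 16)]
y = np.random.randint(0, 2, n)

def combined_kernel(views_a, views_b, weights=(0.4, 0.3, 0.3)):
    """Weighted sum of per-view RBF kernels; the weights are illustrative."""
    return sum(w * rbf_kernel(a, b) for w, a, b in zip(weights, views_a, views_b))

svm = SVC(kernel="precomputed").fit(combined_kernel(views, views), y)
# At test time, pass combined_kernel(test_views, views) to svm.predict.
```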
Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery
NASA Astrophysics Data System (ADS)
Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.
2016-12-01
Unmanned Aerial Systems (UAS) have been used widely as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS were equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve the clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate the spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset, generated using a structure-from-motion (SfM) algorithm, together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions, and can also differentiate object features on the surface.
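The super-pixel plus RGB+DEM clustering stage can be sketched with skimage and scikit-learn; the segment count, compactness and two-cluster target are illustrative assumptions, and the arrays are toy stand-ins for co-registered imagery:

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import AgglomerativeClustering

rgb = np.random.rand(200, 200, 3)          # stand-ins for the co-registered
dem = np.random.rand(200, 200)             # orthoimagery and SfM-derived DEM

# Super-pixels from the RGB bands suppress salt-and-pepper noise.
seg = slic(rgb, n_segments=400, compactness=10)

# One feature vector per super-pixel: mean R, G, B and mean elevation.
feats = np.array([list(rgb[seg == i].mean(axis=0)) + [dem[seg == i].mean()]
                  for i in np.unique(seg)])

labels = AgglomerativeClustering(n_clusters=2).fit_predict(feats)  # e.g. vegetation vs. not
```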
Role of Gist and PHOG Features in Computer-Aided Diagnosis of Tuberculosis without Segmentation
Chauhan, Arun; Chauhan, Devesh; Rout, Chittaranjan
2014-01-01
Purpose: Effective diagnosis of tuberculosis (TB) relies on accurate interpretation of radiological patterns found in a chest radiograph (CXR). A lack of skilled radiologists and other resources, especially in developing countries, hinders efficient diagnosis. Computer-aided diagnosis (CAD) methods provide a second opinion to radiologists on their findings and thereby assist in better diagnosis of cancer and other diseases, including TB. However, existing CAD methods for TB are based on the extraction of textural features from manually or semi-automatically segmented CXRs. These methods are prone to errors and cannot be implemented in X-ray machines for automated classification. Methods: Gabor, Gist, histogram of oriented gradients (HOG), and pyramid histogram of oriented gradients (PHOG) features extracted from the whole image can be implemented in existing X-ray machines to discriminate between TB and non-TB CXRs in an automated manner. Localized features were extracted for the above methods using various parameters, such as frequency range, blocks and region of interest. The performance of these features was evaluated against textural features. Two digital CXR image datasets (8-bit DA and 14-bit DB) were used to evaluate the performance of these features. Results: Gist (accuracy 94.2% for DA, 86.0% for DB) and PHOG (accuracy 92.3% for DA, 92.0% for DB) features provided better results for both datasets. These features were implemented in a MATLAB toolbox, TB-Xpredict, which is freely available for academic use at http://sourceforge.net/projects/tbxpredict/. This toolbox provides both automated training and prediction modules and does not require expertise in image processing to operate. Conclusion: Since the features used in TB-Xpredict do not require segmentation, the toolbox can easily be implemented in X-ray machines. This toolbox can effectively be used for the mass screening of TB in high-burden areas with improved efficiency. PMID:25390291
The life-cycle of upper-tropospheric jet streams identified with a novel data segmentation algorithm
NASA Astrophysics Data System (ADS)
Limbach, S.; Schömer, E.; Wernli, H.
2010-09-01
Jet streams are prominent features of the upper-tropospheric atmospheric flow. Through the thermal wind relationship, these regions of intense horizontal wind speed (typically larger than 30 m/s) are associated with pronounced baroclinicity, i.e., with regions where extratropical cyclones develop due to baroclinic instability processes. Individual jet streams are non-stationary elongated features that can extend over more than 2000 km in the along-flow direction and 200-500 km in the across-flow direction. Their lifetime can vary between a few days and several weeks. In recent years, feature-based algorithms have been developed that allow compiling synoptic climatologies and typologies of upper-tropospheric jet streams based upon objective selection criteria and climatological reanalysis datasets. In this study, a novel algorithm to efficiently identify jet streams using an extended region-growing segmentation approach is introduced. This algorithm iterates over a 4-dimensional field of horizontal wind speed from ECMWF analyses and decides at each grid point whether all prerequisites for a jet stream are met. In a single pass, the algorithm keeps track of all adjacencies of these grid points and creates the 4-dimensional connected segments associated with each jet stream. In addition to the detection of these sets of connected grid points, the algorithm analyzes the development over time of the distinct 3-dimensional features each segment consists of. Important events in the development of these features, for example mergings and splittings, are detected and analyzed on a per-grid-point and per-feature basis. The output of the algorithm consists of the actual sets of grid points, augmented with information about the particular events, and of so-called event graphs, which are an abstract representation of the distinct 3-dimensional features and events of each segment. This technique provides comprehensive information about the frequency of upper-tropospheric jet streams, their preferred regions of genesis, merging, splitting, and lysis, and statistical information about their size, amplitude and lifetime. The presentation will introduce the technique, provide example visualizations of the time evolution of the identified 3-dimensional jet stream features, and present results from a first multi-month "climatology" of upper-tropospheric jets. In the future, the technique can be applied to longer datasets, for instance reanalyses and output from global climate model simulations, to provide detailed information about key characteristics of jet stream life cycles.
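The core of such a segmentation, thresholding the 4-D wind-speed field and extracting connected components whose time connectivity links a jet's life cycle, can be sketched with scipy's general n-dimensional connected-component labeling (a stand-in for the paper's single-pass region-growing); the 30 m/s threshold follows the abstract, and the array is a toy stand-in:

```python
import numpy as np
from scipy import ndimage

# 4-D horizontal wind speed (time, level, lat, lon), in m/s (toy stand-in).
wind = 40 * np.random.rand(10, 5, 90, 180)

mask = wind > 30.0                          # the jet-stream wind-speed criterion

# n-dimensional connected-component labeling; connectivity along the time
# axis links a jet's 3-D features into a single 4-D segment.
labels, n_segments = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=np.arange(1, n_segments + 1))
```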
Soltaninejad, Mohammadreza; Yang, Guang; Lambrou, Tryphon; Allinson, Nigel; Jones, Timothy L; Barrick, Thomas R; Howe, Franklyn A; Ye, Xujiong
2017-02-01
We propose a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from Fluid-Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Imaging (MRI). The method is based on a superpixel technique and classification of each superpixel. A number of novel image features, including intensity-based features, Gabor textons, fractal analysis and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomized trees (ERT) classifier is compared with a support vector machine (SVM) to classify each superpixel into tumour and non-tumour. The proposed method is evaluated on two datasets: (1) our own clinical dataset: 19 MRI FLAIR images of patients with gliomas of grade II to IV, and (2) BRATS 2012 dataset: 30 FLAIR images with 10 low-grade and 20 high-grade gliomas. The experimental results demonstrate the high detection and segmentation performance of the proposed method using the ERT classifier. For our own cohort, the average detection sensitivity, balanced error rate and Dice overlap measure for the segmented tumour against the ground truth are 89.48%, 6% and 0.91, respectively, while for the BRATS dataset the corresponding evaluation results are 88.09%, 6% and 0.88, respectively. This provides a close match to expert delineation across all grades of glioma, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
Zagoris, Konstantinos; Pratikakis, Ioannis; Gatos, Basilis
2017-05-03
Word spotting strategies employed in historical handwritten documents face many challenges due to variation in writing style and intense degradation. In this paper, a new method that permits effective word spotting in handwritten documents is presented. It relies upon document-oriented local features that take into account information around representative keypoints, as well as a matching process that incorporates spatial context in a local proximity search, without using any training data. Experimental results on four historical handwritten datasets for two different scenarios (segmentation-based and segmentation-free), using standard evaluation measures, show the improved performance achieved by the proposed methodology.
Mizianty, Marcin J; Kurgan, Lukasz
2009-12-13
Knowledge of structural class is used by numerous methods for identification of structural/functional characteristics of proteins and could be used for the detection of remote homologues, particularly for chains that share twilight-zone similarity. In contrast to existing sequence-based structural class predictors, which target four major classes and are designed for high-identity sequences, we predict seven classes from sequences that share twilight-zone identity with the training sequences. The proposed MODular Approach to Structural class prediction (MODAS) method is unique in that it allows selection of any subset of the classes. MODAS is also the first to utilize a novel, custom-built feature-based sequence representation that combines evolutionary profiles and predicted secondary structure. The features quantify information relevant to the definition of the classes, including conservation of residues and the arrangement and number of helix/strand segments. Our comprehensive design considers 8 feature selection methods and 4 classifiers to develop Support Vector Machine-based classifiers that are tailored for each of the seven classes. Tests on 5 twilight-zone and 1 high-similarity benchmark datasets and comparison with over two dozen modern competing predictors show that MODAS provides the best overall accuracy, ranging between 80% and 96.7% (83.5% for the twilight-zone datasets), depending on the dataset. This translates into 19% and 8% error rate reductions when compared against the best performing competing method on the two largest datasets. The proposed predictor provides predictions at 58% accuracy for the membrane proteins class, which is not considered by the majority of existing methods, despite the fact that this class accounts for only 2% of the data. Our predictive model is analyzed to demonstrate how and why the input features are associated with the corresponding classes. The improved predictions stem from the novel features that express the collocation of the secondary structure segments in the protein sequence and that combine evolutionary and secondary structure information. Our work demonstrates that conservation and arrangement of the secondary structure segments predicted along the protein chain can successfully predict structural classes, which are defined based on the spatial arrangement of the secondary structures. A web server is available at http://biomine.ece.ualberta.ca/MODAS/.
Contextually guided very-high-resolution imagery classification with semantic segments
NASA Astrophysics Data System (ADS)
Zhao, Wenzhi; Du, Shihong; Wang, Qiao; Emery, William J.
2017-10-01
Contextual information, revealing relationships and dependencies between image objects, is among the most important information for the successful interpretation of very-high-resolution (VHR) remote sensing imagery. Over the last decade, the geographic object-based image analysis (GEOBIA) technique has been widely used to first divide images into homogeneous parts and then assign semantic labels according to the properties of the image segments. However, due to the complexity and heterogeneity of VHR images, segments without semantic labels (i.e., semantic-free segments) generated with low-level features often fail to represent geographic entities (e.g., building roofs are often partitioned into chimney/antenna/shadow parts). As a result, it is hard to capture contextual information across geographic entities when using semantic-free segments. In contrast to low-level features, "deep" features can be used to build robust segments with accurate labels (i.e., semantic segments) that represent geographic entities at higher levels. Based on these semantic segments, semantic graphs can be constructed to capture contextual information in VHR images. In this paper, semantic segments are first explored with convolutional neural networks (CNN), and a conditional random field (CRF) model is then applied to model the contextual information between semantic segments. Experimental results on two challenging VHR datasets (the Vaihingen and Beijing scenes) indicate that the proposed method is an improvement over existing image classification techniques, with overall accuracy ranging from 82% to 96%.
Exudate-based diabetic macular edema detection in fundus images using publicly available datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul
2011-01-01
Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment, DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME through the presence of exudation. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR dataset (an independently labelled dataset with 1200 images) with cross-dataset testing (i.e., the classifier was trained on an independent dataset and tested on MESSIDOR). Our algorithm obtained an AUC between 0.88 and 0.94, depending on the dataset/features used. Additionally, it does not need ground truth at the lesion level to reject false positives and is computationally efficient, as it generates a diagnosis in an average of 4.4 s (9.3 s including optic nerve localization) per image on a 2.6 GHz platform with an unoptimized Matlab implementation.
A fusion network for semantic segmentation using RGB-D data
NASA Astrophysics Data System (ADS)
Yuan, Jiahui; Zhang, Kun; Xia, Yifan; Qi, Lin; Dong, Junyu
2018-04-01
Semantic scene parsing is important in many intelligent fields, including perceptual robotics. For the past few years, pixel-wise prediction tasks like semantic segmentation with RGB images have been extensively studied and have reached remarkable parsing levels, thanks to convolutional neural networks (CNNs) and large scene datasets. With the development of stereo cameras and RGB-D sensors, it is expected that additional depth information will help improve accuracy. In this paper, we propose a semantic segmentation framework incorporating RGB and complementary depth information. Motivated by the success of fully convolutional networks (FCN) in the semantic segmentation field, we design a fully convolutional network consisting of two branches which extract features from both RGB and depth data simultaneously and fuse them as the network goes deeper. Instead of aggregating multiple models, our goal is to utilize RGB data and depth data more effectively in a single model. We evaluate our approach on the NYU-Depth V2 dataset, which consists of 1449 cluttered indoor scenes, and achieve results competitive with the state-of-the-art methods.
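A minimal two-branch fusion FCN in PyTorch, fusing depth features into the RGB stream as the network deepens; the layer layout, channel counts and 40-class head are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchFusion(nn.Module):
    """RGB and depth encoders whose feature maps are fused as depth increases."""
    def __init__(self, n_classes=40):
        super().__init__()
        def block(ci, co):
            return nn.Sequential(nn.Conv2d(ci, co, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.rgb1, self.rgb2 = block(3, 32), block(32, 64)
        self.dep1, self.dep2 = block(1, 32), block(32, 64)
        self.head = nn.Conv2d(64, n_classes, 1)      # fully convolutional head

    def forward(self, rgb, depth):
        r, d = self.rgb1(rgb), self.dep1(depth)
        r, d = self.rgb2(r + d), self.dep2(d)        # fuse depth into the RGB stream
        return F.interpolate(self.head(r + d), scale_factor=4)  # back to input size

out = TwoBranchFusion()(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))
```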
Incipient fault feature extraction of rolling bearings based on the MVMD and Teager energy operator.
Ma, Jun; Wu, Jiande; Wang, Xiaodong
2018-06-04
Incipient faults in rolling bearings are difficult to recognize, and the number of intrinsic mode functions (IMFs) produced by variational mode decomposition (VMD) must be set in advance and cannot be selected adaptively. Taking full advantage of the adaptive segmentation of the scale spectrum and Teager energy operator (TEO) demodulation, a new method for early fault feature extraction of rolling bearings based on the modified VMD and Teager energy operator (MVMD-TEO) is proposed. Firstly, the vibration signal of the rolling bearings is analyzed by adaptive scale-space spectrum segmentation to obtain the spectrum segmentation support boundary, from which the number K of IMFs to be decomposed by VMD is adaptively determined. Secondly, the original vibration signal is adaptively decomposed into K IMFs, and the effective IMF components are extracted based on the correlation coefficient criterion. Finally, the Teager energy spectrum of the signal reconstructed from the effective IMF components is calculated by the TEO, and the early fault features of the rolling bearings are extracted to realize fault identification and location. Comparative experiments between the proposed method and an existing fault feature extraction method based on Local Mean Decomposition and the Teager energy operator (LMD-TEO) have been implemented using experimental datasets and a measured dataset. The results of the comparative experiments in three application cases show that the presented method achieves comparable or slightly better performance than the LMD-TEO method, proving the validity and feasibility of the proposed method. Copyright © 2018. Published by Elsevier Ltd.
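The discrete Teager energy operator itself is simple enough to state exactly; applying it to a reconstructed IMF and inspecting the spectrum follows the abstract's final step (the toy signal below is a stand-in):

```python
import numpy as np

def teager(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].

    It amplifies short transient impacts, which makes weak incipient
    bearing-fault signatures stand out in the energy spectrum.
    """
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Toy usage: Teager energy spectrum of a reconstructed IMF signal.
sig = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 4096))
spectrum = np.abs(np.fft.rfft(teager(sig)))
```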
Blood vessel segmentation in color fundus images based on regional and Hessian features.
Shah, Syed Ayaz Ali; Tang, Tong Boon; Faye, Ibrahima; Laude, Augustinus
2017-08-01
To propose a new algorithm for blood vessel segmentation based on regional and Hessian features for image analysis in retinal abnormality diagnosis. Firstly, color fundus images from the publicly available database DRIVE were converted from RGB to grayscale. To enhance the contrast of the dark objects (blood vessels) against the background, the dot product of the grayscale image with itself was generated. To rectify the variation in contrast, we used a 5 × 5 window filter on each pixel. Based on 5 regional features, 1 intensity feature and 2 Hessian features per scale using 9 scales, we extracted a total of 24 features. A linear minimum squared error (LMSE) classifier was trained to classify each pixel into a vessel or non-vessel pixel. The DRIVE dataset provided 20 training and 20 test color fundus images. The proposed algorithm achieves a sensitivity of 72.05% with 94.79% accuracy. It achieved higher accuracy (0.9206) in the peripapillary region, where the ocular manifestations in the microvasculature due to glaucoma, central retinal vein occlusion, etc., are most obvious. This supports the proposed algorithm as a strong candidate for automated vessel segmentation.
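Hessian eigenvalue features per scale can be sketched with skimage; the three scales below are illustrative (the paper uses nine), and the image is a toy stand-in:

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def hessian_features(gray, sigmas=(1, 2, 4)):
    """Two Hessian-eigenvalue features per scale. Tubular vessels yield one
    large-magnitude and one near-zero eigenvalue at the matching scale."""
    feats = []
    for s in sigmas:
        H = hessian_matrix(gray, sigma=s)
        l1, l2 = hessian_matrix_eigvals(H)
        feats += [l1, l2]
    return np.stack(feats, axis=-1)          # (H, W, 2 * len(sigmas))

fmap = hessian_features(np.random.rand(128, 128))
```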
Pulmonary embolism detection using localized vessel-based features in dual energy CT
NASA Astrophysics Data System (ADS)
Dicente Cid, Yashin; Depeursinge, Adrien; Foncubierta Rodríguez, Antonio; Platon, Alexandra; Poletti, Pierre-Alexandre; Müller, Henning
2015-03-01
Pulmonary embolism (PE) affects up to 600,000 patients and contributes to at least 100,000 deaths every year in the United States alone. Diagnosis of PE can be difficult as most symptoms are unspecific, and early diagnosis is essential for successful treatment. Computed Tomography (CT) images can show morphological anomalies that suggest the existence of PE. Various image-based procedures have been proposed for improving computer-aided diagnosis of PE. We propose a novel method for detecting PE based on localized vessel-based features computed in Dual Energy CT (DECT) images. DECT provides 4D data indexed by the three spatial coordinates and the energy level. The proposed features encode the variation of the Hounsfield units across the energy levels and the CT attenuation related to the amount of iodine contrast in each vessel. A local classification of the vessels is obtained through the classification of these features. Moreover, the localization of the vessel in the lung allows better comparison between patients. Results show that the simple features designed are able to classify pulmonary embolism patients with an AUC (area under the receiver operating curve) of 0.71 on a lobe basis. Prior segmentation of the lung lobes is not necessary, because an automatic atlas-based segmentation obtains similar AUC levels (0.65) for the same dataset. The automatic atlas reaches 0.80 AUC in a larger dataset with more control cases.
Multi-scale Gaussian representation and outline-learning based cell image segmentation.
Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli
2013-01-01
High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of the subsequent steps, for example cell classification and cell tracking, often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics and cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set: only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them, with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
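The coefficient-of-variation-over-scale-space idea can be sketched as follows; the scale set and epsilon are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_cv(img, sigmas=(1, 2, 4, 8)):
    """Coefficient of variation across a Gaussian scale-space stack:
    near zero over flat background, high over textured cytoplasm."""
    stack = np.stack([gaussian_filter(img.astype(float), s) for s in sigmas])
    return stack.std(axis=0) / (stack.mean(axis=0) + 1e-8)

cv_map = scale_space_cv(np.random.rand(256, 256))
```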
Liang, Yunyun; Liu, Sanyang; Zhang, Shengli
2015-01-01
Prediction of protein structural classes for low-similarity sequences is useful for understanding fold patterns, regulation, functions, and interactions of proteins. It is well known that feature extraction is significant for prediction of protein structural class, and it mainly uses the protein primary sequence, predicted secondary structure sequence, and position-specific scoring matrix (PSSM). Currently, prediction based solely on the PSSM has played a key role in improving prediction accuracy. In this paper, we propose a novel method called CSP-SegPseP-SegACP by fusing consensus sequence (CS), segmented PsePSSM, and segmented autocovariance transformation (ACT) based on PSSM. Three widely used low-similarity datasets (1189, 25PDB, and 640) are adopted in this paper. A 700-dimensional (700D) feature vector is constructed, and its dimension is reduced to 224D by principal component analysis (PCA). To verify the performance of our method, rigorous jackknife cross-validation tests are performed on the 1189, 25PDB, and 640 datasets. Comparison of our results with existing PSSM-based methods demonstrates that our method achieves favorable and competitive performance. This offers an important complement to other PSSM-based methods for prediction of protein structural classes for low-similarity sequences.
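The dimensionality-reduction step lends itself to a short illustration. The sketch below, with random data standing in for the real CS/PsePSSM/ACT features, reduces a 700D vector to 224D with PCA as described:

```python
# Sketch of the PCA step: 700-D PSSM-derived features reduced to 224-D.
# The random matrix is a stand-in for the real feature vectors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1189, 700))   # e.g. one row per sequence in the 1189 dataset
pca = PCA(n_components=224)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)             # (1189, 224)
```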
Lymphoma diagnosis in histopathology using a multi-stage visual learning approach
NASA Astrophysics Data System (ADS)
Codella, Noel; Moradi, Mehdi; Matasar, Matt; Syeda-Mahmood, Tanveer; Smith, John R.
2016-03-01
This work evaluates the performance of a multi-stage image enhancement, segmentation, and classification approach for lymphoma recognition in hematoxylin and eosin (H and E) stained histopathology slides of excised human lymph node tissue. In the first stage, the original histology slide undergoes various image enhancement and segmentation operations, creating an additional 5 images for every slide. These new images emphasize unique aspects of the original slide, including dominant staining, staining segmentations, non-cellular groupings, and cellular groupings. For the resulting 6 total images, a collection of visual features is extracted from 3 different spatial configurations. Visual features include the first fully connected layer (4096 dimensions) of the Caffe convolutional neural network trained on ImageNet data. In total, over 200 resultant visual descriptors are extracted for each slide. Non-linear SVMs are trained over each of the over 200 descriptors and are then input to a forward stepwise ensemble selection that optimizes a late-fusion sum of logistically normalized model outputs using local hill climbing. The approach is evaluated on a public NIH dataset containing 374 images representing 3 lymphoma conditions: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). Results demonstrate a 38.4% reduction in residual error over the current state-of-the-art on this dataset.
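The late-fusion stage can be illustrated with a simplified greedy variant. The sketch below uses synthetic SVM scores and plain forward selection with replacement as a stand-in for the paper's locally hill-climbed ensemble selection:

```python
# Sketch: forward stepwise ensemble selection over logistically normalized
# model scores, greedily adding the model that most improves accuracy of
# the late-fusion sum. Scores and labels are synthetic.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_ensemble(scores, labels):
    """scores: (n_models, n_samples) raw SVM outputs; labels: (n_samples,) in {0, 1}."""
    probs = sigmoid(scores)                      # logistic normalization
    chosen, best_acc = [], 0.0
    improved = True
    while improved:
        improved = False
        for m in range(probs.shape[0]):
            trial = chosen + [m]
            fused = probs[trial].sum(axis=0)     # late-fusion sum
            acc = np.mean((fused > len(trial) / 2) == labels)
            if acc > best_acc:
                best_acc, best_m, improved = acc, m, True
        if improved:
            chosen.append(best_m)                # selection with replacement
    return chosen, best_acc

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)
scores = rng.normal(size=(10, 200)) + labels * rng.uniform(0, 2, size=(10, 1))
print(forward_ensemble(scores, labels))
```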
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.
2014-08-26
Understanding the interactions of structured communities known as "biofilms" and other complex matrices is possible through X-ray micro-tomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to low contrast between objects and high noise levels. New software is therefore required for effective interpretation and analysis of the data. This work describes the development and application of methods to analyze and visualize high-resolution X-ray micro-tomography datasets.
Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H
2017-09-01
Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort the PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of the PPG to first detect the MNA-corrupted data and then discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from the time-frequency spectra of the PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements, and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches, the last of which was recently developed in our laboratory. The proposed TifMA algorithm consistently provided higher detection rates than the other three methods, with accuracies greater than 95% for all data. Moreover, our algorithm was able to pinpoint the start and end times of the MNA with an error of less than 1 s in duration, whereas the next-best algorithm had a detection error of more than 2.2 s. The final, most challenging, dataset was collected to verify the performance of the algorithm in discriminating between corrupted data that were usable for accurate HR estimation and data that were nonusable. On average, 48% of the data segments were found to contain MNA, and of these, 38% could still be used to provide reliable HR estimation.
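A hedged sketch of the first classification task may be useful: time-frequency features extracted from PPG-like segments via the STFT, then an SVM separating MNA-free from MNA-corrupted data. The sampling rate, the feature statistics, and the synthetic signals are assumptions, not the TifMA design:

```python
# Sketch: STFT-based features + SVM to flag MNA-corrupted PPG segments.
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC

fs = 100  # assumed sampling rate (Hz)

def tf_features(segment):
    """Simple statistics of the segment's time-frequency power spectrum."""
    _, _, Z = stft(segment, fs=fs, nperseg=256)
    power = np.abs(Z) ** 2
    return np.array([power.mean(), power.std(), power.max(), np.median(power)])

rng = np.random.default_rng(2)
t = np.arange(0, 7, 1 / fs)   # 7-second segments
clean = [np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.normal(size=t.size) for _ in range(50)]
corrupt = [np.sin(2 * np.pi * 1.2 * t) + rng.normal(size=t.size) for _ in range(50)]
X = np.array([tf_features(s) for s in clean + corrupt])
y = np.array([0] * 50 + [1] * 50)  # 0 = MNA-free, 1 = MNA-corrupted
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```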
SU-C-207B-04: Automated Segmentation of Pectoral Muscle in MR Images of Dense Breasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verburg, E; Waard, SN de; Veldhuis, WB
Purpose: To develop and evaluate a fully automated method for segmentation of the pectoral muscle boundary in Magnetic Resonance Imaging (MRI) of dense breasts. Methods: Segmentation of the pectoral muscle is an important part of automatic breast image analysis methods. Current methods for segmenting the pectoral muscle in breast MRI have difficulties delineating the muscle border correctly in breasts with a large proportion of fibroglandular tissue (i.e., dense breasts). Hence, an automated method based on dynamic programming was developed, incorporating heuristics aimed at shape, location and gradient features. To assess the method, the pectoral muscle was segmented in 91 randomly selected participants (mean age 56.6 years, range 49.5–75.2 years) from a large MRI screening trial in women with dense breasts (ACR BI-RADS category 4). Each MR dataset consisted of 178 or 179 T1-weighted images with voxel size 0.64 × 0.64 × 1.00 mm³. All images (n=16,287) were reviewed and scored by a radiologist. In contrast to volume overlap coefficients, such as DICE, the radiologist detected deviations in the segmented muscle border and determined whether the result would impact the ability to accurately determine the volume of fibroglandular tissue and the detection of breast lesions. Results: According to the radiologist's scores, 95.5% of the slices did not mask breast tissue in such a way that it could affect detection of breast lesions or volume measurements. In 13.1% of the slices a deviation in the segmented muscle border was present which would not impact breast lesion detection. In 70 datasets (78%) at least 95% of the slices were segmented in such a way that it would not affect detection of breast lesions, and in 60 datasets (66%) this was 100%. Conclusion: Dynamic programming with dedicated heuristics shows promising potential to segment the pectoral muscle in women with dense breasts.
Fast and robust segmentation of white blood cell images by self-supervised learning.
Zheng, Xin; Wang, Yong; Wang, Guoyou; Liu, Jianguo
2018-04-01
A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations and the adhesion between WBCs and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module uses the coarse segmentation result of the first module as automatic labels to actively train a support vector machine (SVM) classifier. The trained SVM classifier is then used to classify each pixel of the image and achieve a more accurate segmentation result. To improve segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundaries are introduced. To further reduce the time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach on two blood cell image datasets obtained under various imaging and staining conditions. The experimental results show that our approach achieves superior accuracy and time cost on both datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
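The self-supervised loop — an unsupervised pass whose output labels train a supervised refiner — can be sketched briefly. K = 2, the toy image, and the linear SVM are illustrative assumptions, not the paper's exact pipeline:

```python
# Sketch: K-means gives coarse pixel labels, which then supervise an SVM
# that re-classifies every pixel for a refined segmentation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
img = rng.uniform(size=(60, 60, 3))
img[20:40, 20:40] *= 0.3          # darker "cell" region
pixels = img.reshape(-1, 3)

coarse = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
svm = LinearSVC().fit(pixels, coarse)     # coarse labels act as supervision
refined = svm.predict(pixels).reshape(60, 60)
print(np.unique(refined, return_counts=True))
```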
NASA Astrophysics Data System (ADS)
Zargari Khuzani, Abolfazl; Danala, Gopichandh; Heidari, Morteza; Du, Yue; Mashhadi, Najmeh; Qiu, Yuchen; Zheng, Bin
2018-02-01
Higher recall rates are a major challenge in mammography screening. Thus, developing a computer-aided diagnosis (CAD) scheme to classify between malignant and benign breast lesions can play an important role in improving the efficacy of mammography screening. The objective of this study is to develop and test a unique image feature fusion framework to improve performance in classifying suspicious mass-like breast lesions depicted on mammograms. The image dataset consists of 302 suspicious masses detected on both craniocaudal and mediolateral-oblique view images. Among them, 151 were malignant and 151 were benign. The study consists of the following 3 image processing and feature analysis steps. First, an adaptive region growing segmentation algorithm was used to automatically segment mass regions. Second, a set of 70 image features related to spatial and frequency characteristics of the mass regions was initially computed. Third, a generalized linear regression model (GLM) based machine learning classifier, combined with a bat optimization algorithm, was used to optimally fuse the selected image features based on a predefined performance assessment index. The area under the ROC curve (AUC) was used as the performance assessment index. Applying the CAD scheme to the testing dataset yielded an AUC of 0.75+/-0.04, which was significantly higher than using a single best feature (AUC=0.69+/-0.05) or the classifier with equally weighted features (AUC=0.73+/-0.05). This study demonstrated that, compared to the conventional equally weighted approach, an unequally weighted feature fusion approach has the potential to significantly improve accuracy in classifying between malignant and benign breast masses.
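A sketch of unequal-weight feature fusion scored by AUC follows, with simple random search over the weight simplex standing in for the bat optimization algorithm; the data and feature count are synthetic:

```python
# Sketch: search for feature-fusion weights that maximize AUC. Random
# search replaces the bat algorithm used in the study.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)
X = rng.normal(size=(302, 5))               # 5 selected features, 302 masses
y = np.array([0] * 151 + [1] * 151)         # 151 benign, 151 malignant
X[y == 1] += rng.uniform(0.1, 0.6, size=5)  # make features weakly informative

best_auc, best_w = 0.0, None
for _ in range(2000):                       # random search over the weight simplex
    w = rng.uniform(size=5)
    w /= w.sum()
    auc = roc_auc_score(y, X @ w)           # fused score = weighted feature sum
    if auc > best_auc:
        best_auc, best_w = auc, w

print(f"best AUC {best_auc:.3f} with weights {np.round(best_w, 2)}")
```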
OpenCL based machine learning labeling of biomedical datasets
NASA Astrophysics Data System (ADS)
Amoros, Oscar; Escalera, Sergio; Puig, Anna
2011-03-01
In this paper, we propose a two-stage labeling method for large biomedical datasets through a parallel approach on a single GPU. Diagnostic methods, structure volume measurements, and visualization systems are of major importance for surgery planning, intra-operative imaging and image-guided surgery. In all cases, an automatic and interactive method to label or tag the different structures contained in the input data becomes imperative. Several approaches to label or segment biomedical datasets have been proposed to discriminate different anatomical structures in an output tagged dataset. Among existing methods, supervised learning methods for segmentation have been devised so that biomedical datasets can be easily analyzed by a non-expert user. However, they still have some problems concerning practical application, such as slow learning and testing speeds. In addition, recent technological developments have led to widespread availability of multi-core CPUs and GPUs, as well as new software languages, such as NVIDIA's CUDA and OpenCL, allowing parallel programming paradigms to be applied on conventional personal computers. The Adaboost classifier is one of the most widely applied methods for labeling in the Machine Learning community. In a first stage, Adaboost trains a binary classifier from a set of pre-labeled samples described by a set of features. This binary classifier is defined as a weighted combination of weak classifiers. Each weak classifier is a simple decision function estimated on a single feature value. Then, at the testing stage, each weak classifier is independently applied to the features of a set of unlabeled samples. In this work, we propose an alternative representation of the Adaboost binary classifier. We use this proposed representation to define a new GPU-based parallelized Adaboost testing stage using OpenCL. We provide numerical experiments based on large available datasets and compare our results to CPU-based strategies in terms of time and labeling speed.
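The alternative representation pays off because the testing stage becomes a data-parallel array computation. The sketch below shows a vectorized Adaboost testing stage over decision-stump parameters — the structure that maps naturally onto an OpenCL kernel; the stump parameters and data are illustrative:

```python
# Sketch: Adaboost testing as one vectorized pass. All weak classifiers
# (decision stumps) are applied to all samples simultaneously.
import numpy as np

def adaboost_test(X, feat_idx, thresholds, polarities, alphas):
    """X: (n_samples, n_features); remaining args: (n_weak,) stump parameters."""
    # (n_samples, n_weak) matrix of weak decisions, computed in one pass
    decisions = polarities * np.sign(X[:, feat_idx] - thresholds)
    return np.sign(decisions @ alphas)       # weighted combination of weak outputs

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 16))
labels = adaboost_test(X,
                       feat_idx=np.array([0, 3, 7]),
                       thresholds=np.array([0.1, -0.2, 0.0]),
                       polarities=np.array([1.0, -1.0, 1.0]),
                       alphas=np.array([0.8, 0.5, 0.3]))
print(np.unique(labels, return_counts=True))
```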
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is the most widely used spatial filtering method in motor imagery based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of the eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. In addition, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves classification performance. The proposed method gives significantly better classification accuracies in comparison with several competing methods in the literature. The proposed approach is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
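Conventional CSP, the method's starting point, can be sketched as a generalized eigendecomposition of the two class covariance matrices, keeping eigenvector pairs from both extremes of the spectrum; the EEG data here are synthetic:

```python
# Sketch: conventional CSP spatial filters from two classes of EEG trials.
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=2):
    """X1, X2: (trials, channels, samples) per class; returns (2*n_pairs, channels)."""
    def avg_cov(X):
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]   # trace-normalized covariances
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # generalized eigenproblem C1 w = lambda (C1 + C2) w
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    idx = np.concatenate([order[:n_pairs], order[-n_pairs:]])  # both extremes
    return vecs[:, idx].T

rng = np.random.default_rng(5)
X1 = rng.normal(size=(20, 8, 250))
X2 = 2.0 * rng.normal(size=(20, 8, 250))
W = csp_filters(X1, X2)
print(W.shape)   # (4, 8): four spatial filters over eight channels
```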
Network-level accident-mapping: Distance based pattern matching using artificial neural network.
Deka, Lipika; Quddus, Mohammed
2014-04-01
The objective of an accident-mapping algorithm is to snap traffic accidents onto the correct road segments. Assigning accidents to the correct segments facilitates robustly carrying out key analyses in accident research, including the identification of accident hot-spots, network-level risk mapping and segment-level accident risk modelling. Existing risk mapping algorithms have some severe limitations: (i) they are not easily 'transferable', as the algorithms are specific to given accident datasets; (ii) they do not perform well in all road-network environments, such as areas with a dense road network; and (iii) the methods used do not perform well in addressing the inaccuracies inherent in recorded accident data across different types of road environment. The purpose of this paper is to develop a new accident-mapping algorithm based on the common variables observed in most accident databases (e.g. road name and type, direction of vehicle movement before the accident and recorded accident location). The challenges here are to: (i) develop a method that takes into account uncertainties inherent to the recorded traffic accident data and the underlying digital road network data, (ii) accurately determine the type and proportion of inaccuracies, and (iii) develop a robust algorithm that can be adapted to any accident dataset and road network of varying complexity. In order to overcome these challenges, a distance-based pattern-matching approach is used to identify the correct road segment. This is based on vectors containing feature values that are common to the accident data and the network data. Since each feature does not contribute equally towards the identification of the correct road segment, an ANN approach using the single-layer perceptron is used to assist in "learning" the relative importance of each feature in the distance calculation and hence the correct link identification. The performance of the developed algorithm was evaluated based on a reference accident dataset from the UK, confirming that its accuracy is much better than that of other methods. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
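The distance-based matching step reduces to scoring each candidate segment by a weighted feature distance. In the sketch below the features, candidates, and weights (which the perceptron would learn) are all illustrative:

```python
# Sketch: weighted distance between an accident record's feature vector
# and each candidate road segment; the closest segment wins.
import numpy as np

def best_segment(accident_feats, segment_feats, weights):
    """segment_feats: (n_candidates, n_features). Returns index of best match."""
    d = np.sqrt(((segment_feats - accident_feats) ** 2 * weights).sum(axis=1))
    return int(np.argmin(d))

accident = np.array([1.0, 0.0, 120.0])         # e.g. road type, direction, heading
candidates = np.array([[1.0, 0.0, 118.0],
                       [2.0, 1.0, 121.0],
                       [1.0, 0.0, 300.0]])
weights = np.array([0.5, 0.3, 0.2])            # perceptron-learned importances (assumed)
print(best_segment(accident, candidates, weights))   # -> 0
```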
Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang
2016-01-01
Background: Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and images that contain lung nodules. Method: Our proposed method first uses the position of the lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The SGNF, which is optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results: Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-10
... methods help solve imaging problems such as image "leakage," which causes distortion, overloads datasets... enhance detection. This is helpful to identify harmful features such as precancerous polyps or other anomalies. The field of use may be limited to "computer aided detection in colonography." The prospective...
Kim, Jongin; Park, Hyeong-jun
2016-01-01
The purpose of this study is to classify EEG data on imagined speech in a single trial. We recorded EEG data while five subjects imagined different vowels: /a/, /e/, /i/, /o/, and /u/. We divided each single-trial dataset into thirty segments and extracted features (mean, variance, standard deviation, and skewness) from all segments. To reduce the dimension of the feature vector, we applied a feature selection algorithm based on the sparse regression model. These features were classified using a support vector machine with a radial basis function kernel, an extreme learning machine, and two variants of an extreme learning machine with different kernels. Because each single trial consisted of thirty segments, our algorithm decided the label of the single trial by selecting the most frequent output among the outputs of the thirty segments. As a result, we observed that the extreme learning machine and its variants achieved better classification rates than the support vector machine with a radial basis function kernel and linear discriminant analysis. Thus, our results suggest that EEG responses to imagined speech can be successfully classified in a single trial using an extreme learning machine with radial basis function and linear kernels. This study on the classification of imagined speech might contribute to the development of silent speech BCI systems. PMID:28097128
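The trial-level decision rule is simple to illustrate: classify every segment, then take the most frequent output. The toy classifier below stands in for the trained extreme learning machine:

```python
# Sketch: per-segment features (mean, variance, std, skewness) and a
# majority vote over thirty segments to label a single trial.
import numpy as np
from scipy.stats import skew

def segment_features(seg):
    return np.array([seg.mean(), seg.var(), seg.std(), skew(seg)])

def trial_label(segments, classifier):
    votes = [classifier(segment_features(s)) for s in segments]
    return max(set(votes), key=votes.count)    # most frequent output wins

rng = np.random.default_rng(6)
segments = [rng.normal(loc=0.5, size=100) for _ in range(30)]
toy_clf = lambda f: int(f[0] > 0.3)            # stand-in for the trained ELM
print(trial_label(segments, toy_clf))
```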
Guo, Yanrong; Gao, Yaozong; Shao, Yeqin; Price, True; Oto, Aytekin; Shen, Dinggang
2014-01-01
Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on the randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model nonGaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison. 
Conclusions: A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images. PMID:24989402
Qazi, Arish A; Pekar, Vladimir; Kim, John; Xie, Jason; Breen, Stephen L; Jaffray, David A
2011-11-01
Intensity modulated radiation therapy (IMRT) allows greater control over dose distribution, which leads to a decrease in radiation-related toxicity. IMRT, however, requires precise and accurate delineation of the organs at risk and target volumes. Manual delineation is tedious and suffers from both interobserver and intraobserver variability. State-of-the-art auto-segmentation methods are either atlas-based, model-based or hybrid; however, robust, fully automated segmentation is often difficult due to the insufficient discriminative information provided by standard medical imaging modalities for certain tissue types. In this paper, the authors present a fully automated hybrid approach which combines deformable registration with a model-based approach to accurately segment normal and target tissues from head and neck CT images. The segmentation process starts by using an average atlas to reliably identify salient landmarks in the patient image. The relationship between these landmarks and the reference dataset serves to guide a deformable registration algorithm, which allows for a close initialization of a set of organ-specific deformable models in the patient image, ensuring their robust adaptation to the boundaries of the structures. Finally, the models are automatically fine-adjusted by our boundary refinement approach, which attempts to model the uncertainty in model adaptation using a probabilistic mask. This uncertainty is subsequently resolved by voxel classification based on local low-level organ-specific features. To quantitatively evaluate the method, the authors auto-segment several organs at risk and target tissues from 10 head and neck CT images. They compare the segmentations to the manual delineations outlined by the expert. The evaluation is carried out by estimating two common quantitative measures on 10 datasets: volume overlap fraction or the Dice similarity coefficient (DSC), and a geometrical metric, the median symmetric Hausdorff distance (HD), which is evaluated slice-wise. They achieve an average overlap of 93% for the mandible, 91% for the brainstem, 83% for the parotids, 83% for the submandibular glands, and 74% for the lymph node levels. Our automated segmentation framework is able to segment anatomy in the head and neck region with high accuracy within a clinically acceptable segmentation time.
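The two evaluation measures are standard and easy to reproduce. Below is a sketch of the Dice similarity coefficient and a symmetric Hausdorff distance on toy binary masks, computed per slice as in the paper's slice-wise evaluation:

```python
# Sketch: Dice similarity coefficient and symmetric Hausdorff distance
# between an automatic and a manual binary segmentation mask.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def symmetric_hausdorff(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)   # boundary-free point-set variant
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

auto = np.zeros((32, 32), bool); auto[8:20, 8:20] = True
manual = np.zeros((32, 32), bool); manual[10:22, 9:21] = True
print(f"DSC = {dice(auto, manual):.3f}, HD = {symmetric_hausdorff(auto, manual):.2f}")
```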
NASA Astrophysics Data System (ADS)
Alshehhi, Rasha; Marpu, Prashanth Reddy
2017-04-01
Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate roads from the background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) constructing a graph representation of the image based on an initial segmentation and (ii) hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare it with similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
3D Feature Extraction for Unstructured Grids
NASA Technical Reports Server (NTRS)
Silver, Deborah
1996-01-01
Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rios Velazquez, E; Meier, R; Dunn, W
Purpose: Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to manually defined sub-volumes by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. Methods: MRI sets of 67 GBM patients were downloaded from the Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software, including necrosis, edema, contrast-enhancing and non-enhancing tumor. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Results: Auto-segmented sub-volumes showed high agreement with manually delineated volumes (range (r): 0.65-0.91). They also showed higher correlation with VASARI features (auto r = 0.35, 0.60 and 0.59; manual r = 0.29, 0.50, 0.43, for contrast-enhancing, necrosis and edema, respectively). The contrast-enhancing volume and post-contrast abnormal volume showed the highest C-index (0.73 and 0.72), comparable to manually defined volumes (p = 0.22 and p = 0.07, respectively). The non-enhancing region defined by BraTumIA showed a significantly higher prognostic value (CI = 0.71) than the edema (CI = 0.60), both of which could not be distinguished by manual delineation. Conclusion: BraTumIA tumor sub-compartments showed higher correlation with VASARI data, and equivalent performance in terms of prognosis compared to manual sub-volumes. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has large potential in high-throughput medical imaging research.
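The C-index used for the prognostic assessment can be illustrated directly: it is the fraction of comparable patient pairs whose predicted risk ordering matches the observed survival ordering. Times, events, and risks below are toy values:

```python
# Sketch: concordance index (C-index) over comparable patient pairs.
import numpy as np

def c_index(times, events, risks):
    num = den = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:   # pair comparable: i had the event first
                den += 1
                num += risks[i] > risks[j]          # concordant if i was rated higher risk
    return num / den

times = np.array([5.0, 8.0, 3.0, 10.0])
events = np.array([1, 1, 1, 0])        # 0 = censored observation
risks = np.array([0.8, 0.4, 0.9, 0.1])
print(c_index(times, events, risks))   # 1.0 for perfectly ordered risks
```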
Truong, Nhan Duy; Nguyen, Anh Duy; Kuhlmann, Levin; Bonyadi, Mohammad Reza; Yang, Jiawei; Ippolito, Samuel; Kavehei, Omid
2018-05-07
Seizure prediction has attracted growing attention as one of the most challenging predictive data analysis efforts to improve the life of patients with drug-resistant epilepsy and tonic seizures. Many outstanding studies have reported great results in providing sensible indirect (warning systems) or direct (interactive neural stimulation) control over refractory seizures, some of which achieved high performance. However, to achieve high sensitivity and a low false prediction rate, many of these studies relied on handcraft feature extraction and/or tailored feature extraction, which is performed for each patient independently. This approach, however, is not generalizable, and requires significant modifications for each new patient within a new dataset. In this article, we apply convolutional neural networks to different intracranial and scalp electroencephalogram (EEG) datasets and propose a generalized retrospective and patient-specific seizure prediction method. We use the short-time Fourier transform on 30-s EEG windows to extract information in both the frequency domain and the time domain. The algorithm automatically generates optimized features for each patient to best classify preictal and interictal segments. The method can be applied to any other patient from any dataset without the need for manual feature extraction. The proposed approach achieves sensitivity of 81.4%, 81.2%, and 75% and a false prediction rate of 0.06/h, 0.16/h, and 0.21/h on the Freiburg Hospital intracranial EEG dataset, the Boston Children's Hospital-MIT scalp EEG dataset, and the American Epilepsy Society Seizure Prediction Challenge dataset, respectively. Our prediction method is also statistically better than an unspecific random predictor for most of the patients in all three datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
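The input pipeline — 30-s EEG windows turned into time-frequency images for the CNN — can be sketched as follows; the sampling rate, channel count, and synthetic data are assumptions:

```python
# Sketch: slice multichannel EEG into 30-s windows and convert each to a
# log-magnitude STFT spectrogram, the image-like tensor fed to the CNN.
import numpy as np
from scipy.signal import stft

fs = 256                      # assumed EEG sampling rate (Hz)
win = 30 * fs                 # 30-second windows
rng = np.random.default_rng(7)
eeg = rng.normal(size=(4, 10 * win))    # 4 channels, 10 windows' worth of data

images = []
for start in range(0, eeg.shape[1] - win + 1, win):
    _, _, Z = stft(eeg[:, start:start + win], fs=fs, nperseg=fs)
    images.append(np.log1p(np.abs(Z)))  # log-magnitude spectrogram per channel
X = np.stack(images)          # (n_windows, channels, freq_bins, time_frames)
print(X.shape)                # CNN input tensor
```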
Distance-based over-segmentation for single-frame RGB-D images
NASA Astrophysics Data System (ADS)
Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao
2017-11-01
Over-segmentation into super-pixels is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm segments an image into regions of perceptually similar pixels, but performs poorly when based only on the color image in indoor environments. Fortunately, RGB-D images can improve performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of super-pixels on the image. DBOS fills the holes in depth images to fully utilize the depth information, and adopts a SLIC-like framework for fast running. Additionally, depth features such as the plane projection distance are extracted to compute the assignment distance, which is the core of SLIC-like frameworks. Experiments on RGB-D images from the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining speeds comparable to them.
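The core of a SLIC-like framework extended with depth is the assignment distance. The sketch below combines color, spatial, and depth terms; the weights and the simplified depth term (standing in for the plane projection distance) are assumptions:

```python
# Sketch: SLIC-like pixel-to-center assignment distance with a depth term.
import numpy as np

def dbos_distance(pixel, center, m_color=10.0, m_space=1.0, m_depth=5.0):
    """pixel/center: dicts with 'lab' (3,), 'xy' (2,), 'depth' scalar."""
    d_c = np.linalg.norm(pixel["lab"] - center["lab"])     # color term (CIELAB)
    d_s = np.linalg.norm(pixel["xy"] - center["xy"])       # spatial term
    d_d = abs(pixel["depth"] - center["depth"])            # simplified depth term
    return np.sqrt((d_c / m_color) ** 2 + (d_s / m_space) ** 2 + (d_d / m_depth) ** 2)

p = {"lab": np.array([50.0, 10.0, 5.0]), "xy": np.array([12.0, 30.0]), "depth": 1.4}
c = {"lab": np.array([48.0, 12.0, 4.0]), "xy": np.array([15.0, 28.0]), "depth": 1.5}
print(round(dbos_distance(p, c), 3))
```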
Song, Yang; Cai, Weidong; Feng, David Dagan; Chen, Mei
2013-01-01
Automated segmentation of cell nuclei in microscopic images is critical to high-throughput analysis of the ever-increasing amount of data. Although cell nuclei are generally visually distinguishable for humans, automated segmentation faces challenges when there is significant intensity inhomogeneity among cell nuclei or in the background. In this paper, we propose an effective method for automated cell nucleus segmentation using a three-step approach. It first obtains an initial segmentation by extracting salient regions in the image, then reduces false positives using inter-region feature discrimination, and finally refines the boundary of the cell nuclei using intra-region contrast information. This method has been evaluated on two publicly available datasets of fluorescence microscopic images with 4009 cells, and has achieved superior performance compared to popular state-of-the-art methods using established metrics.
Defect Detection and Segmentation Framework for Remote Field Eddy Current Sensor Data.
Falque, Raphael; Vidal-Calleja, Teresa; Miro, Jaime Valls
2017-10-06
Remote-Field Eddy-Current (RFEC) technology is often used as a Non-Destructive Evaluation (NDE) method to prevent water pipe failures. By analyzing the RFEC data, it is possible to quantify the corrosion present in pipes. Quantifying the corrosion involves detecting defects and extracting their depth and shape. For large sections of pipelines, this can be extremely time-consuming if performed manually. Automated approaches are therefore well motivated. In this article, we propose an automated framework to locate and segment defects in individual pipe segments, starting from raw RFEC measurements taken over large pipelines. The framework relies on a novel feature to robustly detect these defects and a segmentation algorithm applied to the deconvolved RFEC signal. The framework is evaluated using both simulated and real datasets, demonstrating its ability to efficiently segment the shape of corrosion defects.
Speculation detection for Chinese clinical notes: Impacts of word segmentation and embedding models.
Zhang, Shaodian; Kang, Tian; Zhang, Xingting; Wen, Dong; Elhadad, Noémie; Lei, Jianbo
2016-04-01
Speculations represent uncertainty toward certain facts. In clinical texts, identifying speculations is a critical step of natural language processing (NLP). While it is a nontrivial task in many languages, detecting speculations in Chinese clinical notes can be particularly challenging because word segmentation may be necessary as an upstream operation. The objective of this paper is to construct a state-of-the-art speculation detection system for Chinese clinical notes and to investigate whether embedding features and word segmentations are worth exploiting toward this overall task. We propose a sequence labeling based system for speculation detection, which relies on features from bag of characters, bag of words, character embedding, and word embedding. We experiment on a novel dataset of 36,828 clinical notes with 5103 gold-standard speculation annotations on 2000 notes, and compare the systems in which word embeddings are calculated based on word segmentations given by general and by domain specific segmenters respectively. Our systems are able to reach performance as high as 92.2% measured by F score. We demonstrate that word segmentation is critical to produce high quality word embedding to facilitate downstream information extraction applications, and suggest that a domain dependent word segmenter can be vital to such a clinical NLP task in Chinese language. Copyright © 2016 Elsevier Inc. All rights reserved.
Morariu, Cosmin Adrian; Terheiden, Tobias; Dohle, Daniel Sebastian; Tsagakis, Konstantinos; Pauli, Josef
2016-02-01
Our goal is to provide precise measurements of the aortic dimensions in cases of dissection pathologies. Quantification of surface lengths and aortic radii/diameters, together with visualization of the dissection membrane, represents a crucial prerequisite for enabling minimally invasive treatment of type A dissections, which always involve the ascending aorta. We seek a measure invariant to luminance and contrast for aortic outer wall segmentation. Therefore, we propose a 2D graph-based approach using phase congruency combined with additional features. Phase congruency is extended to 3D by designing a novel conic directional filter and adding a lowpass component to the 3D Log-Gabor filterbank for extracting the fine dissection membrane, which separates the true lumen from the false one within the aorta. The result of the outer wall segmentation is compared with manually annotated axial slices belonging to 11 CTA datasets. Quantitative assessment of our novel 2D/3D membrane extraction algorithms has been obtained for 10 datasets and reveals subvoxel accuracy in all cases. Aortic inner and outer surface lengths, determined within 2 cadaveric CT datasets, are validated against manual measurements performed by a vascular surgeon on the excised aortas of the body donors. This contribution proposes a complete pipeline for segmentation and quantification of aortic dissections. Validation of the 3D contour-length quantification against ground truth represents a significant step toward custom-designed stent-grafts.
Two novel motion-based algorithms for surveillance video analysis on embedded platforms
NASA Astrophysics Data System (ADS)
Vijverberg, Julien A.; Loomans, Marijn J. H.; Koeleman, Cornelis J.; de With, Peter H. N.
2010-05-01
This paper proposes two novel motion-vector based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm for target detection uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets employing five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined in one complete analysis application. The performance of this application for target detection has been evaluated for the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49 respectively. The execution time on a PC-based platform is 36 ms. This includes the 20 ms for generating motion vectors, which are also required by the video encoder.
NASA Astrophysics Data System (ADS)
Maalek, R.; Lichti, D. D.; Ruwanpura, J.
2015-08-01
The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available with sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions and dust. In order to overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to the flat-slab floors and/or ceilings without performing the aforementioned stages, in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely, a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained in the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
Wu, Jing; Philip, Ana-Maria; Podkowinski, Dominika; Gerendas, Bianca S; Langs, Georg; Simader, Christian; Waldstein, Sebastian M; Schmidt-Erfurth, Ursula M
2016-01-01
Development of image analysis and machine learning methods for segmentation of clinically significant pathology in retinal spectral-domain optical coherence tomography (SD-OCT), used in disease detection and prediction, is limited by the availability of expertly annotated reference data. Retinal segmentation methods use datasets that either are not publicly available, come from only one device, or use different evaluation methodologies, making them difficult to compare. Thus, we present and evaluate a multiple-expert annotated reference dataset for the problem of intraretinal cystoid fluid (IRF) segmentation, a key indicator in exudative macular disease. In addition, a standardized framework for segmentation accuracy evaluation, applicable to other pathological structures, is presented. Integral to this work is the dataset used, which must be fit for purpose for IRF segmentation algorithm training and testing. We describe here a multivendor dataset comprising 30 scans. Each OCT scan for system training has been annotated by multiple graders using a proprietary system. Evaluation of the intergrader annotations shows a good correlation, thus making the reproducibly annotated scans suitable for the training and validation of image processing and machine learning based segmentation methods. The dataset will be made publicly available in the form of a segmentation Grand Challenge.
3D variational brain tumor segmentation on a clustered feature set
NASA Astrophysics Data System (ADS)
Popuri, Karteek; Cobzas, Dana; Jagersand, Martin; Shah, Sirish L.; Murtha, Albert
2009-02-01
Tumor segmentation from MRI data is a particularly challenging and time consuming task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. Our work addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multi-dimensional feature set. Further, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this paper is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to the previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned inside and outside region voxel probabilities in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance, during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters in the ventricles to be in the tumor and hence better disambiguate the tumor from brain tissue. We show the performance of our method on real MRI scans. The experimental dataset includes MRI scans, from patients with difficult instances, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Our method shows good results on these test cases.
Song, Yingchao; Luo, Haibo; Ma, Junkai; Hui, Bin; Chang, Zheng
2018-04-01
Sky detection plays an essential role in various computer vision applications. Most existing sky detection approaches, being trained on ideal datasets, may lose efficacy when facing unfavorable conditions such as the effects of weather and lighting. In this paper, a novel algorithm for sky detection in hazy images is proposed from the perspective of probing the density of haze. We address the problem by an image segmentation and a region-level classification. To characterize the sky of hazy scenes, we introduce several haze-relevant features that reflect the perceptual hazy density and the scene depth. Based on these features, the sky is separated by two imbalance SVM classifiers and a similarity measurement. Moreover, a sky dataset (named HazySky) with 500 annotated hazy images is built for model training and performance evaluation. To evaluate the performance of our method, we conducted extensive experiments on both our HazySky dataset and the SkyFinder dataset. The results demonstrate that our method achieves better detection accuracy than previous methods, not only in hazy scenes but also under other weather conditions.
Food Recognition: A New Dataset, Experiments, and Results.
Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo
2017-05-01
We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using several visual descriptors. We achieve about 79% food and tray recognition accuracy using convolutional-neural-network-based features. The dataset, as well as the benchmark framework, are available to the research community.
Yi, Chucai; Tian, Yingli
2012-09-01
In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surface, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text stroke with color assignment and filter out background interferences. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed framework of text localization is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.
Ghafoorian, Mohsen; Karssemeijer, Nico; Heskes, Tom; van Uden, Inge W M; Sanchez, Clara I; Litjens, Geert; de Leeuw, Frank-Erik; van Ginneken, Bram; Marchiori, Elena; Platel, Bram
2017-07-11
The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).
Discriminative confidence estimation for probabilistic multi-atlas label fusion.
Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard
2017-12-01
Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists of propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
Costa, Marta; Manton, James D; Ostrovsky, Aaron D; Prohaska, Steffen; Jefferis, Gregory S X E
2016-07-20
Neural circuit mapping is generating datasets of tens of thousands of labeled neurons. New computational tools are needed to search and organize these data. We present NBLAST, a sensitive and rapid algorithm for measuring pairwise neuronal similarity. NBLAST considers both position and local geometry, decomposing neurons into short segments; matched segments are scored using a probabilistic scoring matrix defined by statistics of matches and non-matches. We validated NBLAST on a published dataset of 16,129 single Drosophila neurons. NBLAST can distinguish neuronal types down to the finest level (single identified neurons) without a priori information. Cluster analysis of extensively studied neuronal classes identified new types and unreported topographical features. Fully automated clustering organized the validation dataset into 1,052 clusters, many of which map onto previously described neuronal types. NBLAST supports additional query types, including searching neurons against transgene expression patterns. Finally, we show that NBLAST is effective with data from other invertebrates and zebrafish. Copyright © 2016 MRC Laboratory of Molecular Biology. Published by Elsevier Inc. All rights reserved.
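As a rough illustration of the scoring idea (not the published algorithm), the sketch below compares each query segment to its nearest target segment, combining point distance with tangent alignment. NBLAST itself uses an empirically derived log-odds scoring matrix, for which the exponential kernel here is only a stand-in, and sigma is an arbitrary choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def nblast_like_score(query_pts, query_vecs, target_pts, target_vecs,
                      sigma=3.0):
    """Score a query neuron against a target neuron, NBLAST-style:
    for each query segment (a point plus a unit tangent vector), find
    the nearest target segment and combine point distance with
    tangent-vector alignment. The exponential distance kernel is a
    simple stand-in for NBLAST's empirical scoring matrix."""
    tree = cKDTree(target_pts)
    dists, idx = tree.query(query_pts)            # nearest target segment
    dots = np.abs(np.sum(query_vecs * target_vecs[idx], axis=1))
    return float(np.sum(np.exp(-dists**2 / (2 * sigma**2)) * dots))
```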
High performance computing environment for multidimensional image analysis.
Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo
2007-07-10
The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.
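The reported figures imply roughly 47% parallel efficiency on 1024 nodes, which the following quick check confirms:

```python
serial_s = 2.5 * 3600        # two and a half hours on one 2 GHz CPU
parallel_s = 18.8            # on 1024 Blue Gene/L nodes
speedup = serial_s / parallel_s      # ~478.7, matching the paper's 478x
efficiency = speedup / 1024          # ~0.467 parallel efficiency
print(f"speedup ~{speedup:.0f}x, efficiency ~{efficiency:.1%}")
```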
Exploring 3D Human Action Recognition: from Offline to Online.
Liu, Zhenyu; Li, Rui; Tan, Jianrong
2018-02-20
With the introduction of cost-effective depth sensors, a tremendous amount of research has been devoted to studying human action recognition using 3D motion data. However, most existing methods work in an offline fashion, i.e., they operate on a pre-segmented sequence. Only a few methods are specifically designed for online action recognition, which continually predicts action labels as a stream sequence proceeds. In view of this fact, we pose a question: can we draw inspiration and borrow techniques or descriptors from existing offline methods, and then apply them to online action recognition? Note that extending offline techniques or descriptors to online applications is not straightforward, since at least two problems, real-time performance and sequence segmentation, are usually not considered in offline action recognition. In this paper, we give a positive answer to the question. To develop applicable online action recognition methods, we carefully explore feature extraction, sequence segmentation, computational costs, and classifier selection. The effectiveness of the developed methods is validated on the MSR 3D Online Action dataset and the MSR Daily Activity 3D dataset.
An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.
2015-01-01
Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.
Deformable templates guided discriminative models for robust 3D brain MRI segmentation.
Liu, Cheng-Yi; Iglesias, Juan Eugenio; Tu, Zhuowen
2013-10-01
Automatically segmenting anatomical structures from 3D brain MRI images is an important task in neuroimaging. One major challenge is to design and learn effective image models accounting for the large variability in anatomy and data acquisition protocols. A deformable template is a type of generative model that attempts to explicitly match an input image with a template (atlas), and thus, they are robust against global intensity changes. On the other hand, discriminative models combine local image features to capture complex image patterns. In this paper, we propose a robust brain image segmentation algorithm that fuses together deformable templates and informative features. It takes advantage of the adaptation capability of the generative model and the classification power of the discriminative models. The proposed algorithm achieves both robustness and efficiency, and can be used to segment brain MRI images with large anatomical variations. We perform an extensive experimental study on four datasets of T1-weighted brain MRI data from different sources (1,082 MRI scans in total) and observe consistent improvement over the state-of-the-art systems.
A neural network approach to lung nodule segmentation
NASA Astrophysics Data System (ADS)
Hu, Yaoxiu; Menon, Prahlad G.
2016-03-01
Computed tomography (CT) imaging is a sensitive and specific lung cancer screening tool for the high-risk population and has shown promise for early detection of lung cancer. This study proposes an automatic methodology for detecting and segmenting lung nodules from CT images. The proposed method begins with thorax segmentation, lung extraction and reconstruction of the original shape of the parenchyma using morphology operations. Next, a multi-scale Hessian-based vesselness filter is applied to extract the pulmonary vasculature. The vasculature mask is subtracted from the lung region segmentation mask to extract 3D regions representing candidate pulmonary nodules. Finally, the remaining structures are classified as nodules through shape and intensity features, which together are used to train an artificial neural network. Up to 75% sensitivity and 98% specificity were achieved for detection of lung nodules in our testing dataset, with an overall accuracy of 97.62% ± 0.72% using 11 selected features as input to the neural network classifier, based on 4-fold cross-validation studies. Receiver operating characteristic analysis for identifying nodules revealed an area under the curve of 0.9476.
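A minimal sketch of the final detection stages, under the assumption that candidates are the labeled 3D components of the lung mask with vessels removed; the three per-region features and the network size are placeholders for the paper's 11 selected features.

```python
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

def candidate_features(lung_mask, vessel_mask, volume):
    """Label candidate regions (lung minus vessels) and compute simple
    shape/intensity features per region; a stand-in for the paper's
    11 selected features."""
    candidates = lung_mask & ~vessel_mask
    labels, n = ndimage.label(candidates)
    feats = []
    for r in range(1, n + 1):
        region = labels == r
        vals = volume[region]
        feats.append([region.sum(), vals.mean(), vals.std()])
    return np.array(feats)

# Hypothetical training call on labeled candidates:
# clf = MLPClassifier(hidden_layer_sizes=(16,)).fit(feats, is_nodule)
```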
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yanrong; Shao, Yeqin; Gao, Yaozong
Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. The traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., that both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the discriminative power of traditional dictionary-based classification methods, the authors' DDD learning approach adopts three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model non-Gaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison.
Conclusions: A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images.
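The residual-based classification at the core of such dictionary methods can be sketched as follows; the dictionaries are assumed to be pre-learned (columns as atoms), and the sparsity level k is an illustrative choice rather than a value from the paper.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def residual(D, x, k=5):
    """Reconstruction residual of signal x under dictionary D (columns
    are atoms) using a k-sparse code from orthogonal matching pursuit."""
    code = orthogonal_mp(D, x, n_nonzero_coefs=k)
    return np.linalg.norm(x - D @ code)

def classify_patch(x, D_prostate, D_background, k=5):
    """Assign the label of the dictionary that reconstructs x best:
    1 for prostate, 0 for background."""
    return int(residual(D_prostate, x, k) < residual(D_background, x, k))
```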
He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan
2018-01-01
Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step for automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in viewpoint and scale of the lesion region. To handle these challenges, we propose a novel skin lesion segmentation network via a very deep dense deconvolution network based on dermoscopic images. Specifically, the deep dense layer and the generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via contextual information. Finally, we apply the dense deconvolution layer to smooth the segmentation maps and obtain the final high-resolution output. Our proposed method shows superiority over state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, a 6.0% and 1.2% increase over the traditional method, respectively. Using the dense deconvolution network, the average time for processing one test image with our proposed framework was 0.253 s.
Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm
NASA Astrophysics Data System (ADS)
Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter
2004-05-01
The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 s to 5 min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations method was considered appropriate for a smaller set of clean images. The region growing method performed much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would then be applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.
A variational approach to liver segmentation using statistics from multiple sources
NASA Astrophysics Data System (ADS)
Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi
2018-01-01
Medical image segmentation plays an important role in digital medical research, and in therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected-component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last slice and upward to the first slice, respectively. A segmentation energy function is proposed by combining a statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level set framework. During segmentation, the liver shape is estimated by minimizing this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, 3D-IRCADb and SLIVER07. Among all the tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 ± 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 ± 0.8 mm, and the best maximum symmetric surface distance (MSD) of 18.9 ± 8.3 mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 ± 0.5 mm and the best RMSD of 1.5 ± 1.1 mm on the SLIVER07 dataset. The results of the quantitative comparison show that the proposed liver segmentation method achieves segmentation performance competitive with state-of-the-art techniques.
Electronic Gaming Machine (EGM) Environments: Market Segments and Risk.
Rockloff, Matthew; Moskovsky, Neda; Thorne, Hannah; Browne, Matthew; Bryden, Gabrielle
2017-12-01
This study used a marketing-research paradigm to explore gamblers' attraction to EGMs based on different elements of the environment. A select set of environmental features was sourced from a prior study (Thorne et al. in J Gambl Issues 2016b), and a discrete choice experiment was conducted through an online survey. Using the same dataset first described by Rockloff et al. (EGM Environments that contribute to excess consumption and harm, 2015), a sample of 245 EGM gamblers were sourced from clubs in Victoria, Australia, and 7516 gamblers from an Australian national online survey-panel. Participants' choices amongst sets of hypothetical gambling environments allowed for an estimation of the implied individual-level utilities for each feature (e.g., general sounds, location, etc.). K-means clustering on these utilities identified four unique market segments for EGM gambling, representing four different types of consumers. The segments were named according to their dominant features: Social, Value, High Roller and Internet. We found that the environments orientated towards the Social and Value segments were most conducive to attracting players with relatively few gambling problems, while the High Roller and Internet-focused environments had greater appeal for players with problems and vulnerabilities. This study has generated new insights into the kinds of gambling environments that are most consistent with safe play.
Memari, Nogol; Ramli, Abd Rahman; Bin Saripan, M Iqbal; Mashohor, Syamsiah; Moghbel, Mehrdad
2017-01-01
The structure and appearance of the blood vessel network in retinal fundus images is an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using the contrast-limited adaptive histogram equalization (CLAHE) method, and the inhomogeneity is corrected using a Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove misclassified pixels and regions. The proposed method was validated using the publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE) and Child Heart and Health Study in England (CHASE_DB1) datasets commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to other state-of-the-art methods while being very close to the manual segmentation provided by the second human observer, with an average accuracy of 0.972, 0.951 and 0.948 on the DRIVE, STARE and CHASE_DB1 datasets, respectively.
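A minimal sketch of the preprocessing and pixel-wise classification stages; the Frangi filter stands in for the paper's B-COSFIRE/Frangi combination, and the two-feature representation is far simpler than the statistical feature set actually used.

```python
import numpy as np
from skimage import exposure, filters
from sklearn.ensemble import AdaBoostClassifier

def pixel_features(green_channel):
    """Per-pixel features: CLAHE-enhanced intensity plus a Frangi
    vesselness response. The Frangi filter alone is a stand-in for
    the B-COSFIRE/Frangi pair used in the paper."""
    clahe = exposure.equalize_adapthist(green_channel)
    vesselness = filters.frangi(clahe)
    return np.stack([clahe.ravel(), vesselness.ravel()], axis=1)

# Hypothetical training on manually labeled vessel pixels:
# clf = AdaBoostClassifier(n_estimators=100).fit(feats, vessel_labels)
```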
NASA Astrophysics Data System (ADS)
Ben-Zikri, Yehuda Kfir; Linte, Cristian A.
2016-03-01
Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if inconsistently identified, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, in turn leading to incomplete assessment during computer-aided diagnosis, or incomplete visualization or identification of the surgical targets if employed in the context of pre-procedural planning or image-guided interventions. Therefore, the need for robust, accurate and computationally efficient region of interest localization techniques is prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short-axis left ventricle slice images, with a small training set generated from less than 10% of the population. This approach is based on histogram of oriented gradients features, weighted by local intensities, to first identify an initial region of interest depicting the left and right ventricles that exhibits the greatest extent of cardiac motion. Using feature-vector correlation, this region is matched to the homologous region of the training dataset that best corresponds to the test image. Lastly, the optimal left ventricle region of interest of the test image is identified based on the ground truth segmentations associated with the training dataset deemed closest to the test image. The proposed approach was tested on a population of 100 patient datasets and was validated against the ground truth regions of interest of the test images manually annotated by experts. The tool successfully identified a mask around the left and right ventricles, as well as the minimal region of interest fully enclosing the left ventricle, in all testing datasets, yielding a 98% overlap with the corresponding ground truth. The mean absolute distance error between the two contours, normalized by the radius of the ground truth, was 0.20 ± 0.09.
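The matching step might look roughly like the following, assuming all regions of interest share the same size; the HOG parameters are illustrative, not those of the paper.

```python
import numpy as np
from skimage.feature import hog

def best_training_match(test_roi, train_rois):
    """Pick the training ROI whose HOG descriptor correlates best with
    the test ROI; its ground-truth contour would then seed the final
    left-ventricle region of interest. Assumes all ROIs are the same
    size so descriptors have equal length."""
    f_test = hog(test_roi, orientations=9, pixels_per_cell=(8, 8))
    scores = [np.corrcoef(f_test,
                          hog(r, orientations=9,
                              pixels_per_cell=(8, 8)))[0, 1]
              for r in train_rois]
    return int(np.argmax(scores))
```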
A random forest model based classification scheme for neonatal amplitude-integrated EEG.
Chen, Weiting; Wang, Yu; Cao, Guitao; Chen, Guoqiang; Gu, Qiufang
2014-01-01
Modern medical advances have greatly increased the survival rate of infants, yet these infants remain in a higher risk group for neurological problems later in life. For infants with encephalopathy or seizures, identification of the extent of brain injury is clinically challenging. Continuous amplitude-integrated electroencephalography (aEEG) monitoring offers the possibility to directly monitor the brain functional state of newborns over hours, and has seen increasing application in neonatal intensive care units (NICUs). This paper presents a novel combined feature set for aEEG and applies the random forest (RF) method to classify aEEG tracings. To that end, a series of experiments was conducted on 282 aEEG tracing cases (209 normal and 73 abnormal). Basic features, statistical features and segmentation features were extracted from both the tracing as a whole and the segmented recordings to form a combined feature set. All features were then sent to a classifier. The significance of the features, the data segmentation, the optimization of RF parameters, and the problem of imbalanced datasets were examined through experiments. Experiments were also conducted to evaluate the performance of RF on aEEG signal classification, compared with several other widely used classifiers including SVM-Linear, SVM-RBF, ANN, Decision Tree (DT), Logistic Regression (LR), ML, and LDA. The combined feature set better characterizes aEEG signals than the basic, statistical and segmentation features taken separately. With the combined feature set, the proposed RF-based aEEG classification system achieved a correct rate of 92.52% and a high F1-score of 95.26%. Among all seven classifiers examined in our work, the RF method achieved the highest correct rate, sensitivity, specificity, and F1-score, which means that RF outperforms all of the other classifiers considered here. The results show that the proposed RF-based aEEG classification system with the combined feature set is efficient and helpful for better detecting brain disorders in newborns.
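A simplified stand-in for the combined feature set and RF classifier; the actual basic, statistical and segmentation features are richer than these summary statistics, and the segment count is arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def combined_features(trace, n_segments=8):
    """Whole-trace statistics plus per-segment means: a much-simplified
    version of the paper's basic + statistical + segmentation features."""
    segs = np.array_split(trace, n_segments)
    return np.concatenate([[trace.mean(), trace.std(),
                            trace.min(), trace.max()],
                           [s.mean() for s in segs]])

# Hypothetical usage on a list of aEEG traces with normal/abnormal labels:
# X = np.stack([combined_features(t) for t in traces])
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```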
Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C
2015-06-08
Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.
Automated classification of optical coherence tomography images of human atrial tissue
NASA Astrophysics Data System (ADS)
Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.
2016-10-01
Tissue composition of the atria plays a critical role in the pathology of cardiovascular disease, tissue remodeling, and arrhythmogenic substrates. Optical coherence tomography (OCT) has the ability to capture the tissue composition information of the human atria. In this study, we developed a region-based automated method to classify tissue compositions within human atria samples within OCT images. We segmented regional information without prior information about the tissue architecture and subsequently extracted features within each segmented region. A relevance vector machine model was used to perform automated classification. Segmentation of human atrial ex vivo datasets was correlated with trichrome histology and our classification algorithm had an average accuracy of 80.41% for identifying adipose, myocardium, fibrotic myocardium, and collagen tissue compositions.
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei
2015-03-01
A novel breast cancer risk analysis approach is proposed for enhancing performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of breast area, five different sub-regions were acquired from one mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from whole breast area only; (2) a classifier using asymmetry bilateral features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.
A supervoxel-based segmentation method for prostate MR images.
Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Xue, Jianru; Fei, Baowei
2017-02-01
Segmentation of the prostate on MR images has many applications in prostate cancer management. In this work, we propose a supervoxel-based segmentation method for prostate MR images. A supervoxel is a set of pixels that have similar intensities, locations, and textures in a 3D image volume. The prostate segmentation problem is treated as assigning a binary label, either prostate or background, to each supervoxel. A supervoxel-based energy function with data and smoothness terms is used to model the labels. The data term estimates the likelihood of a supervoxel belonging to the prostate by using a supervoxel-based shape feature, and the geometric relationship between two neighboring supervoxels is used to build the smoothness term. A 3D graph cut is used to minimize the energy function and obtain the supervoxel labels, which yield the prostate segmentation. A 3D active contour model is then used to produce a smooth surface, using the output of the graph cut as initialization. The performance of the proposed algorithm was evaluated on 30 in-house MR volumes and the PROMISE12 dataset. The mean Dice similarity coefficients are 87.2 ± 2.3% and 88.2 ± 2.8% for our 30 in-house MR volumes and the PROMISE12 dataset, respectively. The proposed supervoxel-based method can accurately segment prostate MR images and can have a variety of applications in prostate cancer diagnosis and therapy. © 2016 American Association of Physicists in Medicine.
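The energy the abstract describes can be written down compactly; the sketch below only evaluates it for a given labeling, whereas the paper minimizes it exactly with a 3D graph cut (e.g., via a max-flow library, not shown here).

```python
import numpy as np

def supervoxel_energy(labels, data_cost, edges, weights, lam=1.0):
    """Energy over supervoxel labels (0 = background, 1 = prostate):
    per-supervoxel data costs plus a Potts smoothness penalty on
    neighboring supervoxels that disagree. data_cost has shape
    (n_supervoxels, 2); edges is an (n_edges, 2) index array; weights
    encode the geometric affinity of each neighbor pair."""
    data = data_cost[np.arange(labels.size), labels].sum()
    i, j = edges.T
    smooth = (weights * (labels[i] != labels[j])).sum()
    return data + lam * smooth
```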
Wang, ShaoPeng; Zhang, Yu-Hang; Huang, GuoHua; Chen, Lei; Cai, Yu-Dong
2017-01-01
Myristoylation is an important hydrophobic post-translational modification that is covalently bound to the amino group of Gly residues at the N-terminus of proteins. The many diverse functions of myristoylation on proteins, such as membrane targeting, signal pathway regulation and apoptosis, are largely due to the lipid modification, whereas abnormal or irregular myristoylation can lead to several pathological changes in the cell. To better understand the function of myristoylated sites and to correctly identify them in protein sequences, this study conducted a novel computational investigation of identifying myristoylation sites in protein sequences. A training dataset with 196 positive and 84 negative peptide segments was obtained. Four types of features derived from the peptide segments following the myristoylation sites were used to distinguish myristoylated and non-myristoylated sites. Then, feature selection methods including maximum relevance and minimum redundancy (mRMR) and incremental feature selection (IFS), together with a machine learning algorithm (the extreme learning machine method), were adopted to extract the features most relevant for identifying myristoylation sites in protein sequences. As a result, 41 key features were extracted and used to build an optimal prediction model. The effectiveness of the optimal prediction model was further validated by its performance on a test dataset. Furthermore, detailed analyses were performed on the 41 extracted features to gain insight into the mechanism of myristoylation modification. This study provides a new computational method for identifying myristoylation sites in protein sequences. We believe that it can be a useful tool to predict myristoylation sites from protein sequences. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Semi-automatic segmentation of brain tumors using population and individual information.
Wu, Yao; Yang, Wei; Jiang, Jun; Li, Shuanqian; Feng, Qianjin; Chen, Wufan
2013-08-01
Efficient segmentation of tumors in medical images is of great practical importance for early diagnosis and radiation planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is proposed to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) or the background is estimated by the k-nearest neighborhood classifier under the learned optimal distance metrics. A cost function for segmentation is constructed from these probabilities and optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray-level distribution of the tumors and can obtain satisfactory results even when the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.
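A rough approximation of the metric-learning and probability-estimation steps using scikit-learn's NCA and kNN; the feature extraction, the second patient-specific metric, and the graph-cut optimization are omitted, and the component and neighbor counts are illustrative.

```python
from sklearn.neighbors import (NeighborhoodComponentsAnalysis,
                               KNeighborsClassifier)
from sklearn.pipeline import make_pipeline

# Learn a distance metric with NCA, then estimate per-pixel foreground
# probabilities with kNN in the learned space; these probabilities
# would feed the graph-cut cost function described in the abstract.
model = make_pipeline(NeighborhoodComponentsAnalysis(n_components=10),
                      KNeighborsClassifier(n_neighbors=15))
# Hypothetical usage on extracted per-pixel feature vectors:
# model.fit(train_features, train_labels)
# p_tumor = model.predict_proba(test_features)[:, 1]
```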
Ji, Zexuan; Chen, Qiang; Niu, Sijie; Leng, Theodore; Rubin, Daniel L.
2018-01-01
Purpose: To automatically and accurately segment geographic atrophy (GA) in spectral-domain optical coherence tomography (SD-OCT) images by constructing a voting system with deep neural networks, without the use of retinal layer segmentation. Methods: An automatic GA segmentation method for SD-OCT images based on a deep network was constructed. The structure of the deep network was composed of five layers, including one input layer, three hidden layers, and one output layer. During the training phase, labeled A-scans with 1024 features were directly fed into the network as the input layer to obtain the deep representations. Then a soft-max classifier was trained to determine the label of each individual pixel. Finally, a voting decision strategy was used to refine the segmentation results among 10 trained models. Results: Two image datasets with GA were used to evaluate the model. For the first dataset, our algorithm obtained a mean overlap ratio (OR) of 86.94% ± 8.75%, an absolute area difference (AAD) of 11.49% ± 11.50%, and a correlation coefficient (CC) of 0.9857; for the second dataset, the mean OR, AAD, and CC of the proposed method were 81.66% ± 10.93%, 8.30% ± 9.09%, and 0.9952, respectively. The proposed algorithm improved segmentation accuracy by over 5% and 10% on the two datasets, respectively, compared with several state-of-the-art algorithms. Conclusions: Without retinal layer segmentation, the proposed algorithm produced higher segmentation accuracy and was more stable than state-of-the-art methods that rely on retinal layer segmentation results. Our model may provide reliable GA segmentations from SD-OCT images and be useful in the clinical diagnosis of advanced nonexudative AMD. Translational Relevance: Based on deep neural networks, this study presents an accurate GA segmentation method for SD-OCT images that does not use any retinal layer segmentation results, and it may contribute to improved understanding of advanced nonexudative AMD. PMID:29302382
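For reference, the two headline metrics can be computed as below, taking the overlap ratio as intersection over union, one common definition (the paper's exact definitions are not given in the abstract):

```python
import numpy as np

def ga_metrics(pred, truth):
    """Overlap ratio (here: Jaccard, as a percentage, an assumed
    definition) and absolute area difference between a predicted GA
    mask and the ground truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = 100.0 * (pred & truth).sum() / (pred | truth).sum()
    aad = 100.0 * abs(int(pred.sum()) - int(truth.sum())) / truth.sum()
    return overlap, aad
```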
NASA Astrophysics Data System (ADS)
Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.
2005-04-01
Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer-assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold, and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches that use an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that trachea segmentation using a fixed air threshold followed by morphological closing with a spherical element of size 23×23×5 yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer-assisted diagnostic tools, because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent than using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower-intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
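A minimal sketch of the best-performing trachea pipeline; the air threshold of -950 HU is an assumption, as the abstract only says "a fixed threshold characteristic of air", and the box-shaped structuring element here approximates the 23×23×5 spherical element reported to work best.

```python
import numpy as np
from scipy import ndimage

def segment_trachea(ct_volume, air_hu=-950):
    """Fixed-threshold air segmentation followed by a morphological
    close. The threshold value is an assumed air cutoff in Hounsfield
    units; the 23x23x5 box stands in for the paper's spherical
    element of the same extent."""
    air = ct_volume < air_hu
    structure = np.ones((23, 23, 5), dtype=bool)
    return ndimage.binary_closing(air, structure=structure)
```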
NASA Astrophysics Data System (ADS)
Cruz-Roa, Angel; Xu, Jun; Madabhushi, Anant
2015-01-01
Nuclear architecture, the spatial arrangement of individual cancer nuclei on histopathology images, has been shown to be associated with different grades and differential risk for a number of solid tumors such as breast, prostate, and oropharyngeal. Graph-based representations of individual nuclei (nuclei representing the graph nodes) allow for mining of quantitative metrics to describe tumor morphology. These graph features can be broadly categorized into global and local depending on the type of graph construction method. While a number of local graph (e.g., Cell Cluster Graph) and global graph (e.g., Voronoi, Delaunay Triangulation, Minimum Spanning Tree) features have been shown to be associated with cancer grade, risk, and outcome for different cancer types, the sensitivity of the preceding segmentation algorithms in identifying individual nuclei can have a significant bearing on the discriminability of the resultant features. This raises the question of which features, while being discriminative of cancer grade and aggressiveness, are also the most resilient to segmentation errors. These properties are particularly desirable in the context of digital pathology images, where the method of slide preparation, staining, and the type of nuclear segmentation algorithm employed can all dramatically affect the quality of the nuclear graphs and corresponding features. In this paper, we evaluated the trade-off between discriminability and stability of both global and local graph-based features in conjunction with a few different segmentation algorithms and in the context of two different histopathology image datasets of breast cancer, from whole-slide images (WSI) and tissue microarrays (TMA). Specifically, we investigated several performance measures, including stability, discriminability, and the stability versus discriminability trade-off, all of which are based on p-values from the Kruskal-Wallis one-way analysis of variance for local and global graph features. Apart from identifying the set of local and global features that satisfied the trade-off between stability and discriminability, our most interesting finding was that a simple segmentation method was sufficient to identify the most discriminative features for invasive tumour detection in TMAs, whereas for tumour grading in WSI, the graph-based features were more sensitive to the accuracy of the segmentation algorithm employed.
Monocular Vision-Based Underwater Object Detection
Zhang, Zhen; Dai, Fengzhao; Bu, Yang; Wang, Huibin
2017-01-01
In this paper, we propose an underwater object detection method using monocular vision sensors. In addition to commonly used visual features such as color and intensity, we investigate the potential of underwater object detection using light transmission information. The global contrast of various features is used to initially identify the region of interest (ROI), which is then filtered by the image segmentation method, producing the final underwater object detection results. We test the performance of our method with diverse underwater datasets. Samples of the datasets are acquired by a monocular camera with different qualities (such as resolution and focal length) and setups (viewing distance, viewing angle, and optical environment). It is demonstrated that our ROI detection method is necessary and can largely remove the background noise and significantly increase the accuracy of our underwater object detection method. PMID:28771194
Wu, Guorong; Kim, Minjeong; Sanroma, Gerard; Wang, Qian; Munsell, Brent C.; Shen, Dinggang
2014-01-01
Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion, a single target image is first registered to several atlas images; after registration, a label is assigned to each point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately captures the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is now characterized by a multi-scale feature representation that encodes both local and semi-local image information, increasing the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information has now been semantically divided into different patterns, these new label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical approach is used to improve the label fusion results; in particular, a coarse-to-fine iterative label fusion approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 tesla MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale feature representations and label-specific atlas patches are more accurate than several well-known state-of-the-art label fusion methods. PMID:25463474
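The baseline patch-based fusion that these contributions build on can be sketched in a few lines; the similarity kernel and bandwidth h are illustrative, and the multi-scale features, label-specific partial patches and coarse-to-fine iteration are not shown.

```python
import numpy as np

def fuse_label(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Non-local-means style label fusion: weight each aligned atlas
    patch by its similarity to the target patch and average the
    corresponding atlas labels (binary here). The paper layers
    multi-scale features, label-specific partial patches and
    coarse-to-fine iteration on top of this baseline."""
    diffs = np.array([np.sum((target_patch - p) ** 2)
                      for p in atlas_patches])
    w = np.exp(-diffs / h)
    return int(np.round(np.average(atlas_labels, weights=w)))
```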
Measurement of glucose concentration by image processing of thin film slides
NASA Astrophysics Data System (ADS)
Piramanayagam, Sankaranaryanan; Saber, Eli; Heavner, David
2012-02-01
Measurement of glucose concentration is important for the diagnosis and treatment of diabetes mellitus and other medical conditions. This paper describes a novel image-processing-based approach for measuring glucose concentration. A fluid drop (patient sample) is placed on a thin film slide. Glucose present in the sample reacts with reagents on the slide to produce a color dye. The color intensity of the dye varies with the glucose concentration level. Current methods use spectrophotometry to determine the glucose level of the sample. Our proposed algorithm uses an image of the slide, captured at a specific wavelength, to automatically determine glucose concentration. The algorithm consists of two phases: training and testing. Training datasets consist of images at different concentration levels. The dye-occupied image region is first segmented using a Hough-based technique, and then an intensity-based feature is calculated from the segmented region. Subsequently, a mathematical model is fitted to describe the relationship between the generated feature values and the given concentrations. During testing, the dye region of a test slide image is segmented and the feature extracted, as in training. In the final step, the algorithm uses the model (feature vs. concentration) obtained from training and the feature generated from the test image to predict the unknown concentration. The performance of the image-based analysis was compared with that of a standard glucose analyzer.
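A toy version of the train-then-predict calibration, with hypothetical intensity/concentration pairs and an assumed quadratic model (the paper does not state the model's functional form):

```python
import numpy as np

# Training: mean dye intensity per slide vs. known glucose level.
intensities = np.array([0.82, 0.61, 0.45, 0.31])   # hypothetical values
glucose = np.array([50.0, 100.0, 150.0, 200.0])    # mg/dL, hypothetical

coeffs = np.polyfit(intensities, glucose, deg=2)   # assumed quadratic fit

def predict_concentration(mean_intensity):
    """Apply the fitted feature-to-concentration model to the
    intensity feature extracted from a test slide."""
    return float(np.polyval(coeffs, mean_intensity))
```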
Computer-aided Classification of Mammographic Masses Using Visually Sensitive Image Features
Wang, Yunzhi; Aghaei, Faranak; Zarafshani, Ali; Qiu, Yuchen; Qian, Wei; Zheng, Bin
2017-01-01
Purpose: To develop a new computer-aided diagnosis (CAD) scheme that computes visually sensitive image features routinely used by radiologists, builds a machine learning classifier, and distinguishes between malignant and benign breast masses detected in digital mammograms. Methods: An image dataset including 301 breast masses was retrospectively selected. From each segmented mass region, we computed image features that mimic five categories of visually sensitive features routinely used by radiologists in reading mammograms. We then selected five optimal features from the five feature categories and applied logistic regression models for classification. A new CAD interface was also designed to show the lesion segmentation, the computed feature values and the classification score. Results: Areas under the ROC curve (AUC) were 0.786 ± 0.026 and 0.758 ± 0.027 when classifying mass regions depicted on the two view images, respectively. By fusing the classification scores computed from the two regions, the AUC increased to 0.806 ± 0.025. Conclusion: This study demonstrated a new approach to developing a CAD scheme based on five visually sensitive image features. Combined with a “visual aid” interface, CAD results may be much more easily explainable to observers, and may increase their confidence in CAD-generated classification results compared with conventional CAD approaches that involve many complicated and visually insensitive texture features. PMID:27911353
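The two-view fusion could be realized as score averaging, as sketched below; the view names (CC/MLO) and the averaging rule are assumptions, since the abstract only says the two classification scores are fused.

```python
from sklearn.linear_model import LogisticRegression

# One classifier per mammographic view; fuse by averaging the two
# malignancy scores (the averaging rule is an assumption).
clf_cc, clf_mlo = LogisticRegression(), LogisticRegression()
# Hypothetical usage with the five selected features per view:
# clf_cc.fit(X_cc, y); clf_mlo.fit(X_mlo, y)
# fused = 0.5 * (clf_cc.predict_proba(X_cc_test)[:, 1]
#                + clf_mlo.predict_proba(X_mlo_test)[:, 1])
```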
Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans
NASA Astrophysics Data System (ADS)
Lassen, B. C.; Jacobs, C.; Kuhnigk, J.-M.; van Ginneken, B.; van Rikxoort, E. M.
2015-02-01
The malignancy of lung nodules is most often assessed by analyzing changes in the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still in an early stage, it is important to detect the growth rate as soon as possible. However, manual segmentation of a volume is time-consuming. Whereas there are several well-evaluated methods for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for segmentation of subsolid nodules. As minimal user interaction, the method expects a user-drawn stroke on the largest diameter of the nodule. First, a threshold-based region growing is performed based on intensity analysis of the nodule region and the surrounding parenchyma. In the next step, the chest wall is removed by a combination of connected component analysis and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules, the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e., different input strokes) was analyzed, giving a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for segmentation of subsolid nodules in clinical routine.
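A minimal sketch of the first stage (threshold-based region growing from the user stroke) is given below; scikit-image's flood fill stands in for the region grower, and deriving the tolerance from the stroke statistics is an assumption about the intensity analysis.

```python
# Grow a nodule region from a user-drawn stroke: include voxels whose
# intensity stays within k standard deviations of the stroke intensities.
import numpy as np
from skimage.morphology import flood

def grow_nodule(ct_slice, stroke_coords, k=2.0):
    vals = np.array([ct_slice[r, c] for r, c in stroke_coords])
    seed = tuple(stroke_coords[len(stroke_coords) // 2])   # mid-stroke voxel
    return flood(ct_slice, seed, tolerance=k * vals.std())  # boolean mask
```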
Fast Automatic Segmentation of White Matter Streamlines Based on a Multi-Subject Bundle Atlas.
Labra, Nicole; Guevara, Pamela; Duclap, Delphine; Houenou, Josselin; Poupon, Cyril; Mangin, Jean-François; Figueroa, Miguel
2017-01-01
This paper presents an algorithm for fast segmentation of white matter bundles from massive dMRI tractography datasets using a multisubject atlas. We use a distance metric to compare streamlines in a subject dataset to labeled centroids in the atlas, and label them using a per-bundle configurable threshold. In order to reduce segmentation time, the algorithm first preprocesses the data using a simplified distance metric to rapidly discard candidate streamlines in multiple stages, while guaranteeing that no false negatives are produced. The smaller set of remaining streamlines is then segmented using the original metric, thus eliminating any false positives from the preprocessing stage. As a result, a single-thread implementation of the algorithm can segment a dataset of almost 9 million streamlines in less than 6 minutes. Moreover, parallel versions of our algorithm for multicore processors and graphics processing units further reduce the segmentation time to less than 22 seconds and to 5 seconds, respectively. This performance enables the use of the algorithm in truly interactive applications for visualization, analysis, and segmentation of large white matter tractography datasets.
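The two-stage filtering idea can be sketched as follows: a cheap lower bound on the streamline-to-centroid distance discards most candidates early without false negatives, and only the survivors are re-tested with the full metric. Both metrics here are illustrative assumptions, and streamlines are assumed resampled to a common number of points.

```python
# Stage 1: reject on a provable lower bound; stage 2: exact metric on survivors.
import numpy as np

def full_metric(s, c):
    """Mean pointwise distance, taking the better of the two orientations."""
    d_direct = np.mean(np.linalg.norm(s - c, axis=1))
    d_flipped = np.mean(np.linalg.norm(s - c[::-1], axis=1))
    return min(d_direct, d_flipped)

def lower_bound(s, c):
    # Triangle inequality: mean(|s_i - c_i|) >= |mean(s) - mean(c)|, and the
    # bound is flip-invariant, so rejecting on it cannot create false negatives.
    return np.linalg.norm(s.mean(axis=0) - c.mean(axis=0))

def segment_bundle(streamlines, centroid, threshold):
    survivors = (s for s in streamlines if lower_bound(s, centroid) <= threshold)
    return [s for s in survivors if full_metric(s, centroid) <= threshold]
```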
Image segmentation evaluation for very-large datasets
NASA Astrophysics Data System (ADS)
Reeves, Anthony P.; Liu, Shuang; Xie, Yiting
2016-03-01
With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
NASA Astrophysics Data System (ADS)
Rahmani, K.; Mayer, H.
2018-05-01
In this paper we present a pipeline for high quality semantic segmentation of building facades using a Structured Random Forest (SRF), a Region Proposal Network (RPN) based on a Convolutional Neural Network (CNN), as well as rectangular fitting optimization. Our main contribution is that we employ features created by the RPN as channels in the SRF. We empirically show that this is very effective, especially for doors and windows. Our pipeline is evaluated on two datasets where we outperform current state-of-the-art methods. Additionally, we quantify the contribution of the RPN and the rectangular fitting optimization to the accuracy of the result.
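A rough sketch of the core idea follows: per-pixel feature channels from a CNN are stacked with the image channels before training the forest. A plain RandomForestClassifier stands in for the Structured Random Forest here, and cnn_feature_maps is a hypothetical hook for upsampled RPN activations.

```python
# Stack CNN-derived channels with RGB channels, then train a per-pixel forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_pixel_features(rgb, cnn_feature_maps):
    # rgb: (H, W, 3); cnn_feature_maps: (H, W, K) upsampled RPN activations
    stacked = np.concatenate([rgb, cnn_feature_maps], axis=-1)
    return stacked.reshape(-1, stacked.shape[-1])     # one row per pixel

def train_facade_forest(rgb, cnn_feature_maps, labels):
    X = build_pixel_features(rgb, cnn_feature_maps)
    forest = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    forest.fit(X, labels.reshape(-1))                 # facade class per pixel
    return forest
```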
Segmentation of mosaicism in cervicographic images using support vector machines
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Long, L. Rodney; Antani, Sameer; Jeronimo, Jose; Thoma, George R.
2009-02-01
The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating a large digital repository of cervicographic images for the study of uterine cervix cancer prevention. One of the research goals is to automatically detect diagnostic bio-markers in these images. Reliable bio-marker segmentation in large biomedical image collections is a challenging task due to the large variation in image appearance. Methods described in this paper focus on segmenting mosaicism, which is an important vascular feature used to visually assess the degree of cervical intraepithelial neoplasia. The proposed approach uses support vector machines (SVM) trained on a ground truth dataset annotated by medical experts (which circumvents the need for vascular structure extraction). We have evaluated the performance of the proposed algorithm and experimentally demonstrated its feasibility.
Xu, Jun; Luo, Xiaofei; Wang, Guanhao; Gilmore, Hannah; Madabhushi, Anant
2016-01-01
Epithelial (EP) and stromal (ST) tissues are two types of tissues in histological images. Automated segmentation or classification of EP and ST tissues is important when developing computerized systems for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Network (DCNN) based feature learning is presented to automatically segment or classify EP and ST regions from digitized tumor tissue microarrays (TMAs). Current approaches are based on handcrafted feature representations, such as color, texture, and Local Binary Patterns (LBP), in classifying the two regions. Compared to handcrafted-feature-based approaches, which involve task-dependent representations, a DCNN is an end-to-end feature extractor that may be directly learned from the raw pixel intensity values of EP and ST tissues in a data-driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two types of tissues. In this work we compare DCNN-based models with three handcrafted-feature-extraction-based approaches on two different datasets, which consist of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistological (IHC) stained images of colorectal cancer, respectively. The DCNN-based feature learning approach was shown to have an F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on the two H&E stained datasets (NKI and VGH) and the IHC stained data, respectively. Our DCNN-based approach was shown to outperform the three handcrafted-feature-extraction-based approaches in terms of the classification of EP and ST regions. PMID:28154470
Teodoro, George; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Saltz, Joel
2014-01-01
We study and characterize the performance of operations in an important class of applications on GPUs and Many Integrated Core (MIC) architectures. Our work is motivated by applications that analyze low-dimensional spatial datasets captured by high resolution sensors, such as image datasets obtained from whole slide tissue specimens using microscopy scanners. Common operations in these applications involve the detection and extraction of objects (object segmentation), the computation of features of each extracted object (feature computation), and characterization of objects based on these features (object classification). In this work, we have identified the data access and computation patterns of operations in the object segmentation and feature computation categories. We systematically implement and evaluate the performance of these operations on modern CPUs, GPUs, and MIC systems for a microscopy image analysis application. Our results show that the performance on a MIC of operations that perform regular data access is comparable to, or sometimes better than, that on a GPU. On the other hand, GPUs are significantly more efficient than MICs for operations that access data irregularly, a result of the low performance of MICs for random data access. We have also examined the coordinated use of MICs and CPUs. Our experiments show that using a performance-aware task strategy for scheduling application operations improves performance by about 1.29× over a first-come-first-served strategy. This allows applications to obtain high performance efficiency on CPU-MIC systems - the example application attained an efficiency of 84% on 192 nodes (3072 CPU cores and 192 MICs). PMID:25419088
Local X-ray Computed Tomography Imaging for Mineralogical and Pore Characterization
NASA Astrophysics Data System (ADS)
Mills, G.; Willson, C. S.
2015-12-01
Sample size, material properties and image resolution are all tradeoffs that must be considered when imaging porous media samples with X-ray computed tomography. In many natural and engineered samples, pore and throat sizes span several orders of magnitude and are often correlated with the material composition. Local tomography is a nondestructive technique that images a subvolume within a larger specimen at high resolution and uses low-resolution tomography data from the larger specimen to reduce reconstruction error. The high-resolution subvolume data can be used to extract important fine-scale properties but, due to the additional noise associated with the truncated dataset, segmentation of different materials and mineral phases is a challenge. The low-resolution data of a larger specimen are typically of much higher quality, making material characterization much easier. In addition, imaging a larger domain allows mm-scale bulk properties and heterogeneities to be determined. In this research, a sandstone core 7 mm in diameter and ~15 mm in length was scanned twice. The first scan was performed to cover the entire diameter and length of the specimen at an image voxel resolution of 4.1 μm. The second scan was performed on a subvolume, ~1.3 mm in length and ~2.1 mm in diameter, at an image voxel resolution of 1.08 μm. After image processing and segmentation, the pore network structure and mineralogical features were extracted from the low-resolution dataset. Due to the noise in the truncated high-resolution dataset, several image processing approaches were applied prior to image segmentation and extraction of the pore network structure and mineralogy. Results from the different truncated tomography segmented data sets are compared to each other to evaluate the potential of each approach in identifying the different solid phases from the original 16-bit data set. The truncated tomography segmented data sets were also compared to the whole-core tomography segmented data set in two ways: (1) assessment of the porosity and pore size distribution at different scales; and (2) comparison of the mineralogical composition and distribution. Finally, registration of the two datasets will be used to show how the pore structure and mineralogy details at the two scales can supplement each other.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L; Tan, S; Lu, W
Purpose: PET images are usually blurred due to the finite spatial resolution, while CT images suffer from low contrast. Segmenting a tumor from a single PET or CT image is thus challenging. To make full use of the complementary information between PET and CT, we propose a novel variational method for simultaneous PET image restoration and PET/CT image co-segmentation. Methods: The proposed model was constructed based on the Γ-convergence approximation of the Mumford-Shah (MS) segmentation model for PET/CT co-segmentation. Moreover, a PET de-blurring process was integrated into the MS model to improve the segmentation accuracy. An interaction edge constraint term over the two modalities was specially designed to share the complementary information. The energy functional was iteratively optimized using an alternating minimization (AM) algorithm. The performance of the proposed method was validated on ten lung cancer cases and five esophageal cancer cases. The ground truth was manually delineated by an experienced radiation oncologist using the complementary visual features of PET and CT. The segmentation accuracy was evaluated by the Dice similarity index (DSI) and volume error (VE). Results: The proposed method achieved the expected restoration result for the PET image and satisfactory segmentation results for both PET and CT images. For the lung cancer dataset, the average DSI (0.72) increased by 0.17 and 0.40 over single PET and CT segmentation, respectively. For the esophageal cancer dataset, the average DSI (0.85) increased by 0.07 and 0.43 over single PET and CT segmentation. Conclusion: The proposed method takes full advantage of the complementary information from PET and CT images. This work was supported in part by the National Cancer Institute Grant R01CA172638. Shan Tan and Laquan Li were supported in part by the National Natural Science Foundation of China, under Grant Nos. 60971112 and 61375018.
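For readers who want the shape of such a model, one plausible form of the joint energy (a sketch under stated assumptions, not the authors' exact functional) combines an Ambrosio-Tortorelli (Γ-convergence) MS term per modality, a de-blurring data term for PET, and an edge interaction term:

```latex
\begin{aligned}
E(u_P, u_C, v_P, v_C) ={}& \int_\Omega (h * u_P - f_P)^2 \,dx
  + \int_\Omega (u_C - f_C)^2 \,dx \\
&+ \sum_{m \in \{P,C\}} \left[ \alpha_m \int_\Omega v_m^2 |\nabla u_m|^2 \,dx
  + \beta_m \int_\Omega \left( \varepsilon |\nabla v_m|^2
  + \frac{(1 - v_m)^2}{4\varepsilon} \right) dx \right] \\
&+ \gamma \int_\Omega (v_P - v_C)^2 \,dx ,
\end{aligned}
```

where $f_P, f_C$ are the observed PET/CT images, $h$ the PET blur kernel, $u_m$ piecewise-smooth approximations, $v_m$ edge indicator functions, and the $\gamma$ term couples the two edge sets so each modality informs the other; alternating minimization over $(u_P, u_C)$ and $(v_P, v_C)$ matches the AM scheme described above.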
An application of cascaded 3D fully convolutional networks for medical image segmentation.
Roth, Holger R; Oda, Hirohisa; Zhou, Xiangrong; Shimizu, Natsuki; Yang, Ying; Hayashi, Yuichiro; Oda, Masahiro; Fujiwara, Michitaka; Misawa, Kazunari; Mori, Kensaku
2018-06-01
Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ∼10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5 to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Copyright © 2018 Elsevier Ltd. All rights reserved.
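A minimal sketch of the two-stage, coarse-to-fine inference is shown below; coarse_fcn and fine_fcn are stand-ins for trained 3D FCNs, and the bounding-box cropping logic is an assumption about how the candidate region is passed to the second stage.

```python
# Stage 1 defines a candidate region; stage 2 segments only the cropped box,
# reducing the voxels the second FCN must classify.
import numpy as np

def cascade_predict(volume, coarse_fcn, fine_fcn, margin=8):
    coarse = coarse_fcn(volume) > 0.5                  # stage 1: rough mask
    zyx = np.argwhere(coarse)
    if zyx.size == 0:
        return np.zeros(volume.shape)                  # nothing detected
    lo = np.maximum(zyx.min(axis=0) - margin, 0)       # candidate bounding box
    hi = np.minimum(zyx.max(axis=0) + margin + 1, volume.shape)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    out = np.zeros(volume.shape)
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_fcn(crop)  # stage 2
    return out
```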
Ross, James C.; Estépar, Raúl San José; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K.; Washko, George R.
2011-01-01
We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases. PMID:20879396
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velazquez, E Rios; Narayan, V; Grossmann, P
2015-06-15
Purpose: To compare the complementary prognostic value of automated Radiomic features to that of radiologist-annotated VASARI features in the TCGA-GBM MRI dataset. Methods: For 96 GBM patients, pre-operative MRI images were obtained from The Cancer Imaging Archive. The abnormal tumor bulks were manually defined on post-contrast T1w images. The contrast-enhancing and necrotic regions were segmented using FAST. From these sub-volumes and the total abnormal tumor bulk, a set of Radiomic features quantifying phenotypic differences based on tumor intensity, shape and texture were extracted from the post-contrast T1w images. Minimum-redundancy-maximum-relevance (MRMR) was used to identify the most informative Radiomic, VASARI, and combined Radiomic-VASARI features in 70% of the dataset (training set). Multivariate Cox proportional hazards models were evaluated in 30% of the dataset (validation set) using the C-index for overall survival (OS). A bootstrap procedure was used to assess significance while comparing the C-indices of the different models. Results: Overall, the Radiomic features showed a moderate correlation with the radiologist-annotated VASARI features (r = −0.37 – 0.49); however, the correlation was stronger for the Tumor Diameter and Proportion of Necrosis VASARI features (r = −0.71 – 0.69). After MRMR feature selection, the best-performing Radiomic, VASARI, and Radiomic-VASARI Cox-PH models showed a validation C-index of 0.56 (p = NS), 0.58 (p = NS) and 0.65 (p = 0.01), respectively. The combined Radiomic-VASARI model C-index was significantly higher than that obtained from either the Radiomic or VASARI model alone (p < 0.001). Conclusion: Quantitative volumetric and textural Radiomic features complement the qualitative and semi-quantitative annotated VASARI feature set. The prognostic value of informative qualitative VASARI features such as Eloquent Brain and Multifocality is increased with the addition of quantitative volumetric and textural features from the contrast-enhancing and necrotic tumor regions. These results should be further evaluated in larger validation cohorts.
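The evaluation protocol can be sketched as follows: fit a Cox proportional-hazards model on the 70% training split and report the C-index on the 30% validation split. The lifelines dependency and the column names are assumptions for illustration.

```python
# Fit Cox-PH on the training split; score C-index for OS on the validation split.
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

def validate_cox(train_df, valid_df, feature_cols):
    cph = CoxPHFitter()
    cph.fit(train_df[feature_cols + ["os_months", "death"]],
            duration_col="os_months", event_col="death")
    risk = cph.predict_partial_hazard(valid_df)
    # higher predicted risk should mean shorter survival, hence the negation
    return concordance_index(valid_df["os_months"], -risk, valid_df["death"])
```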
CP-CHARM: segmentation-free image classification made accessible.
Uhlmann, Virginie; Singh, Shantanu; Carpenter, Anne E
2016-01-27
Automated classification using machine learning often relies on features derived from segmenting individual objects, which can be difficult to automate. WND-CHARM is a previously developed classification algorithm in which features are computed on the whole image, thereby avoiding the need for segmentation. The algorithm obtained encouraging results but requires considerable computational expertise to execute. Furthermore, some benchmark sets have been shown to be subject to confounding artifacts that overestimate classification accuracy. We developed CP-CHARM, a user-friendly image-based classification algorithm inspired by WND-CHARM in (i) its ability to capture a wide variety of morphological aspects of the image, and (ii) the absence of requirement for segmentation. In order to make such an image-based classification method easily accessible to the biological research community, CP-CHARM relies on the widely-used open-source image analysis software CellProfiler for feature extraction. To validate our method, we reproduced WND-CHARM's results and ensured that CP-CHARM obtained comparable performance. We then successfully applied our approach on cell-based assay data and on tissue images. We designed these new training and test sets to reduce the effect of batch-related artifacts. The proposed method preserves the strengths of WND-CHARM - it extracts a wide variety of morphological features directly on whole images thereby avoiding the need for cell segmentation, but additionally, it makes the methods easily accessible for researchers without computational expertise by implementing them as a CellProfiler pipeline. It has been demonstrated to perform well on a wide range of bioimage classification problems, including on new datasets that have been carefully selected and annotated to minimize batch effects. This provides for the first time a realistic and reliable assessment of the whole image classification strategy.
National Hydrography Dataset (NHD)
,
2001-01-01
The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that make up the nation's surface water drainage system. NHD data was originally developed at 1:100,000 scale and exists at that scale for the whole country. High resolution NHD adds detail to the original 1:100,000-scale NHD. (Data for Alaska, Puerto Rico and the Virgin Islands was developed at high-resolution, not 1:100,000 scale.) Like the 1:100,000-scale NHD, high resolution NHD contains reach codes for networked features and isolated lakes, flow direction, names, stream level, and centerline representations for areal water bodies. Reaches are also defined to represent waterbodies and the approximate shorelines of the Great Lakes, the Atlantic and Pacific Oceans and the Gulf of Mexico. The NHD also incorporates the National Spatial Data Infrastructure framework criteria set out by the Federal Geographic Data Committee.
A deep learning approach for the analysis of masses in mammograms with minimal user intervention.
Dhungel, Neeraj; Carneiro, Gustavo; Bradley, Andrew P
2017-04-01
We present an integrated methodology for detecting, segmenting and classifying breast masses from mammograms with minimal user intervention. This is a long-standing problem due to the low signal-to-noise ratio in the visualisation of breast masses, combined with their large variability in terms of shape, size, appearance and location. We break the problem down into three stages: mass detection, mass segmentation, and mass classification. For the detection, we propose a cascade of deep learning methods to select hypotheses that are refined based on Bayesian optimisation. For the segmentation, we propose the use of deep structured output learning that is subsequently refined by a level set method. Finally, for the classification, we propose the use of a deep learning classifier, which is pre-trained with a regression to hand-crafted feature values and fine-tuned based on the annotations of the breast mass classification dataset. We test our proposed system on the publicly available INbreast dataset and compare the results with the current state-of-the-art methodologies. This evaluation shows that our system detects 90% of masses at 1 false positive per image, has a segmentation accuracy of around 0.85 (Dice index) on the correctly detected masses, and overall classifies masses as malignant or benign with a sensitivity (Se) of 0.98 and specificity (Sp) of 0.7. Copyright © 2017 Elsevier B.V. All rights reserved.
Yilmaz, E; Kayikcioglu, T; Kayipmaz, S
2017-07-01
In this article, we propose a decision support system for effective classification of dental periapical cyst and keratocystic odontogenic tumor (KCOT) lesions obtained via cone beam computed tomography (CBCT). CBCT has been used effectively in recent years for diagnosing dental pathologies and determining their boundaries and content. Unlike other imaging techniques, CBCT provides detailed and distinctive information about the pathologies by enabling a three-dimensional (3D) image of the region to be displayed. We employed 50 CBCT 3D image dataset files as the full dataset of our study. These datasets were identified by experts as periapical cyst and KCOT lesions according to clinical, radiographic and histopathologic features. Segmentation operations were performed on the CBCT images using viewer software that we developed. Using the tools of this software, we marked the lesional volume of interest and calculated the order statistics and 3D gray-level co-occurrence matrix for each CBCT dataset. A feature vector of the lesional region, including 636 different feature items, was created from those statistics. Six classifiers were used for the classification experiments. The Support Vector Machine (SVM) classifier achieved the best classification performance, with 100% accuracy and a 100% F-score (F1), in the experiments in which a ten-fold cross-validation method was used with a forward feature selection algorithm. SVM achieved the best classification performance, with 96.00% accuracy and a 96.00% F1, in the experiments in which a split-sample validation method was used with a forward feature selection algorithm. SVM additionally achieved the best performance, with 94.00% accuracy and 93.88% F1, in the experiments in which a leave-one-out cross-validation (LOOCV) method was used with a forward feature selection algorithm. Based on the results, we determined that periapical cyst and KCOT lesions can be classified with high accuracy with the models that we built using the new dataset selected for this study. The studies described in this article, along with the selected 3D dataset, the 3D statistics calculated from the dataset, and the performance results of the different classifiers, comprise an important contribution to the field of computer-aided diagnosis of dental apical lesions. Copyright © 2017 Elsevier B.V. All rights reserved.
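A hedged sketch of the winning pipeline follows: forward feature selection wrapped around a linear SVM, scored with ten-fold cross-validation (one of the three validation schemes reported). The synthetic feature matrix, the linear kernel, and the number of selected features are stand-in assumptions.

```python
# Greedy forward feature selection over a 636-item feature vector, then
# 10-fold cross-validated SVM accuracy on the selected subset.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 636))    # stand-in for the 636-feature lesion vectors
y = rng.integers(0, 2, size=50)   # stand-in labels: cyst (0) vs. KCOT (1)

svm = SVC(kernel="linear")
selector = SequentialFeatureSelector(svm, n_features_to_select=10,
                                     direction="forward", cv=10)
X_selected = selector.fit_transform(X, y)                  # forward search
print(cross_val_score(svm, X_selected, y, cv=10).mean())   # 10-fold accuracy
```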
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille; Moxley, Katherine; Moore, Kathleen; Mannel, Robert; Liu, Hong; Zheng, Bin; Qiu, Yuchen
2017-03-01
Predicting metastatic tumor response to chemotherapy at an early stage is critically important for improving the efficacy of clinical trials testing new chemotherapy drugs. However, using current response evaluation criteria in solid tumors (RECIST) guidelines yields only a limited accuracy in predicting tumor response. In order to address this clinical challenge, we applied a Radiomics approach to develop a new quantitative image analysis scheme, aiming to accurately assess the tumor response to new chemotherapy treatment for advanced ovarian cancer patients. For the experiment, a retrospective dataset containing 57 patients was assembled, each of whom has two sets of CT images: pre-therapy and 4-6 week follow-up CT images. A Radiomics-based image analysis scheme, composed of three steps, was then applied on these images. First, the tumors depicted on the CT images were segmented by a hybrid tumor segmentation scheme. Then, a total of 115 features were computed from the segmented tumors, which can be grouped as 1) volume-based features; 2) density-based features; and 3) wavelet features. Finally, an optimal feature cluster was selected based on single-feature performance, and an equal-weighted fusion rule was applied to generate the final prediction score. The results demonstrated that the single feature achieved an area under the receiver operating characteristic curve (AUC) of 0.838+/-0.053. This investigation demonstrates that the Radiomics approach may have potential for the development of a high-accuracy prediction model for early-stage prognostic assessment of ovarian cancer patients.
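The equal-weighted fusion rule can be sketched as follows: normalize each selected feature across the cohort and average, yielding one prediction score per patient. The min-max normalization is an assumption; the paper only states that the selected features are fused with equal weights.

```python
# Equal-weighted fusion of the selected features into a single prediction score.
import numpy as np

def fused_score(features):
    """features: (n_patients, n_selected) matrix of the optimal features."""
    f_min, f_max = features.min(axis=0), features.max(axis=0)
    normalized = (features - f_min) / (f_max - f_min + 1e-12)  # to [0, 1]
    return normalized.mean(axis=1)   # equal weight for every feature
```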
Segmentation of Unstructured Datasets
NASA Technical Reports Server (NTRS)
Bhat, Smitha
1996-01-01
Datasets generated by computer simulations and experiments in Computational Fluid Dynamics tend to be extremely large and complex. It is difficult to visualize these datasets using standard techniques like Volume Rendering and Ray Casting. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This thesis explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and from Finite Element Analysis.
FogBank: a single cell segmentation across multiple cell lines and image modalities.
Chalfoun, Joe; Majurski, Michael; Dima, Alden; Stuelten, Christina; Peskin, Adele; Brady, Mary
2014-12-30
Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, thus the separation of touching cells in these microscopy images is critical for counting, identification and measurement of individual cells. Over-segmentation of single cells continues to be a major problem for methods based on the morphological watershed due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies. We present a new automated segmentation method called FogBank that accurately separates cells when confluent and touching each other. This technique is successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of achieved performance over the reference data sets. FogBank outperformed all related algorithms. The accuracy has also been visually verified on data sets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images. FogBank produces single cell segmentation from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open-source and includes a Graphical User Interface for user-friendly execution.
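The two key ingredients described above can be sketched as follows: intensity quantization by histogram binning to suppress noise, then watershed-style flooding from per-cell seeds. A standard scikit-image watershed is a simplified stand-in for FogBank's geodesic-distance mask, and the seed-detection parameters are assumptions.

```python
# Histogram binning + seeded watershed: a simplified FogBank-style pipeline.
import numpy as np
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def segment_confluent(image, mask, n_bins=16):
    # Quantize intensities to suppress the noise that drives over-segmentation.
    edges = np.histogram_bin_edges(image[mask], bins=n_bins)
    quantized = np.digitize(image, edges).astype(float)
    quantized[~mask] = 0
    # One seed per presumed cell: local maxima of the quantized image.
    peaks = peak_local_max(quantized, labels=mask.astype(int), min_distance=10)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the inverted quantized image from the seeds, one label per cell.
    return watershed(-quantized, markers, mask=mask)
```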
Variable Selection for Road Segmentation in Aerial Images
NASA Astrophysics Data System (ADS)
Warnke, S.; Bulatov, D.
2017-05-01
For the extraction of road pixels from combined image and elevation data, Wegner et al. (2015) proposed classification of superpixels into road and non-road, after which a refinement of the classification results using minimum cost paths and non-local optimization methods took place. We believed that the variable set used for classification was to a certain extent suboptimal, because many variables were redundant while several features known to be useful in Photogrammetry and Remote Sensing were missing. This motivated us to implement a variable selection approach which builds a model for classification using portions of training data and subsets of features, evaluates this model, updates the feature set, and terminates when a stopping criterion is satisfied. The choice of classifier is flexible; however, we tested the approach with Logistic Regression and Random Forests, and tailored the evaluation module to the chosen classifier. To guarantee a fair comparison, we kept the segment-based approach and most of the variables from the related work, but extended them with additional, mostly higher-level features. Applying these superior features, removing the redundant ones, as well as using more accurately acquired 3D data allowed us to keep the misclassification error stable or even to reduce it on a challenging dataset.
Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).
Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad
2018-04-01
A tumor can be found in any area of the brain and can be of any size, shape, and contrast. Multiple tumors of different types may exist in a human brain at the same time. Accurate tumor area segmentation is considered a primary step for treatment of brain tumors. Deep learning is a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part inside a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over other approaches in this area of research. © 2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Kromp, Florian; Taschner-Mandl, Sabine; Schwarz, Magdalena; Blaha, Johanna; Weiss, Tamara; Ambros, Peter F.; Reiter, Michael
2015-02-01
We propose a user-driven method for the segmentation of neuroblastoma nuclei in microscopic fluorescence images involving the gradient energy tensor. Multispectral fluorescence images contain intensity and spatial information about antigen expression, fluorescence in situ hybridization (FISH) signals and nucleus morphology. The latter serves as the basis for the detection of single cells and the calculation of shape features, which are used to validate the segmentation and to reject false detections. Accurate segmentation is difficult due to varying staining intensities and aggregated cells. It requires several (meta-)parameters, which have a strong influence on the segmentation results and have to be selected carefully for each sample (or group of similar samples) through user interaction. Because our method is designed for clinicians and biologists, who may have only a limited image processing background, an interactive parameter selection step allows the implicit tuning of parameter values. With this simple but intuitive method, segmentation results with high precision for a large number of cells can be achieved with minimal user interaction. The strategy was validated on hand-segmented datasets of three neuroblastoma cell lines.
Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga
2013-01-01
Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images, and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867
Automatic brain tissue segmentation based on graph filter.
Kong, Youyong; Chen, Xiaopeng; Wu, Jiasong; Zhang, Pinzheng; Chen, Yang; Shu, Huazhong
2018-05-09
Accurate segmentation of brain tissues from magnetic resonance imaging (MRI) is of significant importance in clinical applications and neuroscience research. Accurate segmentation is challenging due to tissue heterogeneity, which is caused by noise, bias field and partial volume effects. To overcome this limitation, this paper presents a novel algorithm for brain tissue segmentation based on supervoxels and a graph filter. Firstly, an effective supervoxel method is employed to generate effective supervoxels for the 3D MRI image. Secondly, the supervoxels are classified into different types of tissues based on filtering of graph signals. The performance is evaluated on the BrainWeb 18 dataset and the Internet Brain Segmentation Repository (IBSR) 18 dataset. The proposed method achieves a mean Dice similarity coefficient (DSC) of 0.94, 0.92 and 0.90 for the segmentation of white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) on the BrainWeb 18 dataset, and a mean DSC of 0.85, 0.87 and 0.57 for the segmentation of WM, GM and CSF on the IBSR 18 dataset. The proposed approach can well discriminate different types of brain tissues in brain MRI images, and has high potential for clinical applications.
Fananapazir, Ghaneh; Bashir, Mustafa R; Marin, Daniele; Boll, Daniel T
2015-06-01
To evaluate the performance of a prototype, fully-automated post-processing solution for whole-liver and lobar segmentation based on MDCT datasets. A polymer liver phantom was used to assess the accuracy of the post-processing applications, comparing phantom volumes determined via Archimedes' principle with MDCT segmented datasets. For the IRB-approved, HIPAA-compliant study, 25 patients were enrolled. Volumetry performance compared the manual approach with the automated prototype, assessing intraobserver variability and interclass correlation for whole-organ and lobar segmentation using ANOVA comparison. Fidelity of segmentation was evaluated qualitatively. Phantom volume was 1581.0 ± 44.7 mL; manually segmented datasets estimated 1628.0 ± 47.8 mL, representing a mean overestimation of 3.0%; automatically segmented datasets estimated 1601.9 ± 0 mL, representing a mean overestimation of 1.3%. Whole-liver and segmental volumetry demonstrated no significant intraobserver variability for either manual or automated measurements. For whole-liver volumetry, automated measurement repetitions resulted in identical values; reproducible whole-organ volumetry was also achieved with manual segmentation, p(ANOVA) 0.98. For lobar volumetry, automated segmentation improved reproducibility over the manual approach, without significant measurement differences for either methodology, p(ANOVA) 0.95-0.99. Whole-organ and lobar segmentation results from manual and automated segmentation showed no significant differences, p(ANOVA) 0.96-1.00. Assessment of segmentation fidelity found that segments I-IV/VI showed greater segmentation inaccuracies compared to the remaining right hepatic lobe segments. Fully-automated whole-liver segmentation was non-inferior to the manual approach, with improved reproducibility and post-processing duration; automated dual-seed lobar segmentation showed slight tendencies toward underestimating the right hepatic lobe volume and greater variability in edge detection for the left hepatic lobe compared to manual segmentation.
Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels.
Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R
2018-01-01
Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for squamous epithelium cervical intraepithelial neoplasia (CIN) classification into normal, CIN1, CIN2, and CIN3 grades. In this study, a deep learning (DL)-based nuclei segmentation approach is investigated, based on gathering localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods.
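The localization step can be sketched as follows: generate SLIC superpixels, then crop a patch around each superpixel centroid for the trained CNN to classify as nucleus vs. non-nucleus. The patch size, number of segments, and the downstream classifier hook are assumptions.

```python
# SLIC superpixels -> one centered patch per superpixel for CNN classification.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

def superpixel_patches(rgb_image, n_segments=1500, half=32):
    labels = slic(rgb_image, n_segments=n_segments, compactness=10)
    patches = []
    for region in regionprops(labels):
        r, c = (int(v) for v in region.centroid)
        patch = rgb_image[max(r - half, 0):r + half, max(c - half, 0):c + half]
        patches.append(patch)        # fed to the trained CNN downstream
    return labels, patches
```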
NASA Astrophysics Data System (ADS)
Wentz, Robert; Manduca, Armando; Fletcher, J. G.; Siddiki, Hassan; Shields, Raymond C.; Vrtiska, Terri; Spencer, Garrett; Primak, Andrew N.; Zhang, Jie; Nielson, Theresa; McCollough, Cynthia; Yu, Lifeng
2007-03-01
Purpose: To develop robust, novel segmentation and co-registration software to analyze temporally overlapping CT angiography datasets, with the aim of permitting automated measurement of regional aortic pulsatility in patients with abdominal aortic aneurysms. Methods: We perform retrospective gated CT angiography in patients with abdominal aortic aneurysms. Multiple, temporally overlapping, time-resolved CT angiography datasets are reconstructed over the cardiac cycle, with aortic segmentation performed using a priori anatomic assumptions for the aorta and heart. Visual quality assessment is performed following automatic segmentation, with manual editing. Following subsequent centerline generation, centerlines are cross-registered across phases, with internal validation of co-registration performed by examining registration at the regions of greatest diameter change (i.e., where the second derivative is maximal). Results: We have performed gated CT angiography in 60 patients. Automatic seed placement is successful in 79% of datasets, requiring either no editing (70%) or minimal editing (less than 1 minute; 12%). Causes of error include segmentation into adjacent, high-attenuating, nonvascular tissues; small segmentation errors associated with calcified plaque; and segmentation of small, non-renal paralumbar arteries. Internal validation of cross-registration demonstrates appropriate registration in our patient population. In general, we observed that aortic pulsatility can vary along the course of the abdominal aorta. Pulsation can also vary within an aneurysm as well as between aneurysms, but the clinical significance of these findings remains unknown. Conclusions: Visualization of large vessel pulsatility is possible using ECG-gated CT angiography, partial scan reconstruction, automatic segmentation, centerline generation, and co-registration of temporally resolved datasets.
An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.
Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein
2017-12-22
The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact), and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression trees (CART), k-nearest neighbor (k-NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.
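The event-triggered idea can be sketched as follows: instead of fixed windows, detect the impact (peak acceleration magnitude above a threshold) and cut pre-impact, impact, and post-impact segments around it so that features align with fall stages. The threshold and segment lengths are illustrative assumptions, not the authors' tuned values.

```python
# Event-triggered segmentation: cut fall-stage-aligned windows around the impact.
import numpy as np

def event_segments(acc, fs=100, g_thresh=2.5):
    """acc: (n, 3) accelerometer samples in g; fs: sampling rate in Hz."""
    mag = np.linalg.norm(acc, axis=1)
    candidates = np.flatnonzero(mag > g_thresh)
    if candidates.size == 0:
        return None                            # no impact event triggered
    t = candidates[np.argmax(mag[candidates])] # strongest impact sample
    return {
        "pre_impact": acc[max(t - 2 * fs, 0):t],      # ~2 s before impact
        "impact": acc[t:t + fs // 2],                 # ~0.5 s impact phase
        "post_impact": acc[t + fs // 2:t + 3 * fs],   # recovery window
    }
```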
NASA Astrophysics Data System (ADS)
Aghaei, Faranak; Tan, Maxine; Hollingsworth, Alan B.; Zheng, Bin; Cheng, Samuel
2016-03-01
Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and assessment of cancer treatment efficacy. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images and used the kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model to predict the response of breast cancer patients to the chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset involving breast MR images acquired from 151 cancer patients before undergoing neoadjuvant chemotherapy was retrospectively assembled and used. Among them, 63 patients had a "complete response" (CR) to chemotherapy, in which the enhanced contrast levels inside the tumor volume (pre-treatment) were reduced to the level of the normal enhanced background parenchymal tissues (post-treatment), while 88 patients had a "partial response" (PR), in which high contrast enhancement remained in the tumor regions after treatment. We analyzed the correlations among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained with the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83+/-0.04. This study demonstrated that, by avoiding tumor segmentation, which is often difficult and unreliable, the fusion of kinetic image features computed from global breast MR images without tumor segmentation can also generate a useful clinical marker for predicting the efficacy of chemotherapy.
Within-brain classification for brain tumor segmentation.
Havaei, Mohammad; Larochelle, Hugo; Poulin, Philippe; Jodoin, Pierre-Marc
2016-05-01
In this paper, we investigate a framework for interactive brain tumor segmentation which, at its core, treats the problem of interactive brain tumor segmentation as a machine learning problem. This method has an advantage over typical machine learning methods for this task, in which generalization is made across brains: such methods need to deal with intensity bias correction and other MRI-specific noise. In this paper, we avoid these issues by approaching the problem as one of within-brain generalization. Specifically, we propose a semi-automatic method that segments a brain tumor by training and generalizing within that brain only, based on some minimal user interaction. We investigate how adding spatial feature coordinates (i.e., i, j, k) to the intensity features can significantly improve the performance of different classification methods such as SVM, kNN and random forests. This would only be possible within an interactive framework. We also investigate the use of a more appropriate kernel and the adaptation of hyper-parameters specifically for each brain. As a result of these experiments, we obtain an interactive method whose results reported on the MICCAI-BRATS 2013 dataset are the second most accurate among published methods, while using significantly less memory and processing power than most state-of-the-art methods.
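The feature construction that drives the within-brain method can be sketched as follows: each voxel is described by its multi-modal intensities plus its (i, j, k) coordinates, and a standard classifier is trained only on the user-labeled voxels of that same brain. The RBF-SVM choice here is one of the classifiers the paper tests; the function names are illustrative.

```python
# Voxel features = modality intensities + spatial (i, j, k) coordinates.
import numpy as np
from sklearn.svm import SVC

def voxel_features(volumes):
    """volumes: list of aligned 3D modality arrays for one brain."""
    i, j, k = np.indices(volumes[0].shape)
    channels = [v.ravel() for v in volumes] + [i.ravel(), j.ravel(), k.ravel()]
    return np.stack(channels, axis=1)          # (n_voxels, n_features)

def train_within_brain(volumes, labeled_idx, labels):
    X = voxel_features(volumes)
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X[labeled_idx], labels)            # generalize within this brain only
    return clf, X
```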
Multi-channel acoustic recording and automated analysis of Drosophila courtship songs
2013-01-01
Background Drosophila melanogaster has served as a powerful model system for genetic studies of courtship songs. To accelerate research on the genetic and neural mechanisms underlying courtship song, we have developed a sensitive recording system to simultaneously capture the acoustic signals from 32 separate pairs of courting flies as well as software for automated segmentation of songs. Results Our novel hardware design enables recording of low amplitude sounds in most laboratory environments. We demonstrate the power of this system by collecting, segmenting and analyzing over 18 hours of courtship song from 75 males from five wild-type strains of Drosophila melanogaster. Our analysis reveals previously undetected modulation of courtship song features and extensive natural genetic variation for most components of courtship song. Despite having a large dataset with sufficient power to detect subtle modulations of song, we were unable to identify previously reported periodic rhythms in the inter-pulse interval of song. We provide detailed instructions for assembling the hardware and for using our open-source segmentation software. Conclusions Analysis of a large dataset of acoustic signals from Drosophila melanogaster provides novel insight into the structure and dynamics of species-specific courtship songs. Our new system for recording and analyzing fly acoustic signals should therefore greatly accelerate future studies of the genetics, neurobiology and evolution of courtship song. PMID:23369160
Fabric defect detection based on visual saliency using deep feature and low-rank recovery
NASA Astrophysics Data System (ADS)
Liu, Zhoufeng; Wang, Baorui; Li, Chunlei; Li, Bicao; Dong, Yan
2018-04-01
Fabric defect detection plays an important role in improving the quality of fabric products. In this paper, a novel fabric defect detection method based on visual saliency using deep features and low-rank recovery is proposed. First, the initial network parameters are obtained by unsupervised training on the large MNIST dataset; supervised fine-tuning on a fabric image library based on Convolutional Neural Networks (CNNs) is then implemented, producing a more accurate deep neural network model. Second, the fabric images are uniformly divided into image blocks of the same size, and their multi-layer deep features are extracted using the trained deep network. Thereafter, all the extracted features are concatenated into a feature matrix. Third, low-rank matrix recovery is adopted to divide the feature matrix into a low-rank matrix, which indicates the background, and a sparse matrix, which indicates the salient defect. In the end, an iterative optimal threshold segmentation algorithm is utilized to segment the saliency maps generated from the sparse matrix to locate the fabric defect area. Experimental results demonstrate that the features extracted by the CNN are more suitable for characterizing the fabric texture than traditional hand-crafted features such as LBP and HOG, and the proposed method can accurately detect the defect regions of various fabric defects, even for images with complex texture.
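A minimal sketch of the low-rank plus sparse decomposition step follows: robust PCA via alternating singular-value and entrywise soft-thresholding. This is a textbook principal component pursuit iteration under default parameter assumptions, not necessarily the paper's exact solver.

```python
# Split the deep-feature matrix D into low-rank L (background texture) plus
# sparse S (salient defect) via an augmented-Lagrangian RPCA iteration.
import numpy as np

def shrink(M, tau):
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)   # soft-thresholding

def rpca(D, lam=None, mu=None, n_iter=100):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))   # standard sparsity weight
    mu = mu or m * n / (4 * np.abs(D).sum())
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1 / mu)) @ Vt        # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)             # sparse update
        Y = Y + mu * (D - L - S)                         # dual ascent
    return L, S
```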
NASA Astrophysics Data System (ADS)
You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
2013-01-01
Pointers (arrows and symbols) are frequently used in biomedical images to highlight specific image regions of interest (ROIs) that are mentioned in figure captions and/or text discussion. Detection of pointers is the first step toward extracting relevant visual features from ROIs and combining them with textual descriptions for a multimodal (text and image) biomedical article retrieval system. We previously developed a pointer recognition algorithm based on an edge-based pointer segmentation method, and subsequently reported improvements involving the use of Active Shape Models (ASM) for pointer recognition and a region-growing method for pointer segmentation. These methods improved the recall of pointer recognition but did little for precision. The method discussed in this article is our recent effort to improve the precision rate. Evaluation performed on two datasets, with comparison against other pointer segmentation methods, shows significantly improved precision and the highest F1 score.
NASA Astrophysics Data System (ADS)
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep learning refers to a large set of neural-network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which differ from the standard computer vision tasks of object recognition. Thus, to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when trained on a single image with manually labelled ground truth. We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.
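A minimal sketch of the layer-wise idea, assuming a simple cascade in which each stage is trained only on the voxels retained by the previous stage; the classifier choice, threshold, and names are illustrative, not the authors' topology:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_stacked(features, labels, n_stages=3):
    """Each stage progressively narrows the candidate region around the organ."""
    stages, keep = [], np.ones(len(labels), dtype=bool)
    for _ in range(n_stages):
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
        clf.fit(features[keep], labels[keep])
        stages.append(clf)
        # A loose threshold keeps recall high while shrinking the candidate set
        keep &= clf.predict_proba(features)[:, 1] > 0.1
    return stages
```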
A multiresolution prostate representation for automatic segmentation in magnetic resonance images.
Alvarez, Charlens; Martínez, Fabio; Romero, Eduardo
2017-04-01
Accurate prostate delineation is necessary in radiotherapy to concentrate the dose on the prostate and reduce side effects in neighboring organs. Currently, manual delineation is performed on magnetic resonance imaging (MRI), taking advantage of its high soft-tissue contrast. Nevertheless, as human intervention is a time-consuming task with high intra- and interobserver variability, (semi-)automatic organ delineation tools have emerged to cope with these challenges and reduce the time spent on these tasks. This work presents a multiresolution representation that defines a novel metric and allows a new prostate to be segmented by combining a set of the most similar prostates in a dataset. The proposed method starts by selecting the set of prostates most similar to a new one using the proposed multiresolution representation. This representation characterizes the prostate through a set of salient points, extracted from a region of interest (ROI) that encloses the organ and refined using structural information, allowing it to capture the main relevant features of the organ boundary. Afterward, the new prostate is automatically segmented by combining the nonrigidly registered expert delineations associated with the previously selected similar prostates using a weighted patch-based strategy. Finally, the prostate contour is smoothed based on morphological operations. The proposed approach was evaluated against expert manual segmentation under a leave-one-out scheme using two public datasets, obtaining average Dice coefficients of 82% ± 0.07 and 83% ± 0.06 and demonstrating competitive performance with respect to atlas-based state-of-the-art methods. The proposed multiresolution representation provides a feature space that follows a local salient-point criterion and a global rule on the spatial configuration among these points to identify the most similar prostates. This strategy suggests easy adaptation into the clinical routine as a supporting tool for annotation. © 2017 American Association of Physicists in Medicine.
Automatic liver volume segmentation and fibrosis classification
NASA Astrophysics Data System (ADS)
Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit
2018-02-01
In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed tomography (CT) portal-phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including volume segmentation, texture feature extraction and SVM-based classification. The data contain portal-phase CT examinations from 80 patients, taken with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first contains healthy cases and mild fibrosis; the second contains moderate fibrosis, severe fibrosis and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation, and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.
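A hedged sketch of the texture-plus-SVM stage, assuming GLCM statistics as the texture features (the abstract does not name the exact descriptors); function names are illustrative:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(roi_u8):
    """GLCM texture statistics from an 8-bit liver ROI slice."""
    glcm = graycomatrix(roi_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ('contrast', 'homogeneity', 'energy', 'correlation')
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X: one feature row per patient; y: 0 = healthy/mild, 1 = moderate-to-cirrhosis
# clf = SVC(kernel='rbf', class_weight='balanced').fit(X, y)
```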
Classification of melanoma lesions using sparse coded features and random forests
NASA Astrophysics Data System (ADS)
Rastgoo, Mojdeh; Lemaître, Guillaume; Morel, Olivier; Massich, Joan; Garcia, Rafael; Meriaudeau, Fabrice; Marzani, Franck; Sidibé, Désiré
2016-03-01
Malignant melanoma is the most dangerous type of skin cancer, yet also the most treatable, provided it is diagnosed early, which is a challenging task for clinicians and dermatologists. In this regard, CAD systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi using dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, requires a set of parameters to be tuned, and is specific to a given dataset; (ii) the performance of each process depends on the previous one, and errors accumulate throughout the framework. In this paper, we propose a framework for melanoma classification based on sparse coding which does not rely on any pre-processing or lesion segmentation. Our framework uses a Random Forests classifier and sparse representations of three features: SIFT, Hue and Opponent angle histograms, and RGB intensities. The experiments are carried out on the public PH2 dataset using 10-fold cross-validation. The results show that the sparse-coded SIFT feature achieves the highest performance, with sensitivity and specificity of 100% and 90.3% respectively, using a dictionary of 800 atoms and a sparsity level of 2. Furthermore, the descriptor based on RGB intensities achieves similar results, with sensitivity and specificity of 100% and 71.3% respectively, for a smaller dictionary of 100 atoms. In conclusion, dictionary learning techniques encode strong structures of dermoscopic images and provide discriminant descriptors.
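The sparse-coding pipeline maps directly onto standard library components. A minimal sketch using the dictionary size (800 atoms) and sparsity level (2) reported above; the max-pooling choice and the stand-in data are assumptions:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import RandomForestClassifier

# descriptors: (n_patches, d) local features of one lesion image (e.g. SIFT)
descriptors = np.random.rand(2000, 128)          # stand-in for real SIFT
dico = MiniBatchDictionaryLearning(n_components=800,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=2)
codes = dico.fit(descriptors).transform(descriptors)
image_vector = np.abs(codes).max(axis=0)         # orderless max-pooling
# Stack one such vector per image into X, then classify:
# rf = RandomForestClassifier(n_estimators=300).fit(X, y)
```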
NASA Astrophysics Data System (ADS)
Qiu, Yuchen; Yan, Shiju; Tan, Maxine; Cheng, Samuel; Liu, Hong; Zheng, Bin
2016-03-01
Although mammography is the only clinically accepted imaging modality for population-based breast cancer screening, its efficacy remains controversial. One of the major challenges is how to help radiologists more accurately classify benign versus malignant lesions. The purpose of this study is to investigate a new mammographic mass classification scheme based on a deep learning method. In this study, we used an image dataset of 560 regions of interest (ROIs) extracted from digital mammograms, comprising 280 malignant and 280 benign mass ROIs. An eight-layer deep learning network was applied, which employs three pairs of convolution/max-pooling layers for automatic feature extraction and a multilayer perceptron (MLP) classifier for feature categorization. To improve the robustness of the selected features, each convolution layer is connected to a max-pooling layer. Twenty, 10 and 5 feature maps were used for the first, second and third convolution layers, respectively. The convolution networks are followed by the MLP classifier, which generates a score predicting the likelihood that a ROI depicts a malignant mass. Of the 560 ROIs, 420 were used for training and the remaining 140 for validation. The results show that the new deep-learning-based classifier yielded an area under the receiver operating characteristic curve (AUC) of 0.810 ± 0.036. This study demonstrates the potential superiority of a deep-learning-based classifier for distinguishing malignant from benign breast masses without segmenting the lesions or extracting pre-defined image features.
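The described topology is easy to reproduce in outline. A hedged sketch, assuming a 64×64 input ROI (not stated in the abstract) and standard ReLU activations:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Three convolution + max-pooling pairs (20, 10, 5 feature maps) and an
# MLP scoring head, as described; kernel sizes are assumptions.
model = keras.Sequential([
    layers.Input((64, 64, 1)),
    layers.Conv2D(20, 5, activation='relu'), layers.MaxPooling2D(),
    layers.Conv2D(10, 5, activation='relu'), layers.MaxPooling2D(),
    layers.Conv2D(5, 3, activation='relu'),  layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(100, activation='relu'),
    layers.Dense(1, activation='sigmoid'),   # malignancy likelihood score
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=[keras.metrics.AUC()])
```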
Deep Filter Banks for Texture Recognition, Description, and Segmentation.
Cimpoi, Mircea; Maji, Subhransu; Kokkinos, Iasonas; Vedaldi, Andrea
Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture representations, including bag-of-visual-words and Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefits in transferring features from one domain to another.
Ship detection in panchromatic images: a new method and its DSP implementation
NASA Astrophysics Data System (ADS)
Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Wang, Mengfei; Meng, Gang
2016-03-01
In this paper, a new ship detection method is proposed after analyzing the characteristics of panchromatic remote sensing images and ship targets. First, AdaBoost (Adaptive Boosting) classifiers trained on Haar features perform coarse detection of ship targets. Then the LSD (Line Segment Detector) extracts line features from target slices for fine detection. Experimental results on a dataset of panchromatic remote sensing images with a spatial resolution of 2 m show that the proposed algorithm achieves a high detection rate and a low false alarm rate. Meanwhile, the algorithm can meet the needs of practical applications on a DSP (Digital Signal Processor).
Exploring 3D Human Action Recognition: from Offline to Online
Li, Rui; Liu, Zhenyu; Tan, Jianrong
2018-01-01
With the introduction of cost-effective depth sensors, a tremendous amount of research has been devoted to studying human action recognition using 3D motion data. However, most existing methods work in an offline fashion, i.e., they operate on a pre-segmented sequence. Only a few methods are specifically designed for online action recognition, which continually predicts action labels as a stream sequence proceeds. In view of this fact, we pose a question: can we draw inspiration and borrow techniques or descriptors from existing offline methods, and apply them to online action recognition? Note that extending offline techniques or descriptors to online applications is not straightforward, since at least two problems, real-time performance and sequence segmentation, are usually not considered in offline action recognition. In this paper, we give a positive answer to the question. To develop applicable online action recognition methods, we carefully explore feature extraction, sequence segmentation, computational costs, and classifier selection. The effectiveness of the developed methods is validated on the MSR 3D Online Action dataset and the MSR Daily Activity 3D dataset. PMID:29461502
Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning.
Wang, Jinhua; Yang, Xi; Cai, Hongmin; Tan, Wanchang; Jin, Cangzheng; Li, Li
2016-06-07
Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy of microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for its discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracies of microcalcifications and breast masses, either in isolation or combination, for classifying breast lesions. Performances were compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% if microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer. PMID:27273294
NASA Astrophysics Data System (ADS)
Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart
2015-02-01
This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92) to (97.27 ± 0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
Automatic co-segmentation of lung tumor based on random forest in PET-CT images
NASA Astrophysics Data System (ADS)
Jiang, Xueqing; Xiang, Dehui; Zhang, Bin; Zhu, Weifang; Shi, Fei; Chen, Xinjian
2016-03-01
In this paper, a fully automatic method is proposed to segment the lung tumor in clinical 3D PET-CT images. The proposed method effectively combines PET and CT information to make full use of the high contrast of PET images and the superior spatial resolution of CT images. Our approach consists of three main parts: (1) initial segmentation, in which spines are removed from the CT images and initial connected regions are obtained by thresholding-based segmentation of the PET images; (2) coarse segmentation, in which a monotonic downhill function is applied to rule out structures that have standardized uptake values (SUV) similar to the lung tumor but do not satisfy a monotonic property in the PET images; (3) fine segmentation, in which a random forests method is applied to accurately segment the lung tumor by extracting effective features from the PET and CT images simultaneously. We validated our algorithm on a dataset of 24 3D PET-CT images from different patients with non-small cell lung cancer (NSCLC). The average TPVF, FPVF and accuracy rate (ACC) were 83.65%, 0.05% and 99.93%, respectively. Correlation analysis shows that our segmented lung tumor volumes have a strong correlation (average 0.985) with ground truth 1 and ground truth 2 labeled by a clinical expert.
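A minimal sketch of the fine-segmentation idea, assuming simple neighbourhood statistics as the joint PET/CT voxel features; the feature set and names are illustrative, not the authors' exact features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(pet, ct, coords, r=1):
    """Per-voxel features drawing on PET contrast (SUV) and CT detail."""
    feats = []
    for z, y, x in coords:
        p = pet[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
        c = ct[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
        feats.append([p.mean(), p.max(), c.mean(), c.std()])
    return np.asarray(feats)

# rf = RandomForestClassifier(n_estimators=200)
# rf.fit(voxel_features(pet, ct, train_coords), train_labels)
```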
Mammogram segmentation using maximal cell strength updation in cellular automata.
Anitha, J; Peter, J Dinesh
2015-08-01
Breast cancer is the most frequently diagnosed type of cancer among women. Mammography is one of the most effective tools for early detection of breast cancer. Various computer-aided systems have been introduced to detect breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissue is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of a mammogram using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs adaptive global thresholding based on histogram peak analysis to obtain a rough region of interest. Automatic seed point selection is proposed using a gray-level co-occurrence matrix-based sum average feature in the coarsely segmented image. Finally, the method uses CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated on a dataset of 70 mammograms with masses from the mini-MIAS database. Experimental results show that the proposed approach yields promising results for segmenting the mass region in mammograms, with a sensitivity of 92.25% and an accuracy of 93.48%.
Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels
Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V.; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R.
2018-01-01
Background: Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for classifying squamous epithelium cervical intraepithelial neoplasia (CIN) into normal, CIN1, CIN2, and CIN3 grades. Methods: In this study, a deep learning (DL)-based nuclei segmentation approach is investigated, based on gathering localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. Results: The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. Conclusions: The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods. PMID:29619277
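A hedged sketch of the superpixel-candidate step, assuming SLIC parameters and a patch size that are not given in the abstract:

```python
from skimage.segmentation import slic
from skimage.measure import regionprops
from skimage.util import img_as_float

def candidate_patches(rgb, patch=32, n_segments=3000):
    """Yield a small patch around each superpixel centroid; each patch
    would be fed to the CNN as a nucleus / non-nucleus candidate."""
    segments = slic(img_as_float(rgb), n_segments=n_segments, compactness=10)
    half = patch // 2
    for region in regionprops(segments):
        r, c = map(int, region.centroid)
        if half <= r < rgb.shape[0] - half and half <= c < rgb.shape[1] - half:
            yield rgb[r - half:r + half, c - half:c + half]
```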
Glioma grading using cell nuclei morphologic features in digital pathology images
NASA Astrophysics Data System (ADS)
Reza, Syed M. S.; Iftekharuddin, Khan M.
2016-03-01
This work proposes a computationally efficient cell-nuclei morphologic feature analysis technique to characterize brain gliomas in tissue slide images. Our contributions are two-fold: 1) an optimized cell nuclei segmentation method built on the pros and cons of existing techniques in the literature, and 2) representative features extracted by k-means clustering of nuclei morphologic features, including area, perimeter, eccentricity, and major axis length. This clustering-based representative feature extraction avoids the shortcomings of extensive tile [1] [2] and nuclear score [3] based methods for brain glioma grading in pathology images. A multilayer perceptron (MLP) is used to classify the extracted features into two tumor types: glioblastoma multiforme (GBM) and low grade glioma (LGG). Quantitative scores such as precision, recall, and accuracy are obtained using 66 clinical patients' images from The Cancer Genome Atlas (TCGA) [4] dataset. An average accuracy of ~94% from 10-fold cross-validation confirms the efficacy of the proposed method.
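A minimal sketch of the clustering-based representative features, assuming k = 8 clusters (the abstract does not state k) and ordering the centres by area to obtain a fixed-length slide descriptor:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def slide_descriptor(nuclei, k=8):
    # nuclei: (n, 4) rows of [area, perimeter, eccentricity, major_axis_len]
    km = KMeans(n_clusters=k, n_init=10).fit(nuclei)
    centres = km.cluster_centers_
    return centres[np.argsort(centres[:, 0])].ravel()  # order by area

# X = np.stack([slide_descriptor(n) for n in all_slides])
# mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y)  # GBM vs LGG
```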
Comparison of k-means related clustering methods for nuclear medicine images segmentation
NASA Astrophysics Data System (ADS)
Borys, Damian; Bzowski, Pawel; Danch-Wierzchowska, Marta; Psiuk-Maksymowicz, Krzysztof
2017-03-01
In this paper, we evaluate the performance of the SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as the SIFT and SURF descriptors can deal with large variations of scale, rotation and illumination in images, providing better discriminative power and retrieval efficiency than global features, especially for HRSI, which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve retrieval accuracy, and we propose to learn a category-specific dictionary for each image category, which results in a more discriminative image representation and boosts image retrieval performance.
Background feature descriptor for offline handwritten numeral recognition
NASA Astrophysics Data System (ADS)
Ming, Delie; Wang, Hao; Tian, Tian; Jie, Feiran; Lei, Bo
2011-11-01
This paper puts forward an offline handwritten numeral recognition method based on a background structural descriptor (a sixteen-value numerical background expression). By encoding the background pixels of the image according to a fixed rule, 16 different feature values are generated that reflect the background configuration of each digit and hence the structural features of the digits. Through pattern-language description of images using these features, automatic segmentation of overlapping digits and numeral recognition can be realized. The method is characterized by strong resistance to deformation, high recognition speed and easy implementation. Finally, experimental results and conclusions are presented. Recognition results on datasets from various practical application fields show that the method achieves a good recognition effect.
Multi-channel MRI segmentation of eye structures and tumors using patient-specific features
Ciller, Carlos; De Zanet, Sandro; Kamnitsas, Konstantinos; Maeder, Philippe; Glocker, Ben; Munier, Francis L.; Rueckert, Daniel; Thiran, Jean-Philippe
2017-01-01
Retinoblastoma and uveal melanoma are fast-spreading eye tumors usually diagnosed using 2D Fundus Image Photography (Fundus) and 2D Ultrasound (US). Diagnosis and treatment planning of such diseases often require additional complementary imaging to confirm the tumor extent via 3D Magnetic Resonance Imaging (MRI). In this context, automatic segmentations to estimate the size and distribution of the pathological tissue would be advantageous for tumor characterization. Until now, the alternative has been manual delineation of eye structures, a rather time-consuming and error-prone task, to be conducted in multiple MRI sequences simultaneously. This situation, and the lack of tools for accurate eye MRI analysis, reduces the interest in MRI beyond the qualitative evaluation of optic nerve invasion and the confirmation of recurrent malignancies below calcified tumors. In this manuscript, we propose a new framework for the automatic segmentation of eye structures and ocular tumors in multi-sequence MRI. Our key contribution is the introduction of a pathological eye model from which Eye Patient-Specific Features (EPSF) can be computed. These features combine intensity and shape information of pathological tissue embedded in the healthy structures of the eye. We assess our work on a dataset of pathological patient eyes by computing the Dice Similarity Coefficient (DSC) of the sclera, the cornea, the vitreous humor, the lens and the tumor. In addition, we quantitatively show the superior performance of our pathological eye model compared to segmentation obtained using a healthy model (over 4% DSC) and demonstrate the relevance of our EPSF, which improve the final segmentation regardless of the classifier employed. PMID:28350816
NASA Astrophysics Data System (ADS)
Meijs, M.; Debats, O.; Huisman, H.
2015-03-01
In prostate cancer, the detection of metastatic lymph nodes indicates progression from localized disease to metastasized cancer. The detection of positive lymph nodes is, however, a complex and time-consuming task for experienced radiologists. Assistance from a two-stage Computer-Aided Detection (CAD) system in MR Lymphography (MRL) is not yet feasible due to the large number of false positives in the first stage of the system. By introducing a multi-structure, multi-atlas segmentation, using an affine transformation followed by a B-spline transformation for registration, the organ location is given by a mean density probability map. The atlas segmentation is semi-automatically drawn with ITK-SNAP, using Active Contour Segmentation. Each anatomic structure is identified by a label number. Registration is performed with Elastix, using mutual information and adaptive stochastic gradient optimization. The dataset consists of the MRL scans of ten patients, with lymph nodes manually annotated in consensus by two expert readers. The feature map of the CAD system consists of the multi-atlas and various other features (e.g. normalized intensity and multi-scale blobness). The voxel-based GentleBoost classifier is evaluated using ROC analysis with cross-validation. We show in a set of 10 studies that adding multi-structure, multi-atlas anatomical structure likelihood features improves the quality of the lymph node voxel likelihood map. Multiple-structure anatomy maps may thus make MRL CAD more feasible.
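The authors registered with Elastix; as an approximation, the same affine-then-B-spline strategy can be sketched with plain SimpleITK (all parameter values here are assumptions):

```python
import SimpleITK as sitk

def register(fixed, moving):
    """fixed, moving: sitk.Image volumes with a float pixel type."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    # Stage 1: affine alignment with mutual information
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3)), inPlace=False)
    affine = reg.Execute(fixed, moving)
    resampled = sitk.Resample(moving, fixed, affine, sitk.sitkLinear)
    # Stage 2: B-spline deformable refinement
    bspline = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
    reg.SetInitialTransform(bspline, inPlace=False)
    return affine, reg.Execute(fixed, resampled)
```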
CT-based manual segmentation and evaluation of paranasal sinuses.
Pirner, S; Tingelhoff, K; Wagner, I; Westphal, R; Rilk, M; Wahl, F M; Bootz, F; Eichhorn, Klaus W G
2009-04-01
Manual segmentation of computed tomography (CT) datasets was performed for robot-assisted endoscope movement during functional endoscopic sinus surgery (FESS). Segmented 3D models are needed for the robots' workspace definition. A total of 50 preselected CT datasets were each segmented in 150-200 coronal slices with 24 landmarks being set. Three different colors for segmentation represent diverse risk areas. Extension and volumetric measurements were performed. Three-dimensional reconstruction was generated after segmentation. Manual segmentation took 8-10 h for each CT dataset. The mean volumes were: right maxillary sinus 17.4 cm³, left side 17.9 cm³, right frontal sinus 4.2 cm³, left side 4.0 cm³, total frontal sinuses 7.9 cm³, sphenoid sinus right side 5.3 cm³, left side 5.5 cm³, total sphenoid sinus volume 11.2 cm³. Our manually segmented 3D models present the patient's individual anatomy with a special focus on structures in danger according to the diversely colored risk areas. For safe robot assistance, the high-accuracy models represent an average of the population for anatomical variations, extension and volumetric measurements. They can be used as a database for automatic model-based segmentation. None of the segmentation methods described so far provide risk segmentation. The robot's maximum distance to the segmented border can be adjusted according to the differently colored areas.
Optimal reinforcement of training datasets in semi-supervised landmark-based segmentation
NASA Astrophysics Data System (ADS)
Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž
2015-03-01
Over the last couple of decades, the development of computerized image segmentation has shifted from unsupervised to supervised methods, which made segmentation results more accurate and robust. However, the main disadvantage of supervised segmentation is the need for manual image annotation, which is time-consuming and subject to human error. To reduce the need for manual annotation, we propose a novel learning approach for training dataset reinforcement in the area of landmark-based segmentation, where newly detected landmarks are optimally combined with reference landmarks from the training dataset, thereby enriching the training process. The approach is formulated as a nonlinear optimization problem whose solution is a vector of weighting factors measuring how reliable the detected landmarks are. Detected landmarks found to be more reliable are included in the training procedure with higher weighting factors, whereas those found to be less reliable are included with lower weighting factors. The approach is integrated into the landmark-based game-theoretic segmentation framework and validated on the problem of lung field segmentation from chest radiographs.
Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data
NASA Astrophysics Data System (ADS)
Parida, G.; Rajan, K. S.
2017-05-01
Current methods for object segmentation, extraction and classification of aerial LiDAR data are manual and tedious. This work proposes a technique for object segmentation from LiDAR data. A bottom-up, geometric-rule-based approach is first used to segment buildings from LiDAR datasets. For curved wall surfaces, localized surface normals are compared to segment buildings. The algorithm has been applied to synthetic datasets as well as a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. An advantage of the proposed work is that it requires no data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as seen in supervised techniques. It focuses on extracting the walls of buildings to construct the footprint, rather than the roof; this focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of buildings, with further scope to generate 3D models. Thus, the proposed method can serve as a tool to obtain building footprints in urban landscapes, helping urban planning and the smart cities endeavour.
NASA Astrophysics Data System (ADS)
Ravnik, Domen; Jerman, Tim; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga
2018-03-01
Performance of a convolutional neural network (CNN) based white-matter lesion segmentation in magnetic resonance (MR) brain images was evaluated under various conditions involving different levels of image preprocessing and augmentation and different compositions of the training dataset. On images of sixty multiple sclerosis patients, half acquired on one scanner and half on another from a different vendor, we first created highly accurate multi-rater consensus-based lesion segmentations, which were used in several experiments to evaluate the CNN segmentation result. First, the CNN was trained and tested without preprocessing the images and with various combinations of preprocessing techniques, namely histogram-based intensity standardization, normalization by whitening, and training dataset augmentation by flipping the images across the midsagittal plane. Then, the CNN was trained and tested on images from the same, a different, or interleaved scanner datasets using a cross-validation approach. The results indicate that image preprocessing has little impact on performance in a same-scanner situation, while between-scanner performance benefits most from intensity standardization and normalization, and further from incorporating heterogeneous multi-scanner datasets in the training phase. Under such conditions the between-scanner performance of the CNN approaches that of the ideal situation, in which the CNN is trained and tested on the same scanner dataset.
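The two preprocessing steps that mattered most, whitening-style normalization and midsagittal flipping, are small operations. A minimal sketch, assuming a brain mask is available and that the left-right axis is the last array axis:

```python
import numpy as np

def whiten(volume, mask):
    """Normalize brain intensities to zero mean, unit variance inside mask."""
    v = volume[mask]
    return (volume - v.mean()) / (v.std() + 1e-8)

def flip_augment(volume):
    """Midsagittal flip for dataset augmentation (left-right axis assumed last)."""
    return volume[..., ::-1]
```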
SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects.
Kloster, Michael; Kauer, Gerhard; Beszteri, Bánk
2014-06-25
Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.
Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid
2016-05-01
Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculating clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool for short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms state-of-the-art methods. Excellent agreement with the ground truth was achieved. The validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78 obtained by other methods, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gummeson, Anna; Arvidsson, Ida; Ohlsson, Mattias; Overgaard, Niels C.; Krzyzanowska, Agnieszka; Heyden, Anders; Bjartell, Anders; Aström, Kalle
2017-03-01
Prostate cancer is the most commonly diagnosed cancer in men. The diagnosis is confirmed by pathologists based on visual inspection of prostate biopsies in order to classify them according to Gleason score. The main goal of this paper is to automate this classification using convolutional neural networks (CNNs). The introduction of CNNs has broadened the field of pattern recognition: it replaces the classical approach of designing and extracting hand-made features with the substantially different strategy of letting the computer itself decide which features are of importance. For automated prostate cancer classification into the classes benign and Gleason grades 3, 4 and 5, we propose a CNN with small convolutional filters trained from scratch using stochastic gradient descent with momentum. The input consists of microscopic images of haematoxylin and eosin stained tissue; the output is a coarse segmentation into regions of the four classes. The dataset used consists of 213 images, each considered to be of one class only. Using four-fold cross-validation we obtained an error rate of 7.3%, which is significantly better than the previous state of the art on the same dataset. Although the dataset was rather small, good results were obtained, from which we conclude that CNNs are a promising method for this problem. Future work includes obtaining a larger dataset, which could potentially diminish the error margin.
Semantic Image Segmentation with Contextual Hierarchical Models.
Seyedhosseini, Mojtaba; Tasdizen, Tolga
2016-05-01
Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely recognized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and the outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is based purely on input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).
Moghbel, Mehrdad; Mashohor, Syamsiah; Mahmud, Rozi; Saripan, M. Iqbal Bin
2016-01-01
Segmentation of liver tumors from Computed Tomography (CT) and tumor burden analysis play an important role in the choice of therapeutic strategies for liver diseases and in treatment monitoring. In this paper, a new segmentation method for liver tumors from contrast-enhanced CT imaging is proposed. As manual segmentation of tumors for liver treatment planning is both labor-intensive and time-consuming, a highly accurate automatic tumor segmentation is desired. The proposed framework is fully automatic, requiring no user interaction, and is based on a hybrid method integrating cuckoo optimization and the fuzzy c-means algorithm with the random walkers algorithm, evaluated on real-world clinical data. The accuracy of the proposed method was validated using a clinical liver dataset with one of the largest tumor counts used for liver tumor segmentation, 127 tumors in total, with further validation of the results by a consultant radiologist. The proposed method achieved one of the highest accuracies reported in the literature for liver tumor segmentation, with a mean overlap error of 22.78% and Dice similarity coefficient of 0.75 on the 3Dircadb dataset and a mean overlap error of 15.61% and Dice similarity coefficient of 0.81 on the MIDAS dataset. It outperformed most other tumor segmentation methods reported in the literature and represents a 6% overlap error improvement over one of the best-performing automatic methods in the literature. The proposed framework provided consistently accurate results across the number of tumors and the variations in tumor contrast enhancement and appearance, while the tumor burden was estimated with a mean error of 0.84% on the 3Dircadb dataset. PMID:27540353
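The random walkers stage has a direct counterpart in scikit-image. A hedged sketch, assuming the seed masks come from the preceding cuckoo/fuzzy c-means step; the beta value and variable names are illustrative:

```python
import numpy as np
from skimage.segmentation import random_walker

# ct_slice: 2D CT array; tumour_seed_mask / background_seed_mask: boolean
# arrays of confident seed pixels (assumed to come from the FCM step).
labels = np.zeros_like(ct_slice, dtype=np.uint8)
labels[tumour_seed_mask] = 1      # tumour seeds
labels[background_seed_mask] = 2  # background seeds; 0 = unlabelled
segmentation = random_walker(ct_slice, labels, beta=130, mode='bf')
```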
NASA Astrophysics Data System (ADS)
Nuzhnaya, Tatyana; Bakic, Predrag; Kontos, Despina; Megalooikonomou, Vasileios; Ling, Haibin
2012-02-01
This work is part of our ongoing study aimed at understanding the relation between the topology of anatomical branching structures and the underlying image texture. Morphological variability of the breast ductal network is associated with subsequent development of abnormalities in patients with nipple discharge, such as papilloma, breast cancer and atypia. In this work, we investigate complex dependences among ductal components to perform segmentation, the first step in analyzing the topology of ductal lobes. Our automated framework is based on incorporating a conditional random field with texture descriptors of skewness, coarseness, contrast, energy and fractal dimension. These features are selected to capture the architectural variability of the enhanced ducts by encoding spatial variations between pixel patches in the galactographic image. The segmentation algorithm was applied to a dataset of 20 x-ray galactograms obtained at the Hospital of the University of Pennsylvania. We compared the performance of the proposed approach with fully and semi-automated segmentation algorithms based on neural network classification, fuzzy connectedness, vesselness filtering and graph cuts. Global consistency error and confusion matrix analysis were used as accuracy measures. For the proposed approach, the true positive rate was higher and the false negative rate significantly lower compared to the other fully automated methods. This indicates that segmentation based on a CRF incorporating texture descriptors has the potential to efficiently support analysis of the complex topology of the ducts and aid the development of realistic breast anatomy phantoms.
Bonte, Stijn; Goethals, Ingeborg; Van Holen, Roel
2018-05-07
Brain tumour segmentation in medical images is a very challenging task due to the large variety in tumour shape, position, appearance, scanning modalities and scanning parameters. Most existing segmentation algorithms use information from four different MRI sequences, but since these are often not available, there is a need for a method able to delineate the different tumour tissues from a minimal amount of data. We present a novel approach using a random forests model that combines voxelwise texture and abnormality features on a contrast-enhanced T1 and FLAIR MRI. We transform the two scans into 275 feature maps. A random forest model then calculates the probability of each voxel belonging to one of 4 tumour classes or 5 normal classes. Afterwards, a dedicated voxel clustering algorithm provides the final tumour segmentation. We trained our method on the BraTS 2013 database and validated it on the larger BraTS 2017 dataset. We achieve median Dice scores of 40.9% (low-grade glioma) and 75.0% (high-grade glioma) for delineating the active tumour, and 68.4%/80.1% for the total abnormal region including oedema. Our fully automated brain tumour segmentation algorithm is able to delineate contrast-enhancing tissue and oedema with high accuracy based only on post-contrast T1-weighted and FLAIR MRI, whereas for non-enhancing tumour tissue and necrosis only moderate results are obtained. This makes the method especially suitable for high-grade glioma. Copyright © 2018 Elsevier Ltd. All rights reserved.
Messay, Temesguen; Hardie, Russell C; Tuinstra, Timothy R
2015-05-01
We present new pulmonary nodule segmentation algorithms for computed tomography (CT). These include a fully-automated (FA) system, a semi-automated (SA) system, and a hybrid system. Like most traditional systems, the new FA system requires only a single user-supplied cue point. On the other hand, the SA system represents a new algorithm class requiring 8 user-supplied control points. This does increase the burden on the user, but we show that the resulting system is highly robust and can handle a variety of challenging cases. The proposed hybrid system starts with the FA system. If improved segmentation results are needed, the SA system is then deployed. The FA segmentation engine has 2 free parameters, and the SA system has 3. These parameters are adaptively determined for each nodule in a search process guided by a regression neural network (RNN). The RNN uses a number of features computed for each candidate segmentation. We train and test our systems using the new Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) data. To the best of our knowledge, this is one of the first nodule-specific performance benchmarks using the new LIDC-IDRI dataset. We also compare the performance of the proposed methods with several previously reported results on the same data used by those other methods. Our results suggest that the proposed FA system improves upon the state-of-the-art, and the SA system offers a considerable boost over the FA system. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Novelli, Antonio; Aguilar, Manuel A.; Nemmaoui, Abderrahim; Aguilar, Fernando J.; Tarantino, Eufemia
2016-10-01
This paper presents the first comparison between data from the Sentinel-2 (S2) Multi Spectral Instrument (MSI) and the Landsat 8 (L8) Operational Land Imager (OLI) aimed at greenhouse detection. Two scenes closely related in time, one for each sensor, were classified using Object-Based Image Analysis and Random Forest (RF). The RF input consisted of several object-based features computed from the spectral bands, including mean values, spectral indices and textural features. The S2 and L8 comparisons were also extended using a common segmentation dataset extracted from VHR WorldView-2 (WV2) imagery, to test differences due only to their specific spectral contributions. The best band combinations for segmentation were found through a modified version of the Euclidean Distance 2 index. Four different RF classification schemes were considered, achieving best overall accuracies of 89.1%, 91.3%, 90.9% and 93.4%, respectively, evaluated over the whole study area.
Convolutional networks for vehicle track segmentation
NASA Astrophysics Data System (ADS)
Quach, Tu-Thach
2017-10-01
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images of the same scene taken at different times, rely on simple and fast models to label track pixels. These models, however, are unable to capture natural track features such as continuity and parallelism. More powerful but computationally expensive models can be used in offline settings. We present an approach that uses dilated convolutional networks consisting of a series of 3×3 convolutions to segment vehicle tracks. The design of our networks reflects the fact that remote sensing applications tend to operate with low power and limited training data. As a result, we aim for small, efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our six-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from 0.959 obtained by the current state-of-the-art method.
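A minimal sketch of a six-layer dilated network of 3×3 convolutions; the filter counts and dilation rates are assumptions the abstract does not specify. Growing dilation rates enlarge the receptive field without pooling, so the output keeps full resolution for per-pixel track labelling:

```python
from tensorflow import keras
from tensorflow.keras import layers

inp = layers.Input((None, None, 1))   # single-channel CCD image
x = inp
for rate in (1, 1, 2, 4, 8, 1):       # six 3x3 convolutions, varying dilation
    x = layers.Conv2D(32, 3, padding='same', dilation_rate=rate,
                      activation='relu')(x)
out = layers.Conv2D(1, 1, activation='sigmoid')(x)  # track probability map
model = keras.Model(inp, out)
model.compile(optimizer='adam', loss='binary_crossentropy')
```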
Kalpathy-Cramer, Jayashree; Zhao, Binsheng; Goldgof, Dmitry; Gu, Yuhua; Wang, Xingwei; Yang, Hao; Tan, Yongqiang; Gillies, Robert; Napel, Sandy
2016-08-01
Tumor volume estimation, as well as accurate and reproducible border segmentation in medical images, is important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and the volume estimation bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of the lung nodule segmentation algorithms by assessing several measures of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05), and was significantly higher on the phantom dataset than on the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need to assess performance on clinical data in addition to phantoms. The algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially for a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.
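The two kinds of measurement compared in the study can be sketched as follows, assuming binary segmentation masks and a known voxel volume; the example masks are synthetic.

```python
# Sketch of a spatial-overlap measure (Dice) and a phantom volume bias.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Spatial overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volume_bias(mask: np.ndarray, true_volume_ul: float, voxel_ul: float) -> float:
    """Signed error of the measured volume against a known (phantom) volume."""
    return mask.sum() * voxel_ul - true_volume_ul

seg_a = np.zeros((64, 64, 64), bool); seg_a[20:40, 20:40, 20:40] = True
seg_b = np.roll(seg_a, 2, axis=0)  # a slightly shifted repeat run
print(f"Dice = {dice(seg_a, seg_b):.3f}")
print(f"bias = {volume_bias(seg_a, true_volume_ul=8.0, voxel_ul=0.001):+.2f} ul")
```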
NASA Astrophysics Data System (ADS)
Hopp, T.; Zapf, M.; Ruiter, N. V.
2014-03-01
An essential processing step for comparison of Ultrasound Computer Tomography images to other modalities, as well as for use in further image processing, is to segment the breast from the background. In this work we present a (semi-)automated 3D segmentation method based on the detection of the breast boundary in coronal slice images and a subsequent surface fitting. The method was evaluated using a software phantom and in-vivo data. The fully automatically processed phantom results showed that a segmentation of approximately 10% of the slices of a dataset is sufficient to recover the overall breast shape. Application to 16 in-vivo datasets was performed successfully using semi-automated processing, i.e., using a graphical user interface for manual corrections of the automated breast boundary detection. The processing time for the segmentation of an in-vivo dataset was reduced significantly, by a factor of four, compared to a fully manual segmentation. Comparison to manually segmented images identified a smoother surface for the semi-automated segmentation, with an average of 11% differing voxels and an average surface deviation of 2 mm. Limitations of the edge detection may be overcome by future updates of the KIT USCT system, allowing fully automated use of our segmentation approach.
Multi-dimension feature fusion for action recognition
NASA Astrophysics Data System (ADS)
Dong, Pei; Li, Jie; Dong, Junyu; Qi, Lin
2018-04-01
Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. The challenge for action recognition is to capture and fuse the multi-dimensional information in video data. To take these characteristics into account simultaneously, we present a novel method that fuses features from multiple dimensions, such as chromatic images, depth and optical flow fields. We built our model on multi-stream deep convolutional networks with the help of temporal segment networks, and extract discriminative spatial and temporal features by fusing ConvNet towers across dimensions, in which different feature weights are assigned in order to take full advantage of this multi-dimensional information. Our architecture is trained and evaluated on the currently largest and most challenging benchmark, the NTU RGB+D dataset. The experiments demonstrate that our method outperforms the state-of-the-art methods.
Fluid Lensing based Machine Learning for Augmenting Earth Science Coral Datasets
NASA Astrophysics Data System (ADS)
Li, A.; Instrella, R.; Chirayath, V.
2016-12-01
Recently, there has been increased interest in monitoring the effects of climate change upon the world's marine ecosystems, particularly coral reefs. These delicate ecosystems are especially threatened due to their sensitivity to ocean warming and acidification, leading to unprecedented levels of coral bleaching and die-off in recent years. However, current global aquatic remote sensing datasets are unable to quantify changes in marine ecosystems at spatial and temporal scales relevant to their growth. In this project, we employ various supervised and unsupervised machine learning algorithms to augment existing datasets from NASA's Earth Observing System (EOS), using high-resolution airborne imagery. This method utilizes NASA's ongoing airborne campaigns as well as its spaceborne assets to collect remote sensing data over these afflicted regions, and employs Fluid Lensing algorithms to resolve optical distortions caused by the fluid surface, producing cm-scale resolution imagery of these diverse ecosystems from airborne platforms. Support Vector Machines (SVMs) and K-means clustering methods were applied to satellite imagery at 0.5 m resolution, producing segmented maps classifying coral based on percent cover and morphology. Compared to a previous study using multidimensional maximum a posteriori (MAP) estimation to separate these features in high-resolution airborne datasets, SVMs are able to achieve above 75% accuracy when augmented with existing MAP estimates, while unsupervised methods such as K-means achieve roughly 68% accuracy, verified by manually segmented reference data provided by a marine biologist. This effort thus has broad applications for coastal remote sensing, helping marine biologists quantify behavioral trends spanning large areas and longer timescales, and assess the health of coral reefs worldwide.
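The supervised/unsupervised comparison can be sketched as below, assuming per-pixel spectral feature vectors from the imagery; feature dimensions and class counts are illustrative.

```python
# SVM vs K-means sketch on simulated per-pixel spectral features.
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))             # e.g. 4 spectral bands per pixel
y = rng.integers(0, 3, size=1000)          # coral cover/morphology classes

svm = SVC(kernel="rbf").fit(X[:800], y[:800])
print("SVM accuracy:", accuracy_score(y[800:], svm.predict(X[800:])))

labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)
# Cluster labels must still be matched to reference classes before scoring.
```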
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2017-04-01
With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.
Shot boundary detection and label propagation for spatio-temporal video segmentation
NASA Astrophysics Data System (ADS)
Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David
2015-02-01
This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
Qiu, Yuchen; Yan, Shiju; Gundreddy, Rohith Reddy; Wang, Yunzhi; Cheng, Samuel; Liu, Hong; Zheng, Bin
2017-01-01
PURPOSE To develop and test a deep learning based computer-aided diagnosis (CAD) scheme of mammograms for classifying between malignant and benign masses. METHODS An image dataset involving 560 regions of interest (ROIs) extracted from digital mammograms was used. After down-sampling each ROI from 512×512 to 64×64 pixels, we applied an 8-layer deep learning network that involves 3 pairs of convolution-max-pooling layers for automatic feature extraction and a multiple layer perceptron (MLP) classifier for feature categorization to process the ROIs. The 3 convolution layers contain 20, 10, and 5 feature maps, respectively. Each convolution layer is connected with a max-pooling layer to improve feature robustness. The output of the sixth layer is fully connected with an MLP classifier, which is composed of one hidden layer and one logistic regression layer. The network then generates a classification score to predict the likelihood of the ROI depicting a malignant mass. A four-fold cross-validation method was applied to train and test this deep learning network. RESULTS The results revealed that this CAD scheme yields an area under the receiver operating characteristic curve (AUC) of 0.696±0.044, 0.802±0.037, 0.836±0.036, and 0.822±0.035 for the fold 1 to 4 testing datasets, respectively. The overall AUC for the entire dataset is 0.790±0.019. CONCLUSIONS This study demonstrates the feasibility of applying a deep learning based CAD scheme to classify between malignant and benign breast masses without lesion segmentation or an image feature computation and selection process. PMID:28436410
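The described network can be sketched in PyTorch as follows; the 5×5 kernels and 100-unit hidden layer are assumptions, since the abstract specifies only the layer count and the 20/10/5 feature maps.

```python
# 8-layer CAD network sketch: three conv/max-pool pairs (20, 10, 5 maps),
# then an MLP with one hidden layer and a logistic output.
import torch
import torch.nn as nn

class MammoCNN(nn.Module):
    def __init__(self, hidden=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(20, 10, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2), # 32 -> 16
            nn.Conv2d(10, 5, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(5 * 8 * 8, hidden), nn.ReLU(),  # MLP hidden layer
            nn.Linear(hidden, 1),                     # logistic regression layer
        )

    def forward(self, x):                 # x: (batch, 1, 64, 64) ROI
        return torch.sigmoid(self.classifier(self.features(x)))

scores = MammoCNN()(torch.randn(8, 1, 64, 64))  # malignancy likelihoods in (0, 1)
print(scores.shape)
```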
Large scale validation of the M5L lung CAD on heterogeneous CT datasets.
Torres, E Lopez; Fiorina, E; Pennazio, F; Peroni, C; Saletta, M; Camarlinghi, N; Fantacci, M E; Cerello, P
2015-04-01
M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on a voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13) so as to minimize the risk of poor generalization, a concern given the large difference in size between the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which has not previously been reported in the literature. The lungCAM and M5L performance is consistent across the databases, with sensitivities of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. Reduced sensitivity is found for subtle nodules and ground-glass opacity (GGO) structures. A comparison with other CAD systems is also presented. The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although a dedicated module for GGO detection could further improve it, as could an iterative optimization of the training procedure. The main aim of the present study was accomplished: M5L results do not deteriorate as the dataset size increases, making it a candidate for supporting radiologists in large-scale screening and clinical programs.
3D geometric split-merge segmentation of brain MRI datasets.
Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis
2014-05-01
In this paper, a novel method for MRI volume segmentation based on region-adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume splitting technique divides the entire volume into a number of large homogeneous 3D regions, while at the same time defining more clearly the small homogeneous regions within the volume, in such a way that they have a greater probability of surviving the subsequent merging step. Region merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume splitting procedure does not require training data, and it demonstrates improved segmentation performance on noisy brain MRI datasets when compared to state-of-the-art methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
Elleithy, Khaled; Elleithy, Abdelrahman
2018-01-01
An eye exam can be as efficacious as a physical one in detecting health concerns. Retina screening can provide the very first clue to a variety of hidden health issues, including pre-diabetes and diabetes. In clinical diagnosis and prognosis, ophthalmologists rely heavily on the binary segmented version of the retinal fundus image, where the accuracy of the segmented vessels, optic disc, and abnormal lesions strongly affects diagnostic accuracy, which in turn affects the subsequent clinical treatment steps. This paper proposes an automated retinal fundus image segmentation system composed of three segmentation subsystems that follow the same core segmentation algorithm. Despite broad differences in features and characteristics, retinal vessels, the optic disc, and exudate lesions are extracted by each subsystem without the need for texture analysis or synthesis. For the sake of compact diagnosis and complete clinical insight, our proposed system can detect these anatomical structures in one session with high accuracy, even in pathological retina images. The proposed system uses a robust hybrid segmentation algorithm that combines adaptive fuzzy thresholding and mathematical morphology. The proposed system is validated using four benchmark datasets: DRIVE and STARE (vessels), DRISHTI-GS (optic disc), and DIARETDB1 (exudate lesions). Competitive segmentation performance is achieved, outperforming a variety of up-to-date systems and demonstrating the capacity to deal with other heterogeneous anatomical structures. PMID:29888146
Wahab, Noorul; Khan, Asifullah; Lee, Yeon Soo
2017-06-01
Different types of breast cancer are affecting the lives of women across the world. Common types include ductal carcinoma in situ (DCIS), invasive ductal carcinoma (IDC), tubular carcinoma, medullary carcinoma, and invasive lobular carcinoma (ILC). When detecting cancer, one important factor is the mitotic count, which shows how rapidly the cells are dividing. But the class imbalance problem, due to the small number of mitotic nuclei in comparison to the overwhelming number of non-mitotic nuclei, affects the performance of classification models. This work presents a two-phase model to mitigate the class imbalance issue while classifying mitotic and non-mitotic nuclei in breast cancer histopathology images through a deep convolutional neural network (CNN). First, nuclei are segmented out using blue-ratio and global binary thresholding. In Phase 1, a CNN is then trained on the segmented 80×80 pixel patches from a standard dataset. Hard non-mitotic examples are identified and augmented; mitotic examples are oversampled by rotation and flipping; whereas non-mitotic examples are undersampled by blue-ratio-histogram-based k-means clustering. Based on this information from Phase 1, the dataset is modified for Phase 2 in order to reduce the effects of class imbalance. The proposed CNN architecture and data balancing technique yielded an F-measure of 0.79, outperforming all methods relying on specific handcrafted features, as well as those using a combination of handcrafted and CNN-generated features. Copyright © 2017 Elsevier Ltd. All rights reserved.
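The two balancing steps can be sketched as follows, assuming the segmented 80×80 patches are NumPy arrays; the augmentation set and the use of cluster centroids for undersampling follow the abstract loosely, and the details are assumptions.

```python
# Minority oversampling by rotation/flipping, and majority undersampling
# by keeping the examples nearest to k-means centroids.
import numpy as np
from sklearn.cluster import KMeans

def oversample(patches: np.ndarray) -> np.ndarray:
    """Augment minority (mitotic) patches by rotation and flipping."""
    aug = [patches]
    for k in (1, 2, 3):
        aug.append(np.rot90(patches, k, axes=(1, 2)))
    aug.append(patches[:, ::-1, :])   # vertical flip
    aug.append(patches[:, :, ::-1])   # horizontal flip
    return np.concatenate(aug)

def undersample(features: np.ndarray, n_keep: int) -> np.ndarray:
    """Indices of majority (non-mitotic) examples nearest to k-means centroids."""
    km = KMeans(n_clusters=n_keep, n_init=10, random_state=0).fit(features)
    idx = [np.argmin(np.linalg.norm(features - c, axis=1)) for c in km.cluster_centers_]
    return np.unique(idx)

mitotic = np.random.rand(50, 80, 80)
print(oversample(mitotic).shape)                       # (300, 80, 80): 6x the minority
print(undersample(np.random.rand(500, 16), 100).size)  # kept majority examples
```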
Holistic approach for automated background EEG assessment in asphyxiated full-term infants
NASA Astrophysics Data System (ADS)
Matic, Vladimir; Cherian, Perumpillichira J.; Koolen, Ninah; Naulaers, Gunnar; Swarte, Renate M.; Govaert, Paul; Van Huffel, Sabine; De Vos, Maarten
2014-12-01
Objective. To develop an automated algorithm to quantify background EEG abnormalities in full-term neonates with hypoxic ischemic encephalopathy. Approach. The algorithm classifies 1 h of continuous neonatal EEG (cEEG) into a mild, moderate or severe background abnormality grade. These classes are well established in the literature and a clinical neurophysiologist labeled 272 1 h cEEG epochs selected from 34 neonates. The algorithm is based on adaptive EEG segmentation and mapping of the segments into the so-called segments’ feature space. Three features are suggested and further processing is obtained using a discretized three-dimensional distribution of the segments’ features represented as a 3-way data tensor. Further classification has been achieved using recently developed tensor decomposition/classification methods that reduce the size of the model and extract a significant and discriminative set of features. Main results. Effective parameterization of cEEG data has been achieved resulting in high classification accuracy (89%) to grade background EEG abnormalities. Significance. For the first time, the algorithm for the background EEG assessment has been validated on an extensive dataset which contained major artifacts and epileptic seizures. The demonstrated high robustness, while processing real-case EEGs, suggests that the algorithm can be used as an assistive tool to monitor the severity of hypoxic insults in newborns.
Local curvature analysis for classifying breast tumors: Preliminary analysis in dedicated breast CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Juhun, E-mail: leej15@upmc.edu; Nishikawa, Robert M.; Reiser, Ingrid
2015-09-15
Purpose: The purpose of this study is to measure the effectiveness of local curvature measures as novel image features for classifying breast tumors. Methods: A total of 119 breast lesions from 104 noncontrast dedicated breast computed tomography images of women were used in this study. Volumetric segmentation was done using a seed-based segmentation algorithm, and a triangulated surface was then extracted from the resulting segmentation. Total, mean, and Gaussian curvatures were then computed. Normalized curvatures were used as classification features. In addition, traditional image features were also extracted, and a forward feature selection scheme was used to select the optimal feature set. Logistic regression was used as a classifier, and leave-one-out cross-validation was utilized to evaluate the classification performance of the features. The area under the receiver operating characteristic curve (AUC) was used as a figure of merit. Results: Among the curvature measures, the normalized total curvature (C_T) showed the best classification performance (AUC of 0.74), while the others showed no classification power individually. Five traditional image features (two shape, two margin, and one texture descriptor) were selected via the feature selection scheme, and the resulting classifier achieved an AUC of 0.83. Among those five features, the radial gradient index (RGI), a margin descriptor, showed the best classification performance (AUC of 0.73). A classifier combining RGI and C_T yielded an AUC of 0.81, similar performance (i.e., no statistically significant difference) to the classifier with the above five traditional image features. Additional comparisons in AUC values between classifiers using different combinations of traditional image features and C_T were conducted. The results showed that C_T was able to replace the other four image features for the classification task. Conclusions: The normalized curvature measure contains useful information for classifying breast tumors. Using it, one can reduce the number of features in a classifier, which may result in classifiers that are more robust across different datasets.
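The evaluation protocol (logistic regression with leave-one-out cross-validation, scored by AUC) can be sketched as below; the feature values are simulated stand-ins for the curvature and margin descriptors.

```python
# LOOCV logistic-regression AUC sketch on simulated two-feature data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(119, 2))          # e.g. [RGI, normalized total curvature C_T]
y = rng.integers(0, 2, size=119)       # benign vs malignant

probs = cross_val_predict(LogisticRegression(), X, y,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
print(f"LOOCV AUC = {roc_auc_score(y, probs):.3f}")
```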
Computer-aided classification of breast masses using contrast-enhanced digital mammograms
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Aghaei, Faranak; Heidari, Morteza; Wu, Teresa; Patel, Bhavika; Zheng, Bin
2018-02-01
By taking advantage of both mammography and breast MRI, contrast-enhanced digital mammography (CEDM) has emerged as a new promising imaging modality to improve the efficacy of breast cancer screening and diagnosis. The primary objective of this study is to develop and evaluate a new computer-aided detection and diagnosis (CAD) scheme of CEDM images to classify between malignant and benign breast masses. A CEDM dataset consisting of 111 patients (33 benign and 78 malignant) was retrospectively assembled. Each case includes two types of images, namely low-energy (LE) and dual-energy subtracted (DES) images. First, the CAD scheme applied a hybrid segmentation method to automatically segment masses depicted on LE and DES images separately. Optimal segmentation results from DES images were also mapped to LE images and vice versa. Next, a set of 109 quantitative image features related to mass shape and density heterogeneity was initially computed. Last, four multilayer perceptron-based machine learning classifiers, integrated with a correlation-based feature subset evaluator and a leave-one-case-out cross-validation method, were built to classify mass regions depicted on LE and DES images, respectively. Initially, when the CAD scheme was applied to the original segmentations of DES and LE images, the areas under the ROC curves were 0.7585±0.0526 and 0.7534±0.0470, respectively. After optimal segmentation mapping from DES to LE images, the AUC value of the CAD scheme significantly increased to 0.8477±0.0376 (p<0.01). Since DES images eliminate the overlapping effect of dense breast tissue on lesions, segmentation accuracy was significantly improved compared with regular mammograms. The study demonstrated that computer-aided classification of breast masses using CEDM images yields higher performance.
NASA Astrophysics Data System (ADS)
B. Shokouhi, Shahriar; Fooladivanda, Aida; Ahmadinejad, Nasrin
2017-12-01
A computer-aided detection (CAD) system is introduced in this paper for the detection of breast lesions in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The proposed CAD system first compensates for motion artifacts and segments the breast region. Then, potential lesion voxels are detected and used as the initial seed points for the seeded region-growing algorithm. A new and robust region-growing algorithm incorporating Fuzzy C-means (FCM) clustering and a vesselness filter is proposed to segment any potential lesion regions. Subsequently, false positive detections are reduced by applying a discrimination step based on the 3D morphological characteristics of the potential lesion regions and kinetic features, which are fed to a support vector machine (SVM) classifier. The performance of the proposed CAD system is evaluated using the free-response operating characteristic (FROC) curve. We introduce our collected dataset, which includes 76 DCE-MRI studies with 63 malignant and 107 benign lesions. The prepared dataset has been used to verify the accuracy of the proposed CAD system. At 5.29 false positives per case, the CAD system accurately detects 94% of the breast lesions.
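A toy sketch of the seeded region-growing core is given below; it grows from a seed by absorbing 6-connected neighbors whose intensity stays within a tolerance of the seed intensity. The FCM clustering and vesselness terms of the paper's algorithm are omitted, and the tolerance is an assumption.

```python
# Seeded region growing over a 3D volume (breadth-first flood from a seed).
import numpy as np
from collections import deque

def region_grow(vol, seed, tol=0.1):
    region = np.zeros(vol.shape, bool)
    region[seed] = True
    seed_val, queue = vol[seed], deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in [(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)]:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) and not region[n]:
                if abs(vol[n] - seed_val) < tol:   # intensity close to the seed
                    region[n] = True
                    queue.append(n)
    return region

vol = np.random.rand(32, 32, 32); vol[10:20, 10:20, 10:20] += 1.0  # bright lesion
mask = region_grow(vol, (15, 15, 15), tol=0.6)
print(mask.sum(), "voxels in the grown region")
```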
Khan, Shujaat; Naseem, Imran; Togneri, Roberto; Bennamoun, Mohammed
2018-01-01
In extreme cold weather, living organisms produce antifreeze proteins (AFPs) to counter the otherwise lethal intracellular formation of ice. The structures and sequences of various AFPs exhibit a high degree of heterogeneity; consequently, the prediction of AFPs is considered a challenging task. In this research, we propose to handle this arduous manifold learning task using the notion of localized processing. In particular, an AFP sequence is segmented into two sub-segments, each of which is analyzed for amino acid and di-peptide compositions. We propose to use only the most significant features, selected using the concept of information gain (IG), followed by a random forest classification approach. The proposed RAFP-Pred achieved excellent performance on a number of standard datasets. We report a high Youden's index (sensitivity+specificity-1) value of 0.75 on the standard independent test dataset, outperforming AFP-PseAAC, AFP_PSSM, AFP-Pred, and iAFP by margins of 0.05, 0.06, 0.14, and 0.68, respectively. The verification rate on the UniProtKB dataset is 83.19 percent, which is substantially superior to the 57.18 percent reported for the iAFP method.
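The selection-then-classification pipeline can be sketched as follows; information gain is approximated here with scikit-learn's mutual information estimator, and the feature matrix is a simulated stand-in for the composition features.

```python
# IG-style feature selection followed by a random forest classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.random(size=(300, 440))        # amino acid/di-peptide compositions per segment
y = rng.integers(0, 2, size=300)       # AFP vs non-AFP

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=50),   # keep the most informative features
    RandomForestClassifier(n_estimators=200, random_state=0),
).fit(X, y)
print(model.predict(X[:5]))
```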
Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel
2017-01-01
We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute-demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node, with smart task assignment strategies, resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× in the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725
Automatic exudate detection by fusing multiple active contours and regionwise classification.
Harangi, Balazs; Hajdu, Andras
2014-11-01
In this paper, we propose a method for the automatic detection of exudates in digital fundus images. Our approach can be divided into three stages: candidate extraction, precise contour segmentation, and the labeling of candidates as true or false exudates. For candidate detection, we borrow a grayscale morphology-based method to identify possible regions containing these bright lesions. Then, to extract the precise boundary of the candidates, we introduce a complex active contour-based method. Namely, to increase the accuracy of segmentation, we extract additional possible contours by taking advantage of the diverse behavior of different pre-processing methods. After selecting an appropriate combination of the extracted contours, a region-wise classifier is applied to remove the false exudate candidates. For this task, we consider several region-based features and extract an appropriate feature subset to train a Naïve Bayes classifier, optimized further by an adaptive boosting technique. The method was tested on publicly available databases, both to measure the accuracy of the segmentation of exudate regions and to recognize their presence at the image level. In a quantitative evaluation on publicly available datasets, the proposed approach outperformed several state-of-the-art exudate detection algorithms. Copyright © 2014 Elsevier Ltd. All rights reserved.
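The final labeling stage can be sketched as a boosted Naive Bayes classifier over region-based features, as below; the feature values and dimensions are illustrative.

```python
# AdaBoost-boosted Gaussian Naive Bayes over per-candidate region features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 10))     # region-based features per exudate candidate
y = rng.integers(0, 2, size=400)   # true vs false exudate

clf = AdaBoostClassifier(GaussianNB(), n_estimators=50).fit(X, y)
keep = clf.predict(X) == 1         # candidates retained as true exudates
print(keep.sum(), "candidates kept")
```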
Association between mammogram density and background parenchymal enhancement of breast MRI
NASA Astrophysics Data System (ADS)
Aghaei, Faranak; Danala, Gopichandh; Wang, Yunzhi; Zarafshani, Ali; Qian, Wei; Liu, Hong; Zheng, Bin
2018-02-01
Breast density has been widely considered an important risk factor for breast cancer. The purpose of this study is to examine the association between mammographic density results and the background parenchymal enhancement (BPE) of breast MRI. A dataset of breast MR images was acquired from 65 high-risk women. Based on mammographic density (BIRADS) results, the dataset was divided into two groups of low and high breast density cases. The low-density group has 15 cases with mammographic density BIRADS 1 and 2, while the high-density group includes 50 cases, rated by radiologists as mammographic density BIRADS 3 and 4. A computer-aided detection (CAD) scheme was applied to segment and register breast regions depicted on sequential images of the breast MRI scans. The CAD scheme computed 20 global BPE features from the two breast regions together, separately from the left and right breast regions, and from the bilateral difference between the left and right breast regions. An image feature selection method, namely the CFS method, was applied to remove the most redundant features and select optimal features from the initial feature pool. Then, a logistic regression classifier was built using the optimal features to predict mammographic density from the BPE features. Using a leave-one-case-out validation method, the classifier yields an accuracy of 82% and an area under the ROC curve of AUC=0.81±0.09. A box-plot based analysis also shows a negative association between mammographic density results and BPE features in the MRI images. This study demonstrated a negative association between mammographic density and the BPE of breast MRI images.
Fu, Min; Wu, Wenming; Hong, Xiafei; Liu, Qiuhua; Jiang, Jialin; Ou, Yaobin; Zhao, Yupei; Gong, Xinqi
2018-04-24
Efficient computational recognition and segmentation of target organs in medical images are foundational for diagnosis and treatment, especially for pancreatic cancer. In practice, the diversity in appearance of the pancreas and other abdominal organs makes detailed texture information important for segmentation algorithms. According to our observations, however, the structures of previous networks, such as the Richer Feature Convolutional Network (RCF), are too coarse to segment the object (pancreas) accurately, especially its edges. In this paper, we extend the RCF, originally proposed for edge detection, to the challenging task of pancreas segmentation, and put forward a novel pancreas segmentation network. By employing a multi-layer up-sampling structure to replace the simple up-sampling operation in all stages, the proposed network fully exploits the multi-scale detailed contexture information of the object (pancreas) to perform per-pixel segmentation. Additionally, we train our network on CT scans to obtain an effective pipeline. With our multi-layer up-sampling model, the pipeline achieves better performance than the RCF in the task of single-object (pancreas) segmentation. Combined with multi-scale input, we achieve a DSC (Dice Similarity Coefficient) of 76.36% on the testing data. The results of our experiments show that our advanced model works better than previous networks on our dataset; in other words, it has a better ability to capture detailed contexture information. Therefore, our new single-object segmentation model has practical value in computational automatic diagnosis.
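The multi-layer up-sampling idea can be sketched in PyTorch as below: rather than one large resize from a coarse stage map to full resolution, the map is up-sampled in repeated ×2 steps, each refined by a 3×3 convolution. Channel counts and the number of steps are assumptions.

```python
# Multi-layer up-sampling sketch: stepwise x2 bilinear up-sampling, each
# step refined by a 3x3 convolution, ending in a 1x1 per-pixel head.
import torch
import torch.nn as nn

class MultiLayerUpsample(nn.Module):
    def __init__(self, ch, steps):
        super().__init__()
        blocks = []
        for _ in range(steps):  # one x2 up-sampling step per layer
            blocks += [nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                       nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU()]
        self.up = nn.Sequential(*blocks, nn.Conv2d(ch, 1, 1))

    def forward(self, x):
        return self.up(x)

stage_map = torch.randn(1, 16, 32, 32)                    # coarse 1/8-resolution map
print(MultiLayerUpsample(16, steps=3)(stage_map).shape)   # -> (1, 1, 256, 256)
```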
NASA Astrophysics Data System (ADS)
Venkataraman, Sankar; Li, Wenjing
2008-03-01
Image analysis for automated diagnosis of cervical cancer has attained high prominence in the last decade. Automated image analysis at all levels requires a basic segmentation of the region of interest (ROI) within a given image. The precision of the diagnosis is often reflected by the precision in detecting the initial region of interest, especially when features outside the ROI mimic those within it. The work described here discusses algorithms used to improve the cervical region of interest as part of automated cervical image diagnosis. A vital visual aid in diagnosing cervical cancer is the aceto-whitening of the cervix after the application of acetic acid. Color and texture are used to segment acetowhite regions within the cervical ROI. Vaginal walls and cotton swabs sometimes mimic these essential features, leading to several false positives. The work presented here focuses on detecting in-focus vaginal wall boundaries and then extrapolating them to exclude vaginal walls from the cervical ROI. In addition, a marker-controlled watershed segmentation is used to detect cotton swabs within the cervical ROI. A dataset comprising 50 high-resolution images of the cervix, acquired 60 seconds after acetic acid application, was used to test the algorithm. Of the 50 images, 27 benefited from a new cervical ROI. Significant improvement in overall diagnosis was observed in these images, as false positives caused by features outside the actual ROI mimicking acetowhite regions were eliminated.
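A marker-controlled watershed of the kind used for swab detection can be sketched with scikit-image as follows; the intensity thresholds used to place markers are illustrative assumptions.

```python
# Marker-controlled watershed: flood a gradient relief from hand-placed markers.
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

image = np.random.rand(256, 256)          # stand-in for a cervigram channel
elevation = sobel(image)                  # gradient magnitude as the relief

markers = np.zeros_like(image, dtype=int)
markers[image < 0.2] = 1                  # background marker
markers[image > 0.8] = 2                  # bright (swab-like) region marker

labels = watershed(elevation, markers)    # regions flooded from the markers
print(np.unique(labels))
```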
Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets
Jeong, Won-Ki; Beyer, Johanna; Hadwiger, Markus; Vazquez, Amelio; Pfister, Hanspeter; Whitaker, Ross T.
2011-01-01
Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. PMID:19834227
NASA Astrophysics Data System (ADS)
Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin
2018-03-01
Both conventional and deep machine learning have been used to develop decision-support tools in medical imaging informatics. To take advantage of both conventional and deep learning approaches, this study investigates the feasibility of applying a locality preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighbor (KNN) machine learning classifier using the LPP-generated feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 (p < 0.05), and the odds ratio was 4.60 with a 95% confidence interval of [3.16, 6.70]. The study demonstrated that this new LPP-based feature regeneration approach produces an optimal feature vector and yields improved performance in predicting the risk of women having breast cancer detected in the next subsequent mammography screening.
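A textbook LPP (heat-kernel weights on a k-NN graph, solved as a generalized eigenproblem) followed by KNN can be sketched as below; this is a generic implementation under assumed hyperparameters, not the authors' code.

```python
# Locality preserving projection (LPP) sketch: project 44 features to 4,
# then classify with a k-nearest neighbor model.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph, KNeighborsClassifier

def lpp(X, n_components=4, k=10):
    D2 = kneighbors_graph(X, k, mode="distance").toarray()
    mask = D2 > 0
    t = (D2[mask] ** 2).mean()                 # heat-kernel bandwidth from the data
    W = np.where(mask, np.exp(-D2 ** 2 / t), 0.0)
    W = np.maximum(W, W.T)                     # symmetrize the k-NN graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])
    vals, vecs = eigh(A, B)                    # smallest eigenvalues preserve locality
    return vecs[:, :n_components]

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 44)); y = rng.integers(0, 2, size=500)
P = lpp(X)                                     # 44 -> 4 features, as in the study
knn = KNeighborsClassifier(n_neighbors=5).fit(X @ P, y)
print(knn.predict((X @ P)[:5]))
```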
WE-E-17A-06: Assessing the Scale of Tumor Heterogeneity by Complete Hierarchical Segmentation On MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gensheimer, M; Trister, A; Ermoian, R
2014-06-15
Purpose: In many cancers, intratumoral heterogeneity exists in vascular and genetic structure. We developed an algorithm which uses clinical imaging to interrogate different scales of heterogeneity. We hypothesize that heterogeneity of perfusion at large distance scales may correlate with propensity for disease recurrence. We applied the algorithm to initial diagnosis MRI of rhabdomyosarcoma patients to predict recurrence. Methods: The Spatial Heterogeneity Analysis by Recursive Partitioning (SHARP) algorithm recursively segments the tumor image. The tumor is repeatedly subdivided, with each dividing line chosen to maximize the signal intensity difference between the two subregions. This process continues to the voxel level, producing segments at multiple scales. Heterogeneity is measured by comparing signal intensity histograms between each segmented region and the adjacent region. We measured the scales of contrast enhancement heterogeneity of the primary tumor in 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival (RFS). For comparison with existing methods, fractal and Haralick texture features were also calculated. Results: The complete segmentation produced by SHARP allows extraction of diverse features, including the amount of heterogeneity at various distance scales, the area of the tumor with the most heterogeneity at each scale, and, for a given point in the tumor, the heterogeneity at different scales. 10 of 18 rhabdomyosarcoma patients suffered disease recurrence. On contrast-enhanced MRI, a larger scale of maximum signal intensity heterogeneity, relative to tumor diameter, predicted shorter RFS (p=0.05). Fractal dimension, fractal fit, and three Haralick features did not predict RFS (p=0.09-0.90). Conclusion: SHARP produces an automatic segmentation of tumor regions and reports the amount of heterogeneity at various distance scales. In rhabdomyosarcoma, RFS was shorter when the primary tumor exhibited a larger scale of heterogeneity on contrast-enhanced MRI. If validated on a larger dataset, this imaging biomarker could be useful in personalizing treatment.
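The recursive splitting step can be illustrated with a toy one-dimensional version, shown below: each region is divided at the position that maximizes the mean-intensity difference between the two parts, and the split positions are recorded across scales. This is a simplification of SHARP for illustration only.

```python
# Toy recursive partitioning: split a 1-D intensity profile at the point
# of maximal mean difference, then recurse into both halves.
import numpy as np

def best_split(signal: np.ndarray) -> int:
    """Index maximizing the mean-intensity difference of the two parts."""
    diffs = [abs(signal[:i].mean() - signal[i:].mean()) for i in range(1, len(signal))]
    return 1 + int(np.argmax(diffs))

def sharp_1d(signal, lo=0, splits=None, min_len=4):
    splits = [] if splits is None else splits
    if len(signal) >= 2 * min_len:
        i = best_split(signal)
        splits.append(lo + i)                 # record position/scale of this split
        sharp_1d(signal[:i], lo, splits, min_len)
        sharp_1d(signal[i:], lo + i, splits, min_len)
    return splits

profile = np.concatenate([np.ones(32), 3 * np.ones(32), 2 * np.ones(16)])
print(sorted(sharp_1d(profile)))  # split positions, coarse to fine scales
```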
Method of generating features optimal to a dataset and classifier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.
A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.
White blood cells identification system based on convolutional deep neural learning networks.
Shahin, A I; Guo, Yanhui; Amin, K M; Sharawi, Amr A
2017-11-16
White blood cell (WBC) differential counting yields valuable information about human health and disease. Currently developed automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBC identification systems can be increased. Classifying small, limited datasets through deep learning systems is a major challenge and should be investigated. In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, classification of a limited balanced WBC dataset is performed using WBCsNet as a pre-trained network. In our experiments, three different public WBC datasets (2551 images) were used, containing five healthy WBC types. The overall system accuracy achieved by the proposed WBCsNet is 96.1%, which is higher than that of the different transfer learning approaches and of the previous traditional identification system. We also present feature visualizations of the WBCsNet activations, which reflect a higher response than the pre-trained ones. In conclusion, a novel WBC identification system based on deep learning is proposed, and the high-performance WBCsNet can be employed as a pre-trained network. Copyright © 2017. Published by Elsevier B.V.
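The first transfer-learning route (deep activation features feeding a traditional classifier) can be sketched as follows; the ResNet-18 backbone and SVM are assumptions standing in for the several pre-trained networks the paper evaluates, and the labels here are placeholders.

```python
# Deep activation features from a pre-trained CNN, fed to an SVM.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Downloads ImageNet weights on first use.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()            # drop the ImageNet head, keep 512-d features
backbone.eval()

with torch.no_grad():
    cells = torch.randn(32, 3, 224, 224)   # stand-in WBC image batch
    feats = backbone(cells).numpy()        # deep activation features

rng = np.random.default_rng(7)
labels = rng.integers(0, 5, size=32)       # placeholders for the five WBC types
clf = SVC().fit(feats, labels)
```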
NASA Astrophysics Data System (ADS)
Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra
2018-03-01
The present research proposes a fully automatic algorithm for the classification of three-dimensional (3-D) optical coherence tomography (OCT) scans, separating patients with an abnormal macula from normal candidates. The proposed method does not require any denoising, segmentation, or retinal alignment to assess the intraretinal layers or abnormality and lesion structures. To classify abnormal cases against the control group, a two-stage scheme is utilized, consisting of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model is introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of the 3-D volumes are extracted. In the second stage, the presence of abnormalities in the 3-D OCTs is scored over the extracted features. Two different retinal SD-OCT datasets are used to evaluate the algorithm, based on an unbiased fivefold cross-validation (CV) approach. The first set consists of 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured with a Topcon device. The second, publicly available set consists of 45 subjects, with 15 patients in each of the age-related macular degeneration, DME, and normal classes, from a Heidelberg device. Applying the algorithm to the full OCT volumes with 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset1 as a two-class classification problem and 98.67% on dataset2 as a three-class classification task.
Liu, Yu-Ying; Ishikawa, Hiroshi; Chen, Mei; Wollstein, Gadi; Duker, Jay S; Fujimoto, James G; Schuman, Joel S; Rehg, James M
2011-10-21
To develop an automated method to identify the normal macula and three macular pathologies (macular hole [MH], macular edema [ME], and age-related macular degeneration [AMD]) from the fovea-centered cross sections in three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) images. A sample of SD-OCT macular scans (macular cube 200 × 200 or 512 × 128 scan protocol; Cirrus HD-OCT; Carl Zeiss Meditec, Inc., Dublin, CA) was obtained from healthy subjects and subjects with MH, ME, and/or AMD (dataset for development: 326 scans from 136 subjects [193 eyes]; dataset for testing: 131 scans from 37 subjects [58 eyes]). A fovea-centered cross-sectional slice for each of the SD-OCT images was encoded using spatially distributed multiscale texture and shape features. Three ophthalmologists labeled each fovea-centered slice independently, and the majority opinion for each pathology was used as the ground truth. Machine learning algorithms were used to identify the discriminative features automatically. Two-class support vector machine classifiers were trained to identify the presence of the normal macula and each of the three pathologies separately. The area under the receiver operating characteristic curve (AUC) was calculated to assess performance. The cross-validation AUC results on the development dataset were 0.976, 0.931, 0.939, and 0.938, and the AUC results on the holdout testing set were 0.978, 0.969, 0.941, and 0.975, for identifying the normal macula, MH, ME, and AMD, respectively. The proposed automated data-driven method successfully identified various macular pathologies (all AUC > 0.94). This method may effectively identify the discriminative features without relying on a potentially error-prone segmentation module.
NASA Astrophysics Data System (ADS)
Pezoa, Raquel; Salinas, Luis; Torres, Claudio; Härtel, Steffen; Maureira-Fredes, Cristián; Arce, Paola
2016-10-01
Breast cancer is one of the most common cancers in women worldwide. Patient therapy is widely supported by analysis of immunohistochemically (IHC) stained tissue sections. In particular, the analysis of HER2 overexpression by immunohistochemistry helps to determine when patients are suitable for HER2-targeted treatment. Computational HER2 overexpression analysis is still an open problem and a challenging task, principally because of the variability of immunohistochemistry tissue samples and the subjectivity of the specialists assessing them. In addition, the immunohistochemistry process can produce diverse artifacts that hinder HER2 overexpression assessment. In this paper we study the segmentation of HER2 overexpression in IHC-stained breast cancer tissue images using a support vector machine (SVM) classifier. We assess SVM performance using diverse color and texture pixel-level features, including the RGB, CMYK, HSV, and CIE L*a*b* color spaces, a color deconvolution filter, and Haralick features. We measure classification performance on three datasets containing a total of 153 IHC images that were previously labeled by a pathologist.
Road Traffic Anomaly Detection via Collaborative Path Inference from GPS Snippets
Wang, Hongtao; Wen, Hui; Yi, Feng; Zhu, Hongsong; Sun, Limin
2017-01-01
A road traffic anomaly denotes a road segment that is anomalous in terms of the traffic flow of vehicles. Detecting road traffic anomalies from GPS (Global Positioning System) snippet data is becoming critical in urban computing, since such anomalies often suggest underlying events. However, the noisy and sparse nature of GPS snippet data raises multiple problems that make the detection of road traffic anomalies very challenging. To address these issues, we propose a two-stage solution which consists of two components: a Collaborative Path Inference (CPI) model and a Road Anomaly Test (RAT) model. The CPI model performs path inference incorporating both static and dynamic features into a Conditional Random Field (CRF). Dynamic context features are learned collaboratively from large GPS snippets via a tensor decomposition technique. Then RAT calculates the anomalous degree for each road segment from the inferred fine-grained trajectories in given time intervals. We evaluated our method using a large scale real world dataset, which includes one month of GPS location data from more than eight thousand taxicabs in Beijing. The evaluation results show the advantages of our method over other baseline techniques. PMID:28282948
Gandomkar, Ziba; Brennan, Patrick C.; Mello-Thoms, Claudia
2017-01-01
Context: Previous studies showed that the agreement among pathologists in the recognition of mitoses in breast slides is fairly modest. Aims: Determining the significantly different quantitative features among easily identifiable mitoses, challenging mitoses, and miscounted nonmitoses within breast slides, and identifying which color spaces capture the differences among groups better than others. Materials and Methods: The dataset contained 453 mitoses and 265 miscounted objects in breast slides. The mitoses were grouped into three categories based on the confidence degree of three pathologists who annotated them. The mitoses annotated as “probably a mitosis” by the majority of pathologists were considered as the challenging category. The miscounted objects were recognized as a mitosis or probably a mitosis by only one of the pathologists. The mitoses were segmented using k-means clustering, followed by morphological operations. Morphological, intensity-based, and textural features were extracted from the segmented area and also from the image patch of 63 × 63 pixels in different channels of eight color spaces. Holistic features describing the mitoses' surrounding cells of each image were also extracted. Statistical Analysis Used: The Kruskal–Wallis H-test followed by the Tukey-Kramer test was used to identify significantly different features. Results: The results indicated that challenging mitoses were smaller and rounder compared to other mitoses. Among the different features, the Gabor textural features differed more than others between challenging mitoses and easily identifiable ones. Sizes of the nonmitoses were similar to those of easily identifiable mitoses, but nonmitoses were rounder. The intensity-based features from chromatin channels were the most discriminative features between the easily identifiable mitoses and the miscounted objects. Conclusions: Quantitative features can be used to describe the characteristics of challenging mitoses and miscounted nonmitotic objects. PMID:28966834
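The statistical pipeline named above is straightforward to reproduce; the sketch below, assuming SciPy, uses hypothetical roundness values for the three groups and substitutes Mann-Whitney U tests with a Bonferroni correction for the Tukey-Kramer post hoc step.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical roundness values for the three annotated groups
easy = rng.normal(0.60, 0.10, 200)
challenging = rng.normal(0.70, 0.10, 120)
nonmitosis = rng.normal(0.75, 0.10, 150)

# Kruskal-Wallis H-test: do the three groups share a distribution?
h_stat, p_value = stats.kruskal(easy, challenging, nonmitosis)
print(f"H = {h_stat:.2f}, p = {p_value:.3g}")

# Follow-up pairwise comparisons (Mann-Whitney U with a Bonferroni
# correction here, standing in for the Tukey-Kramer step)
pairs = [("easy vs challenging", easy, challenging),
         ("easy vs nonmitosis", easy, nonmitosis),
         ("challenging vs nonmitosis", challenging, nonmitosis)]
for name, a, b in pairs:
    u, p = stats.mannwhitneyu(a, b)
    print(f"{name}: corrected p = {min(1.0, 3 * p):.3g}")
```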
Query-by-example surgical activity detection.
Gao, Yixin; Vedula, S Swaroop; Lee, Gyusung I; Lee, Mija R; Khudanpur, Sanjeev; Hager, Gregory D
2016-06-01
Easy acquisition of surgical data opens many opportunities to automate skill evaluation and teaching. Current technology to search tool motion data for surgical activity segments of interest is limited by the need for manual pre-processing, which can be prohibitive at scale. We developed a content-based information retrieval method, query-by-example (QBE), to automatically detect activity segments within surgical data recordings of long duration that match a query. The example segment of interest (query) and the surgical data recording (target trial) are time series of kinematics. Our approach includes an unsupervised feature learning module using a stacked denoising autoencoder (SDAE), two scoring modules based on asymmetric subsequence dynamic time warping (AS-DTW) and template matching, respectively, and a detection module. A distance matrix of the query against the trial is computed using the SDAE features, followed by AS-DTW combined with template scoring, to generate a ranked list of candidate subsequences (substrings). To evaluate the quality of the ranked list against the ground-truth, thresholding conventional DTW distances and bipartite matching are applied. We computed the recall, precision, F1-score, and a Jaccard index-based score on three experimental setups. We evaluated our QBE method using a suture throw maneuver as the query, on two tool motion datasets (JIGSAWS and MISTIC-SL) captured in a training laboratory. We observed a recall of 93, 90 and 87 % and a precision of 93, 91, and 88 % with same surgeon same trial (SSST), same surgeon different trial (SSDT) and different surgeon (DS) experiment setups on JIGSAWS, and a recall of 87, 81 and 75 % and a precision of 72, 61, and 53 % with SSST, SSDT and DS experiment setups on MISTIC-SL, respectively. We developed a novel, content-based information retrieval method to automatically detect multiple instances of an activity within long surgical recordings. Our method demonstrated adequate recall across different complexity datasets and experimental conditions.
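For readers unfamiliar with subsequence matching, the following sketch shows generic subsequence dynamic time warping in NumPy under simplifying assumptions (1D series, absolute-difference cost); it is not the authors' AS-DTW, which operates on SDAE feature distances.

```python
import numpy as np

def subsequence_dtw(query, target):
    """Best matching cost of `query` against any subsequence of `target`.

    Unlike classic DTW, the first row of the accumulated-cost matrix is
    all zeros, so a match may start anywhere in the target; the minimum
    of the last row marks the best end position.
    """
    n, m = len(query), len(target)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, :] = 0.0  # free start anywhere in the target
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - target[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j],      # stretch query
                                   acc[i, j - 1],      # stretch target
                                   acc[i - 1, j - 1])  # match
    end = int(np.argmin(acc[n, 1:])) + 1
    return acc[n, end], end

query = np.sin(np.linspace(0, 3, 40))
target = np.concatenate([np.zeros(100),
                         np.sin(np.linspace(0, 3, 55)),
                         np.zeros(80)])
cost, end_idx = subsequence_dtw(query, target)
print(f"best cost {cost:.3f}, match ends at index {end_idx}")
```

Ranking candidate subsequences by this cost, as the pipeline does, then reduces detection to thresholding the ranked list.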
Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas
2011-01-01
In order to improve the reproducibility and objectivity of fluorescence microscopy based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation method's parameters, which is time-consuming and challenging for biologists with no knowledge of image processing. To avoid this, parameters of the presented methods automatically adapt to user generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed transform based segmentation routine is replaced by a fast marching level set based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporation of multimodal information improves segmentation quality for the presented fluorescent datasets.
Dolz, Jose; Laprie, Anne; Ken, Soléakhéna; Leroy, Henri-Arthur; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien
2016-01-01
To constrain the risk of severe toxicity in radiotherapy and radiosurgery, precise volume delineation of organs at risk is required. This task is still performed manually, which is time-consuming and prone to observer variability. To address these issues, and as an alternative to atlas-based segmentation methods, machine learning techniques, such as support vector machines (SVM), have recently been presented to segment subcortical structures on magnetic resonance images (MRI). SVM is proposed to segment the brainstem on MRI in a multicenter brain cancer context. A dataset composed of 14 adult brain MRI scans is used to evaluate its performance. In addition to spatial and probabilistic information, five different image intensity values (IIVs) configurations are evaluated as features to train the SVM classifier. Segmentation accuracy is evaluated by computing the Dice similarity coefficient (DSC), absolute volume difference (AVD) and percentage volume difference between automatic and manual contours. Mean DSC for all proposed IIVs configurations ranged from 0.89 to 0.90. Mean AVD values were below 1.5 cm³, where the value for the best performing IIVs configuration was 0.85 cm³, representing an absolute mean difference of 3.99% with respect to the manually segmented volumes. Results suggest consistent volume estimation and high spatial similarity with respect to expert delineations. The proposed approach outperformed previously presented methods to segment the brainstem, not only in volume similarity metrics, but also in segmentation time. Preliminary results showed that the approach might be promising for adoption in clinical use.
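The two headline metrics, DSC and AVD, reduce to a few lines of NumPy; in this sketch the masks and voxel volume are hypothetical stand-ins for the automatic and manual brainstem contours.

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """DSC = 2|A intersect M| / (|A| + |M|) for binary masks."""
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    return 2.0 * np.logical_and(a, m).sum() / (a.sum() + m.sum())

def absolute_volume_difference(auto_mask, manual_mask, voxel_volume_cm3):
    """AVD in cm^3: |V_auto - V_manual|, V = voxel count * voxel volume."""
    return abs(int(auto_mask.sum()) - int(manual_mask.sum())) * voxel_volume_cm3

# Hypothetical 3D masks on a 1 mm isotropic grid (1 voxel = 0.001 cm^3)
rng = np.random.default_rng(1)
manual = rng.random((64, 64, 64)) > 0.7
auto = manual.copy()
auto[:2] = False  # simulate a small under-segmentation at one edge
print(f"DSC = {dice_coefficient(auto, manual):.3f}")
print(f"AVD = {absolute_volume_difference(auto, manual, 0.001):.2f} cm^3")
```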
Rashno, Abdolreza; Nazari, Behzad; Koozekanani, Dara D.; Drayna, Paul M.; Sadri, Saeed; Rabbani, Hossein
2017-01-01
A fully-automated method based on graph shortest path, graph cut and neutrosophic (NS) sets is presented for fluid segmentation in OCT volumes for exudative age related macular degeneration (EAMD) subjects. The proposed method includes three main steps: 1) The inner limiting membrane (ILM) and the retinal pigment epithelium (RPE) layers are segmented using proposed methods based on graph shortest path in the NS domain. A flattened RPE boundary is calculated such that all three types of fluid regions, intra-retinal, sub-retinal and sub-RPE, are located above it. 2) Seed points for fluid (object) and tissue (background) are initialized for graph cut by the proposed automated method. 3) A new cost function is proposed in kernel space, and is minimized with max-flow/min-cut algorithms, leading to a binary segmentation. Important properties of the proposed steps are proven and the quantitative performance of each step is analyzed separately. The proposed method is evaluated using a publicly available dataset referred to as Optima and a local dataset from the UMN clinic. For fluid segmentation in 2D individual slices, the proposed method outperforms the previously proposed methods by 18% and 21% with respect to the dice coefficient and sensitivity, respectively, on the Optima dataset, and by 16%, 11% and 12% with respect to the dice coefficient, sensitivity and precision, respectively, on the local UMN dataset. Finally, for 3D fluid volume segmentation, the proposed method achieves a true positive rate (TPR) and false positive rate (FPR) of 90% and 0.74%, respectively, with a correlation of 95% between automated and expert manual segmentations using linear regression analysis. PMID:29059257
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez Torres, E.; Fiorina, E.; Pennazio, F.
Purpose: M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. Methods: M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of lacking generalization, which could be possible given the large difference between the size of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which is not yet found in the literature. Results: The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacity (GGO) structures. A comparison with other CAD systems is also presented. Conclusions: The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGO detection could further improve it, as could an iterative optimization of the training procedure. The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large scale screenings and clinical programs.
NASA Astrophysics Data System (ADS)
Cairns, D.; Byrne, J. M.; Jiskoot, H.; McKenzie, J. M.; Johnson, D. L.
2013-12-01
Groundwater controls many aspects of water quantity and quality in mountain watersheds. Groundwater recharge and flow originating in mountain watersheds are often difficult to quantify due to challenges in the characterization of the local geology, as subsurface data are sparse and difficult to collect. Remote sensing data are more readily available and are beneficial for the characterization of watershed hydrodynamics. We present an automated geomorphometric model to identify the approximate spatial distribution of geomorphic features, and to segment each of these features based on relative hydrostratigraphic differences. A digital elevation model (DEM) dataset and predefined indices are used as inputs in a mountain watershed. The model uses periglacial, glacial, fluvial, slope evolution and lacustrine processes to identify regions that are subsequently delineated using morphometric principles. A 10 m cell size DEM from the headwaters of the St. Mary River watershed in Glacier National Park, Montana, was considered sufficient for this research. Morphometric parameters extracted from the DEM that were found to be useful for the calibration of the model were elevation, slope, flow direction, flow accumulation, and surface roughness. Algorithms were developed to utilize these parameters and delineate the distributions of bedrock outcrops, periglacial landscapes, alluvial channels, fans and outwash plains, glacial depositional features, talus slopes, and other mass wasted material. Theoretical differences in sedimentation and hydrofacies associated with each of the geomorphic features were used to segment the watershed into units reflecting similar hydrogeologic properties such as hydraulic conductivity and thickness. The results of the model were verified by comparing the distribution of geomorphic features with published geomorphic maps. Although differences in semantics between the datasets caused difficulties, a consensus comparison yielded a Dice coefficient of 0.65. The results can be used to assist in groundwater model calibration, or to estimate spatial differences in near-surface groundwater behaviour. Verification of the geomorphometric model would be augmented by evaluating its success after use in the calibration of the groundwater simulation. These results may also be used directly in momentum-based equations to create a stochastic routing routine beneath the soil interface for a hydrometeorological model.
Ansari, A H; Cherian, P J; Dereymaeker, A; Matic, V; Jansen, K; De Wispelaere, L; Dielman, C; Vervisch, J; Swarte, R M; Govaert, P; Naulaers, G; De Vos, M; Van Huffel, S
2016-09-01
After identifying the most seizure-relevant characteristics with a previously developed heuristic classifier, a data-driven post-processor using a novel set of features is applied to improve the performance. The main characteristics of the outputs of the heuristic algorithm are extracted by five sets of features, including synchronization, evolution, retention, segment, and signal features. Then, a support vector machine and a decision making layer remove the falsely detected segments. Four datasets including 71 neonates (1023 h, 3493 seizures), recorded in two different university hospitals, are used to train and test the algorithm without removing the dubious seizures. The heuristic method resulted in a false alarm rate of 3.81 per hour and a good detection rate of 88% on the entire test databases. The post-processor effectively reduces the false alarm rate by 34%, while the good detection rate decreases by 2%. This post-processing technique improves the performance of the heuristic algorithm. The structure of this post-processor is generic, improves our understanding of the core visually determined EEG features of neonatal seizures, and is applicable to other neonatal seizure detectors. The post-processor significantly decreases the false alarm rate at the expense of a small reduction of the good detection rate. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Chen, Yinsheng; Li, Zeju; Wu, Guoqing; Yu, Jinhua; Wang, Yuanyuan; Lv, Xiaofei; Ju, Xue; Chen, Zhongping
2018-07-01
Due to the totally different therapeutic regimens needed for primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM), accurate differentiation of the two diseases by noninvasive imaging techniques is important for clinical decision-making. Thirty cases of PCNSL and 66 cases of GBM with conventional T1-contrast magnetic resonance imaging (MRI) were analyzed in this study. A convolutional neural network was used to segment tumors automatically. A modified scale invariant feature transform (SIFT) method was utilized to extract three-dimensional local voxel arrangement information from the segmented tumors. Fisher vectors were proposed to normalize the dimension of the SIFT features. An improved genetic algorithm (GA) was used to extract the SIFT features with PCNSL and GBM discrimination ability. The dataset was divided into a cross-validation cohort and an independent validation cohort at a ratio of 2:1. A support vector machine with leave-one-out cross-validation based on 20 cases of PCNSL and 44 cases of GBM was employed to build and validate the differentiation model. Among 16,384 high-throughput features, 1356 features showed significant differences between PCNSL and GBM with p < 0.05, and 420 features with p < 0.001. A total of 496 features were finally chosen by the improved GA algorithm. The proposed method produces PCNSL vs. GBM differentiation with an area under the curve (AUC) of 99.1% (98.2%), accuracy of 95.3% (90.6%), sensitivity of 85.0% (80.0%) and specificity of 100% (95.5%) on the cross-validation cohort (and independent validation cohort). Owing to the local voxel arrangement characterization provided by the SIFT features, the proposed method produced more competitive PCNSL and GBM differentiation performance using conventional MRI than methods based on advanced MRI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rios Velazquez, E; Parmar, C; Narayan, V
Purpose: To compare the complementary value of quantitative radiomic features to that of radiologist-annotated semantic features in predicting EGFR mutations in lung adenocarcinomas. Methods: Pre-operative CT images of 258 lung adenocarcinoma patients were available. Tumors were segmented using the single-click ensemble segmentation algorithm. A set of radiomic features was extracted using 3D-Slicer. Test-retest reproducibility and unsupervised dimensionality reduction were applied to select a subset of reproducible and independent radiomic features. Twenty semantic annotations were scored by an expert radiologist, describing the tumor, surrounding tissue and associated findings. Minimum-redundancy-maximum-relevance (MRMR) was used to identify the most informative radiomic and semantic features in 172 patients (training set, temporal split). Radiomic, semantic and combined radiomic-semantic logistic regression models to predict EGFR mutations were evaluated in an independent validation dataset of 86 patients using the area under the receiver operating curve (AUC). Results: EGFR mutations were found in 77/172 (45%) and 39/86 (45%) of the training and validation sets, respectively. Univariate AUCs showed a similar range for both feature types: radiomics median AUC = 0.57 (range: 0.50 – 0.62); semantic median AUC = 0.53 (range: 0.50 – 0.64, Wilcoxon p = 0.55). After MRMR feature selection, the best-performing radiomic, semantic, and radiomic-semantic logistic regression models for EGFR mutations showed a validation AUC of 0.56 (p = 0.29), 0.63 (p = 0.063) and 0.67 (p = 0.004), respectively. Conclusion: Quantitative volumetric and textural radiomic features complement the qualitative and semi-quantitative radiologist annotations. The prognostic value of informative qualitative semantic features such as cavitation and lobulation is increased with the addition of quantitative textural features from the tumor region.
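The model-comparison step can be sketched with scikit-learn as follows; the feature matrices and mutation labels here are random placeholders standing in for the MRMR-selected radiomic and semantic features, so the printed AUCs only demonstrate the workflow.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_train, n_val = 172, 86
# Hypothetical selected features after MRMR (5 radiomic, 5 semantic)
X_rad = rng.normal(size=(n_train + n_val, 5))
X_sem = rng.normal(size=(n_train + n_val, 5))
y = rng.integers(0, 2, n_train + n_val)  # EGFR mutation status (0/1)

def validation_auc(X):
    """Fit on the training split, score AUC on the held-out split."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X[:n_train], y[:n_train])
    scores = model.predict_proba(X[n_train:])[:, 1]
    return roc_auc_score(y[n_train:], scores)

for name, X in [("radiomic", X_rad), ("semantic", X_sem),
                ("combined", np.hstack([X_rad, X_sem]))]:
    print(f"{name}: validation AUC = {validation_auc(X):.2f}")
```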
The Stream-Catchment (StreamCat) and Lake-Catchment ...
Background/Question/Methods: Lake and stream conditions respond to both natural and human-related landscape features. Characterizing these features within the contributing areas (i.e., delineated watersheds) of streams and lakes could improve our understanding of how biological conditions vary spatially and improve the use, management, and restoration of these aquatic resources. However, the specialized geospatial techniques required to define and characterize stream and lake watersheds have limited their widespread use in both scientific and management efforts at large spatial scales. We developed the StreamCat and LakeCat Datasets to model, predict, and map the probable biological conditions of streams and lakes across the conterminous US (CONUS). Both StreamCat and LakeCat contain watershed-level characterizations of several hundred natural (e.g., soils, geology, climate, and land cover) and anthropogenic (e.g., urbanization, agriculture, mining, and forest management) landscape features for ca. 2.6 million stream segments and 376,000 lakes across the CONUS, respectively. These datasets can be paired with field samples to provide independent variables for modeling and other analyses. We paired 1,380 stream and 1,073 lake samples from the USEPA's National Aquatic Resource Surveys with StreamCat and LakeCat and used random forest (RF) to model and then map an invertebrate condition index and chlorophyll a concentration, respectively. Results/Conclusions: The invertebrate
Statistical evaluation of manual segmentation of a diffuse low-grade glioma MRI dataset.
Ben Abdallah, Meriem; Blonski, Marie; Wantz-Mezieres, Sophie; Gaudeau, Yann; Taillandier, Luc; Moureaux, Jean-Marie
2016-08-01
Software-based manual segmentation is critical to the supervision of diffuse low-grade glioma patients and to the choice of optimal treatment. However, as manual segmentation is time-consuming, it is difficult to include it in the clinical routine. An alternative to circumvent the time cost of manual segmentation could be to share the task among different practitioners, provided it can be reproduced. The goal of our work is to assess the reproducibility of manual segmentation of diffuse low-grade gliomas on MRI scans, with regard to practitioners, their experience and field of expertise. A panel of 13 experts manually segmented 12 diffuse low-grade glioma clinical MRI datasets using the OSIRIX software. A statistical analysis gave promising results, as the practitioner factor, the medical specialty and the years of experience seem to have no significant impact on the average values of the tumor volume variable.
Tumor propagation model using generalized hidden Markov model
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dustin
2017-02-01
Tumor tracking and progression analysis using medical images is a crucial task for physicians to provide accurate and efficient treatment plans, and monitor treatment response. Tumor progression is tracked by manual measurement of tumor growth performed by radiologists. Several methods have been proposed to automate these measurements with segmentation, but many current algorithms are confounded by attached organs and vessels. To address this problem, we present a new generalized tumor propagation model considering time-series prior images and local anatomical features using a Hierarchical Hidden Markov model (HMM) for tumor tracking. First, we apply the multi-atlas segmentation technique to identify organs/sub-organs using pre-labeled atlases. Second, we apply a semi-automatic direct 3D segmentation method to label the initial boundary between the lesion and neighboring structures. Third, we detect vessels in the ROI surrounding the lesion. Finally, we apply the propagation model with the labeled organs and vessels to accurately segment and measure the target lesion. The algorithm has been designed in a general way to be applicable to various body parts and modalities. In this paper, we evaluate the proposed algorithm on lung and lung nodule segmentation and tracking. We report the algorithm's performance by comparing the longest diameter and nodule volumes using the FDA lung Phantom data and a clinical dataset.
Deep learning and texture-based semantic label fusion for brain tumor segmentation
NASA Astrophysics Data System (ADS)
Vidyaratne, L.; Alam, M.; Shboul, Z.; Iftekharuddin, K. M.
2018-02-01
Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation algorithms, a texture based hand-crafted method and a deep learning based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating each method's inherent weaknesses: the extensive false positives of the texture based method and the false tumor tissue classification problem of the deep learning method, respectively. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. Notably, the substantial improvement in brain tumor segmentation performance achieved in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.
Squeeze-SegNet: a new fast deep convolutional neural network for semantic segmentation
NASA Astrophysics Data System (ADS)
Nanfack, Geraldin; Elhassouny, Azeddine; Oulad Haj Thami, Rachid
2018-04-01
Recent research on deep convolutional neural networks has focused on improving accuracy, providing significant advances. Initially limited to classification tasks, these networks have become very useful in higher-level tasks such as object detection and pixel-wise semantic segmentation, thanks to contributions from the scientific communities embarking on this field. Deep learning has thus advanced the state of the art in semantic segmentation accuracy; however, the resulting architectures are difficult to deploy in embedded systems, as is the case for autonomous driving. We present a new deep fully convolutional neural network for pixel-wise semantic segmentation which we call Squeeze-SegNet. The architecture is based on the encoder-decoder style. We use a SqueezeNet-like encoder and a decoder formed by our proposed squeeze-decoder module and an upsampling layer that reuses downsampling indices as in SegNet, and we add a deconvolution layer to provide the final multi-channel feature map. On datasets such as CamVid and Cityscapes, our network achieves SegNet-level accuracy with about 10 times fewer parameters than SegNet.
Lahiri, A; Roy, Abhijit Guha; Sheet, Debdoot; Biswas, Prabir Kumar
2016-08-01
Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which manifest variations in their physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained denoising stacked autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble, formed by different network architectures, ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in learning a dictionary of visual kernels for vessel segmentation. A softmax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of the ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.
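A single denoising autoencoder level, the building block of the ensemble described above, might look as follows in PyTorch; the patch size, hidden width, and noise level are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One denoising autoencoder level: corrupt the input patch with
    Gaussian noise, then reconstruct the clean version; the encoder
    output serves as a learned feature for vessel classification."""
    def __init__(self, n_in=25, n_hidden=64, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # corruption
        return self.decoder(self.encoder(noisy))

# Unsupervised pre-training on hypothetical 5x5 fundus patches in [0, 1]
patches = torch.rand(1024, 25)
dae = DenoisingAutoencoder()
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(dae(patches), patches)  # reconstruct the clean input
    loss.backward()
    opt.step()

with torch.no_grad():
    features = dae.encoder(patches)  # inputs to the softmax classifier
```

In the ensemble scheme the abstract describes, several such networks would be trained on different bootstrap samples and with different hidden widths, and their classifier outputs fused.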
Learning to merge: a new tool for interactive mapping
NASA Astrophysics Data System (ADS)
Porter, Reid B.; Lundquist, Sheng; Ruggiero, Christy
2013-05-01
The task of turning raw imagery into semantically meaningful maps and overlays is a key area of remote sensing activity. Image analysts, in applications ranging from environmental monitoring to intelligence, use imagery to generate and update maps of terrain, vegetation, road networks, buildings and other relevant features. Often these tasks can be cast as a pixel labeling problem, and several interactive pixel labeling tools have been developed. These tools exploit training data, which is generated by analysts using simple and intuitive paint-program annotation tools, in order to tailor the labeling algorithm for the particular dataset and task. In other cases, the task is best cast as a pixel segmentation problem. Interactive pixel segmentation tools have also been developed, but these tools typically do not learn from training data like the pixel labeling tools do. In this paper we investigate tools for interactive pixel segmentation that also learn from user input. The input has the form of segment merging (or grouping). Merging examples are 1) easily obtained from analysts using vector annotation tools, and 2) more challenging to exploit than traditional labels. We outline the key issues in developing these interactive merging tools, and describe their application to remote sensing.
An efficient global energy optimization approach for robust 3D plane segmentation of point clouds
NASA Astrophysics Data System (ADS)
Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian
2018-03-01
Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate an initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performance of the proposed method is evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)
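The hybrid region growing initialization can be illustrated with a generic plane-based region growing over the supervoxel adjacency graph; this sketch uses simple angle and point-to-plane thresholds and omits the authors' global energy refinement.

```python
import numpy as np
from collections import deque

def grow_planar_regions(normals, centers, adjacency,
                        angle_tol_deg=10.0, dist_tol=0.05):
    """Greedy region growing over a supervoxel adjacency graph.

    normals:   (N, 3) unit plane normals of the basic units
    centers:   (N, 3) unit centroids
    adjacency: dict {i: [neighbor indices]}
    A unit joins a region when its normal agrees with the seed's within
    angle_tol_deg and its centroid lies near the seed's plane (dist_tol).
    """
    n = len(normals)
    labels = -np.ones(n, dtype=int)
    cos_tol = np.cos(np.radians(angle_tol_deg))
    region = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            cur = queue.popleft()
            for nb in adjacency.get(cur, []):
                if labels[nb] != -1:
                    continue
                same_dir = abs(normals[nb] @ normals[seed]) > cos_tol
                on_plane = abs((centers[nb] - centers[seed])
                               @ normals[seed]) < dist_tol
                if same_dir and on_plane:
                    labels[nb] = region
                    queue.append(nb)
        region += 1
    return labels

# Toy usage: three coplanar units in a chain, one off-plane unit
normals = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1], [1, 0, 0]], float)
centers = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 1]], float)
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(grow_planar_regions(normals, centers, adjacency))  # -> [0 0 0 1]
```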
Automated unsupervised multi-parametric classification of adipose tissue depots in skeletal muscle
Valentinitsch, Alexander; Karampinos, Dimitrios C.; Alizai, Hamza; Subburaj, Karuppasamy; Kumar, Deepak; Link, Thomas M.; Majumdar, Sharmila
2012-01-01
Purpose: To introduce and validate an automated unsupervised multi-parametric method for segmentation of the subcutaneous fat and muscle regions in order to determine subcutaneous adipose tissue (SAT) and intermuscular adipose tissue (IMAT) areas based on data from a quantitative chemical shift-based water-fat separation approach. Materials and Methods: Unsupervised standard k-means clustering was employed to define sets of similar features (k = 2) within the whole multi-modal image after the water-fat separation. The automated image processing chain was composed of three primary stages including tissue, muscle and bone region segmentation. The algorithm was applied on calf and thigh datasets to compute SAT and IMAT areas and was compared to a manual segmentation. Results: The IMAT area using the automatic segmentation had excellent agreement with the IMAT area using the manual segmentation for all the cases in the thigh (R2: 0.96) and for cases with up to moderate IMAT area in the calf (R2: 0.92). The group with the highest grade of muscle fat infiltration in the calf had the highest error in the inner SAT contour calculation. Conclusion: The proposed multi-parametric segmentation approach combined with quantitative water-fat imaging provides an accurate and reliable method for automated calculation of the SAT and IMAT areas, considerably reducing the total post-processing time. PMID:23097409
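A minimal version of the k = 2 clustering stage, assuming scikit-learn; the water/fat feature images are synthetic stand-ins, and the rule for picking the adipose cluster is a simple heuristic, not necessarily the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical multi-parametric voxel features from water-fat imaging:
# per-voxel water image, fat image, and fat fraction for one 2D slice.
rng = np.random.default_rng(0)
h, w = 128, 128
water = rng.random((h, w))
fat = rng.random((h, w))
fat_fraction = fat / (water + fat + 1e-8)

# Stack features per voxel and cluster into k = 2 groups
features = np.stack([water, fat, fat_fraction], axis=-1).reshape(-1, 3)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
label_map = labels.reshape(h, w)

# Identify the adipose cluster as the one with higher mean fat fraction
# (a simple heuristic consistent with the feature design)
ff = fat_fraction.reshape(-1)
fat_cluster = int(ff[labels == 1].mean() > ff[labels == 0].mean())
adipose_mask = label_map == fat_cluster
```

The subsequent tissue/muscle/bone stages would then operate on `adipose_mask` and its complement with morphological and region-based rules.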
MITK-based segmentation of co-registered MRI for subject-related regional anesthesia simulation
NASA Astrophysics Data System (ADS)
Teich, Christian; Liao, Wei; Ullrich, Sebastian; Kuhlen, Torsten; Ntouba, Alexandre; Rossaint, Rolf; Ullisch, Marcus; Deserno, Thomas M.
2008-03-01
Despite steadily increasing indications, regional anesthesia is still trained directly on the patient. To develop a virtual reality (VR)-based simulation, a patient model is needed containing several tissues, which have to be extracted from individual magnetic resonance imaging (MRI) volume datasets. Due to the given modality and the different characteristics of the single tissues, an adequate segmentation can only be achieved by using a combination of segmentation algorithms. In this paper, we present a framework for creating an individual model from MRI scans of the patient. Our work splits into two parts. First, an easy-to-use and extensible tool for handling the segmentation task on arbitrary datasets is provided. The key idea is to let the user create a segmentation for the given subject by running different processing steps in a purposive order and store them in a segmentation script for reuse on new datasets. For data handling and visualization, we utilize the Medical Imaging Interaction Toolkit (MITK), which is based on the Visualization Toolkit (VTK) and the Insight Segmentation and Registration Toolkit (ITK). The second part is to find suitable segmentation algorithms and corresponding parameters for differentiating the tissues required by the RA simulation. For this purpose, a fuzzy c-means clustering algorithm combined with mathematical morphology operators and a geometric active contour-based approach is chosen. The segmentation process itself aims at operating with minimal user interaction, and the resulting model fits the requirements of the simulation. First results are shown for both male and female MRI of the pelvis.
Segmentation of lung fields using Chan-Vese active contour model in chest radiographs
NASA Astrophysics Data System (ADS)
Sohn, Kiwon
2011-03-01
A CAD tool for chest radiographs consists of several procedures, and the very first step is segmentation of the lung fields. We develop a novel methodology for segmentation of lung fields in chest radiographs that satisfies the following two requirements. First, we aim to develop a segmentation method that does not need a training stage with manual estimation of anatomical features in a large training dataset of images. Second, for ease of implementation, it is desirable to apply a well established model that is widely used for various image-partitioning practices. The Chan-Vese active contour model, which is based on the Mumford-Shah functional in the level set framework, is applied for segmentation of the lung fields. With the use of this model, segmentation of lung fields can be carried out without detailed prior knowledge of the radiographic anatomy of the chest, yet in some chest radiographs the trachea regions are unfavorably segmented out in addition to the lung field contours. To eliminate artifacts from the trachea, we locate the upper end of the trachea, find a vertical center line of the trachea and delineate it, and then brighten the trachea region to make it less distinctive. The segmentation process is finalized by subsequent morphological operations. We randomly select 30 images from the Japanese Society of Radiological Technology image database to test the proposed methodology, and the results are shown. We hope our segmentation technique can help to promote CAD tools, especially for emerging chest radiographic imaging techniques such as dual energy radiography and chest tomosynthesis.
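The core segmentation step is available off the shelf; a minimal sketch with scikit-image's chan_vese function, using a sample image in place of a chest radiograph.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.segmentation import chan_vese

# Stand-in grayscale image; a real chest radiograph would be loaded
# and intensity-normalized the same way.
image = img_as_float(data.camera())

# Chan-Vese active contour (Mumford-Shah based, level set framework):
# partitions the image into two regions of near-constant intensity,
# with no anatomical prior and no training stage.
segmentation = chan_vese(image, mu=0.25)

# `segmentation` is a boolean mask; in the lung-field application the
# trachea would still need the brightening/removal post-step described
# in the abstract, followed by morphological cleanup.
print(f"foreground fraction: {segmentation.mean():.2f}")
```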
Rashno, Abdolreza; Koozekanani, Dara D; Drayna, Paul M; Nazari, Behzad; Sadri, Saeed; Rabbani, Hossein; Parhi, Keshab K
2018-05-01
This paper presents a fully automated algorithm to segment fluid-associated (fluid-filled) and cyst regions in optical coherence tomography (OCT) retina images of subjects with diabetic macular edema. The OCT image is segmented using a novel neutrosophic transformation and a graph-based shortest path method. In the neutrosophic domain, an image is transformed into three sets: T (true), I (indeterminate), which represents noise, and F (false). This paper makes four key contributions. First, a new method is introduced to compute the indeterminacy set I, and a new correction operation is introduced to compute the set T in the neutrosophic domain. Second, a graph shortest-path method is applied in the neutrosophic domain to segment the inner limiting membrane and the retinal pigment epithelium as regions of interest (ROI), and the outer plexiform layer and inner segment myeloid as middle layers, using a novel definition of the edge weights. Third, a new cost function for cluster-based fluid/cyst segmentation in the ROI is presented, which also includes a novel approach to estimating the number of clusters in an automated manner. Fourth, the final fluid regions are obtained by ignoring very small regions and the regions between middle layers. The proposed method is evaluated using two publicly available datasets, Duke and Optima, and a third local dataset from the UMN clinic, which is available online. The proposed algorithm outperforms the previously proposed Duke algorithm by 8% with respect to the dice coefficient and by 5% with respect to precision on the Duke dataset, while achieving about the same sensitivity. Also, the proposed algorithm outperforms a prior method on the Optima dataset by 6%, 22%, and 23% with respect to the dice coefficient, sensitivity, and precision, respectively. Finally, the proposed algorithm also achieves sensitivity of 67.3%, 88.8%, and 76.7% for the Duke, Optima, and University of Minnesota (UMN) datasets, respectively.
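A sketch of a neutrosophic T/I/F transform in NumPy, following a common formulation from the neutrosophic image processing literature (local-mean membership, inhomogeneity-based indeterminacy); the paper's own definitions of T, I, and the correction operation may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neutrosophic_transform(img, window=5):
    """Map a grayscale image into neutrosophic T/I/F sets.

    T: normalized local-mean intensity (membership to "object")
    I: indeterminacy from local inhomogeneity |g - local mean|
    F: complement of T
    """
    img = img.astype(float)
    local_mean = uniform_filter(img, size=window)

    t = (local_mean - local_mean.min()) / (np.ptp(local_mean) + 1e-12)

    delta = np.abs(img - local_mean)
    i = (delta - delta.min()) / (np.ptp(delta) + 1e-12)

    f = 1.0 - t
    return t, i, f

rng = np.random.default_rng(0)
oct_bscan = rng.random((256, 512))  # stand-in for an OCT B-scan
T, I, F = neutrosophic_transform(oct_bscan)
```

The appeal of working in this domain is that noise is routed into I, so subsequent graph shortest-path and clustering steps can operate on a denoised T set.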
Wilke, Marko
2018-02-01
This dataset contains the regression parameters derived by analyzing segmented brain MRI images (gray matter and white matter) from a large population of healthy subjects, using a multivariate adaptive regression splines approach. A total of 1919 MRI datasets ranging in age from 1-75 years from four publicly available datasets (NIH, C-MIND, fCONN, and IXI) were segmented using the CAT12 segmentation framework, writing out gray matter and white matter images normalized using an affine-only spatial normalization approach. These images were then subjected to a six-step DARTEL procedure, employing an iterative non-linear registration approach and yielding increasingly crisp intermediate images. The resulting six datasets per tissue class were then analyzed using multivariate adaptive regression splines, using the CerebroMatic toolbox. This approach allows for flexibly modelling smoothly varying trajectories while taking into account demographic (age, gender) as well as technical (field strength, data quality) predictors. The resulting regression parameters described here can be used to generate matched DARTEL or SHOOT templates for a given population under study, from infancy to old age. The dataset and the algorithm used to generate it are publicly available at https://irc.cchmc.org/software/cerebromatic.php.
NASA Astrophysics Data System (ADS)
Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-03-01
The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multi-atlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates, since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates, and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support the potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
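The hypothesis that boundary errors average out in the mean organ dose is easy to probe numerically; in this sketch the dose map and masks are synthetic, with the automatic mask derived from the expert mask by random boundary-like perturbation.

```python
import numpy as np

def organ_dose_stats(dose_map, organ_mask):
    """Mean and peak dose over a segmented organ region."""
    doses = dose_map[organ_mask]
    return doses.mean(), doses.max()

rng = np.random.default_rng(3)
dose_map = rng.gamma(2.0, 1.0, size=(64, 64, 64))  # stand-in dose map
expert = rng.random((64, 64, 64)) > 0.8
# Flip ~0.5% of voxels to mimic random segmentation-boundary errors
auto = expert ^ (rng.random((64, 64, 64)) > 0.995)

mean_e, _ = organ_dose_stats(dose_map, expert)
mean_a, _ = organ_dose_stats(dose_map, auto)
print(f"mean-dose error: {100 * abs(mean_a - mean_e) / mean_e:.1f}%")
```

Even with visible mask disagreement, the mean-dose error stays small, which mirrors the study's observation that dose estimates tolerate random boundary errors better than spatial overlap metrics would suggest.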
Geraghty, John P; Grogan, Garry; Ebert, Martin A
2013-04-30
This study investigates the variation in segmentation of several pelvic anatomical structures on computed tomography (CT) between multiple observers and a commercial automatic segmentation method, in the context of quality assurance and evaluation during a multicentre clinical trial. CT scans of two prostate cancer patients ('benchmarking cases'), one high risk (HR) and one intermediate risk (IR), were sent to multiple radiotherapy centres for segmentation of prostate, rectum and bladder structures according to the TROG 03.04 "RADAR" trial protocol definitions. The same structures were automatically segmented using iPlan software for the same two patients, allowing structures defined by automatic segmentation to be quantitatively compared with those defined by multiple observers. A sample of twenty trial patient datasets were also used to automatically generate anatomical structures for quantitative comparison with structures defined by individual observers for the same datasets. There was considerable agreement amongst all observers and automatic segmentation of the benchmarking cases for bladder (mean spatial variations < 0.4 cm across the majority of image slices). Although there was some variation in interpretation of the superior-inferior (cranio-caudal) extent of the rectum, human-observer contours were typically within a mean of 0.6 cm of automatically defined contours. Prostate structures were more consistent for the HR case than the IR case, with all human observers segmenting a prostate with considerably more volume (mean +113.3%) than that automatically segmented. Similar results were seen across the twenty sample datasets, with disagreement between iPlan and observers dominant at the prostatic apex and the superior part of the rectum, which is consistent with observations made during quality assurance reviews during the trial. This study has demonstrated quantitative analysis for comparison of multi-observer segmentation studies. For automatic segmentation algorithms based on image registration, as in iPlan, it is apparent that agreement between observer and automatic segmentation will be a function of patient-specific image characteristics, particularly for anatomy with poor contrast definition. For this reason, it is suggested that automatic registration based on transformation of a single reference dataset adds a significant systematic bias to the resulting volumes, and their use in the context of a multicentre trial should be carefully considered.
IntellEditS: intelligent learning-based editor of segmentations.
Harrison, Adam P; Birkbeck, Neil; Sofka, Michal
2013-01-01
Automatic segmentation techniques, despite demonstrating excellent overall accuracy, can often produce inaccuracies in local regions. As a result, correcting segmentations remains an important task that is often laborious, especially when done manually for 3D datasets. This work presents a powerful tool called Intelligent Learning-Based Editor of Segmentations (IntellEditS) that minimizes user effort and further improves segmentation accuracy. The tool partners interactive learning with an energy-minimization approach to editing. Based on interactive user input, a discriminative classifier is trained and applied to the edited 3D region to produce soft voxel labeling. The labels are integrated into a novel energy functional along with the existing segmentation and image data. Unlike the state of the art, IntellEditS is designed to correct segmentation results represented not only as masks but also as meshes. In addition, IntellEditS accepts intuitive boundary-based user interactions. The versatility and performance of IntellEditS are demonstrated on both MRI and CT datasets consisting of varied anatomical structures and resolutions.
Discriminative parameter estimation for random walks segmentation.
Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan
2013-01-01
The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images, instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
Abdullah, Kamarul A; McEntee, Mark F; Reed, Warren; Kench, Peter L
2018-04-30
An ideal organ-specific insert phantom should be able to simulate the anatomical features with appropriate appearances in the resultant computed tomography (CT) images. This study investigated a 3D printing technology to develop a novel and cost-effective cardiac insert phantom derived from volumetric CT image datasets of an anthropomorphic chest phantom. Cardiac insert volumes were segmented from CT image datasets derived from an anthropomorphic chest phantom, the Lungman N-01 (Kyoto Kagaku, Japan). These segmented datasets were converted to a virtual 3D isosurface of a heart-shaped shell, while two other removable inserts were included using a computer-aided design (CAD) software program. This newly designed cardiac insert phantom was later printed using a fused deposition modelling (FDM) process via a Creatbot DM Plus 3D printer. Then, several selected filling materials, such as contrast media, oil, water and jelly, were loaded into designated spaces in the 3D-printed phantom. The 3D-printed cardiac insert phantom was positioned within the anthropomorphic chest phantom and 30 repeated CT acquisitions were performed using a multi-detector scanner at 120-kVp tube potential. Attenuation (Hounsfield Unit, HU) values were measured and compared to the image datasets of a real patient and the Catphan® 500 phantom. The output of the 3D-printed cardiac insert phantom was a solid acrylic plastic material, which was strong, light in weight and cost-effective. HU values of the filling materials were comparable to the image datasets of the real patient and the Catphan® 500 phantom. A novel and cost-effective cardiac insert phantom for an anthropomorphic chest phantom was developed using volumetric CT image datasets with a 3D printer. Hence, this suggests that the printing methodology could be applied to generate other phantoms for CT imaging studies. © 2018 The Authors. Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.
Dermoscopic Image Segmentation via Multistage Fully Convolutional Networks.
Bi, Lei; Kim, Jinman; Ahn, Euijoon; Kumar, Ashnil; Fulham, Michael; Feng, Dagan
2017-09-01
Segmentation of skin lesions is an important step in the automated computer aided diagnosis of melanoma. However, existing segmentation methods have a tendency to over- or under-segment the lesions and perform poorly when the lesions have fuzzy boundaries, low contrast with the background, inhomogeneous textures, or contain artifacts. Furthermore, the performance of these methods is heavily reliant on the appropriate tuning of a large number of parameters as well as the use of effective preprocessing techniques, such as illumination correction and hair removal. We propose to leverage fully convolutional networks (FCNs) to automatically segment the skin lesions. FCNs are a neural network architecture that achieves object detection by hierarchically combining low-level appearance information with high-level semantic information. We address the issue of FCNs producing coarse segmentation boundaries for challenging skin lesions (e.g., those with fuzzy boundaries and/or low difference in the textures between the foreground and the background) through a multistage segmentation approach in which multiple FCNs learn complementary visual characteristics of different skin lesions; early-stage FCNs learn coarse appearance and localization information while late-stage FCNs learn the subtle characteristics of the lesion boundaries. We also introduce a new parallel integration method to combine the complementary information derived from individual segmentation stages to achieve a final segmentation result that has accurate localization and well-defined lesion boundaries, even for the most challenging skin lesions. We achieved an average Dice coefficient of 91.18% on the ISBI 2016 Skin Lesion Challenge dataset and 90.66% on the PH2 dataset. Our extensive experimental results on two well-established public benchmark datasets demonstrate that our method is more effective than other state-of-the-art methods for skin lesion segmentation.
Multitask assessment of roads and vehicles network (MARVN)
NASA Astrophysics Data System (ADS)
Yang, Fang; Yi, Meng; Cai, Yiran; Blasch, Erik; Sullivan, Nichole; Sheaff, Carolyn; Chen, Genshe; Ling, Haibin
2018-05-01
Vehicle detection in wide area motion imagery (WAMI) has drawn increasing attention from the computer vision research community in recent decades. In this paper, we present a new architecture for vehicle detection on roads using a multi-task network, which is able to detect and segment vehicles, estimate their pose, and meanwhile yield road isolation for a given region. The multi-task network consists of three components: 1) vehicle detection, 2) vehicle and road segmentation, and 3) detection screening. The segmentation and detection components share the same backbone network and are trained jointly in an end-to-end way. Unlike background subtraction or frame differencing based methods, the proposed Multitask Assessment of Roads and Vehicles Network (MARVN) method can detect vehicles which are slowing down, stopped, and/or partially occluded in a single image. In addition, the method can eliminate detections located outside the road using the yielded road segmentation, so as to decrease the false positive rate. As few WAMI datasets have road mask and vehicle bounding box annotations, we extracted 512 frames from the WPAFB 2009 dataset and carefully refined the original annotations. The resulting dataset is named WAMI512. We extensively compare the proposed method with state-of-the-art methods on the WAMI512 dataset, and demonstrate superior performance in terms of efficiency and accuracy.
Detection of breast cancer in automated 3D breast ultrasound
NASA Astrophysics Data System (ADS)
Tan, Tao; Platel, Bram; Mus, Roel; Karssemeijer, Nico
2012-03-01
Automated 3D breast ultrasound (ABUS) is a novel imaging modality, in which motorized scans of the breasts are made with a wide transducer through a membrane under modest compression. The technology has gained high interest and may become widely used in screening of dense breasts, where sensitivity of mammography is poor. ABUS has a high sensitivity for detecting solid breast lesions. However, reading ABUS images is time consuming, and subtle abnormalities may be missed. Therefore, we are developing a computer aided detection (CAD) system to help reduce reading time and errors. In the multi-stage system we propose, segmentations of the breast and nipple are performed, providing landmarks for the detection algorithm. Subsequently, voxel features characterizing coronal spiculation patterns, blobness, contrast, and locations with respect to landmarks are extracted. Using an ensemble of classifiers, a likelihood map indicating potential malignancies is computed. Local maxima in the likelihood map are determined using a local maxima detector and form a set of candidate lesions in each view. These candidates are further processed in a second detection stage, which includes region segmentation, feature extraction and a final classification. Region segmentation is performed using a 3D spiral-scanning dynamic programming method. Region features include descriptors of shape, acoustic behavior and texture. Performance was determined using a 78-patient dataset with 93 images, including 50 malignant lesions. We used 10-fold cross-validation. Using FROC analysis we found that the system obtains a lesion sensitivity of 60% and 70% at 2 and 4 false positives per image respectively.
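The candidate-generation step, finding local maxima in a likelihood map, can be sketched with SciPy; the neighborhood size and threshold are illustrative parameters, not the system's tuned values.

```python
import numpy as np
from scipy import ndimage

def detect_candidates(likelihood, min_distance=5, threshold=0.5):
    """Find local maxima in a lesion-likelihood map.

    A voxel is a candidate when it equals the maximum within its
    neighborhood (non-maximum suppression) and exceeds a threshold.
    """
    footprint = np.ones((min_distance,) * likelihood.ndim, dtype=bool)
    local_max = ndimage.maximum_filter(likelihood, footprint=footprint)
    peaks = (likelihood == local_max) & (likelihood > threshold)
    coords = np.argwhere(peaks)
    scores = likelihood[peaks]
    order = np.argsort(scores)[::-1]  # rank candidates by likelihood
    return coords[order], scores[order]

# Stand-in smooth 3D likelihood map in [0, 1]
rng = np.random.default_rng(7)
likelihood = ndimage.gaussian_filter(rng.random((64, 64, 64)), sigma=3)
likelihood /= likelihood.max()
coords, scores = detect_candidates(likelihood)
print(f"{len(coords)} candidates; top score {scores[0]:.2f}")
```

Each ranked candidate would then pass to the second detection stage (region segmentation, feature extraction, and final classification), and sweeping the threshold traces out the FROC curve reported above.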
Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.
2017-01-01
Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324
Convalescing Cluster Configuration Using a Superlative Framework
Sabitha, R.; Karthik, S.
2015-01-01
Competent data mining methods are vital for discovering knowledge from the databases built as a result of the enormous growth of data. Various techniques of data mining are applied to obtain knowledge from these databases. Data clustering is one such descriptive data mining technique which guides in partitioning data objects into disjoint segments. The K-means algorithm is one of the most versatile among the various approaches used in data clustering, but the algorithm and its diverse adaptations suffer from certain performance problems. To overcome these issues, a superlative algorithm is proposed in this paper to perform data clustering. The specific features of the proposed algorithm are discretizing the dataset, thereby improving the accuracy of clustering, and adopting a binary search initialization method to generate the cluster centroids. The generated centroids are fed as input to the K-means approach, which iteratively segments the data objects into their respective clusters. The clustered results are measured for accuracy and validity. Experiments conducted by testing the approach on datasets from the UC Irvine Machine Learning Repository show that the accuracy and validity measures are higher than those of the other two approaches, namely simple K-means and the binary search method. Thus, the proposed approach shows that the discretization process improves the efficacy of descriptive data mining tasks. PMID:26543895
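The two ingredients highlighted in this abstract, discretizing the data and seeding K-means with centroids found by a binary-search-style partition, can be sketched as follows. This is a hedged stand-in using scikit-learn, with midpoints of the sorted first feature as seeds; the paper's exact initialization procedure may differ:

```python
# Hedged sketch: discretize the data, then seed K-means with centroids chosen
# by binary partitioning of the sorted values (a simple stand-in for the
# paper's binary search initialization).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
# Synthetic data: two clusters offset along both features.
X = rng.normal(size=(300, 2)) + rng.choice([0, 5], size=(300, 1))

# 1) Discretize each feature into ordinal bins.
Xd = KBinsDiscretizer(n_bins=8, encode="ordinal", strategy="uniform").fit_transform(X)

# 2) Binary-search-style seeds: midpoints of successive halves of the sorted data.
order = np.argsort(Xd[:, 0])
k = 2
seed_idx = [order[(2 * i + 1) * len(order) // (2 * k)] for i in range(k)]
init = Xd[seed_idx]

# 3) Run K-means from those seeds.
labels = KMeans(n_clusters=k, init=init, n_init=1).fit_predict(Xd)
print(np.bincount(labels))   # cluster sizes
```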
NASA Astrophysics Data System (ADS)
Masoud, Alaa; Koike, Katsuaki
2017-09-01
Detection and analysis of linear features related to surface and subsurface structures are necessary in natural resource exploration and earth surface instability assessment. Subjectivity in choosing the control parameters required by conventional methods of lineament detection may cause unreliable results. To reduce this ambiguity, we developed LINDA (LINeament Detection and Analysis), an integrated tool with a graphical user interface in Visual Basic. This tool automates the detection and analysis of linear features from grid data of topography (digital elevation model; DEM), gravity and magnetic surfaces, as well as data from remote sensing imagery. A simple interface with five display windows forms a user-friendly interactive environment. The interface facilitates grid data shading, detection and grouping of segments, lineament analyses for calculating strike and dip and estimating fault type, and interactive viewing of lineament geometry. Density maps of the center and intersection points of linear features (segments and lineaments) are also included. A systematic analysis of test DEM and Landsat 7 ETM+ imagery datasets in the North and South Eastern Deserts of Egypt is implemented to demonstrate the capability of LINDA and the correct use of its functions. Linear features from the DEM are superior to those from the imagery in terms of frequency, but both agree with the location and direction of V-shaped valleys and dykes and with reference fault data. Through the case studies, the applicability of LINDA to highlight dominant structural trends is demonstrated, which can aid understanding of geodynamic frameworks in any region.
Johnson, Michaela R.; Buell, Gary R.; Kim, Moon H.; Nardi, Mark R.
2007-01-01
This dataset was developed as part of the National Water-Quality Assessment (NAWQA) Program, Nutrient Enrichment Effects Topical (NEET) study for five study units distributed across the United States: Apalachicola-Chattahoochee-Flint River Basin, Central Columbia Plateau-Yakima River Basin, Central Nebraska Basins, Potomac River Basin and Delmarva Peninsula, and White, Great and Little Miami River Basins. One hundred forty-three stream reaches were examined as part of the NEET study conducted 2003-04. Stream segments, with lengths equal to the logarithm of the basin area, were delineated upstream from the downstream ends of the stream reaches with the use of digital orthophoto quarter quadrangles (DOQQ) or selected from the high-resolution National Hydrography Dataset (NHD). Use of the NHD was necessary when the stream was not distinguishable in the DOQQ because of dense tree canopy. The analysis area for each stream segment was defined by a buffer beginning at the segment extending to 250 meters lateral to the stream segment. Delineation of land use/land cover (LULC) map units within stream segment buffers was conducted using on-screen digitizing of riparian LULC classes interpreted from the DOQQ. LULC units were mapped using a classification strategy consisting of nine classes. National Wetlands Inventory (NWI) data were used to aid in wetland classification. Longitudinal transect sampling lines offset from the stream segments were generated and partitioned into the underlying LULC types. These longitudinal samples yielded the relative linear extent and sequence of each LULC type within the riparian zone at the segment scale. The resulting areal and linear LULC data filled in the spatial-scale gap between the 30-meter resolution of the National Land Cover Dataset and the reach-level habitat assessment data collected onsite routinely for NAWQA ecological sampling. The final data consisted of 12 geospatial datasets: LULC within 25 meters of the stream reach (polygon); LULC within 50 meters of the stream reach (polygon); LULC within 50 meters of the stream segment (polygon); LULC within 100 meters of the stream segment (polygon); LULC within 150 meters of the stream segment (polygon); LULC within 250 meters of the stream segment (polygon); frequency of gaps in woody vegetation LULC at the reach scale (arc); stream reaches (arc); longitudinal LULC at the reach scale (arc); frequency of gaps in woody vegetation LULC at the segment scale (arc); stream segments (arc); and longitudinal LULC at the segment scale (arc).
Geomorphic floodplain with organic matter (biomass) estimates for Fanno Creek, Oregon
Sobieszczyk, Steven
2011-01-01
Fanno Creek is a tributary to the Tualatin River and flows through parts of the southwest Portland metropolitan area. The stream is heavily influenced by urban runoff and shows the characteristically flashy streamflow and poor water quality commonly associated with urban streams. This dataset represents the geomorphic floodplain as derived from light detection and ranging (LiDAR) data and aerial photographic imagery. The floodplain represents current conditions, including both anthropogenic alterations and natural historic floodplain features. The floodplain dataset is divided into 13 reach segments and attributed with corresponding organic material load estimates for each reach.
Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L
2018-04-01
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
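The core operator named here, atrous convolution, is easy to demonstrate in isolation. The snippet below (PyTorch; the generic operator, not the full DeepLab/ASPP pipeline) shows that a dilated 3×3 kernel keeps the parameter count and output resolution of a standard 3×3 convolution while covering a 9×9 receptive field at dilation rate 4:

```python
# Minimal illustration of atrous (dilated) convolution: the dilation rate
# enlarges the field of view without adding parameters.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 65, 65)
conv_std = nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1)
conv_atrous = nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4)

# Same parameter count and same output size, but the dilated kernel spans a
# 9x9 window ((3 - 1) * 4 + 1 = 9) instead of 3x3.
assert conv_std.weight.shape == conv_atrous.weight.shape
print(conv_std(x).shape, conv_atrous(x).shape)
```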
Fast and robust segmentation of the striatum using deep convolutional neural networks.
Choi, Hongyoon; Jin, Kyong Hwan
2016-12-01
Automated segmentation of brain structures is an important task in structural and functional image analysis. We developed a fast and accurate method for striatum segmentation using deep convolutional neural networks (CNNs). T1 magnetic resonance (MR) images were used for our CNN-based segmentation, which requires neither image feature extraction nor nonlinear transformation. We employed two serial CNNs, a Global and a Local CNN: the Global CNN determined the approximate location of the striatum by performing a regression of input MR images fitted to smoothed segmentation maps of the striatum. From the output volume of the Global CNN, cropped MR volumes that included the striatum were extracted. The cropped MR volumes and the output volumes of the Global CNN were used as inputs to the Local CNN, which predicted the accurate label of every voxel. Segmentation results were compared with a widely used segmentation method, FreeSurfer. Our method showed a higher Dice Similarity Coefficient (DSC) (0.893±0.017 vs. 0.786±0.015) and precision score (0.905±0.018 vs. 0.690±0.022) than FreeSurfer-based striatum segmentation (p=0.06). Our approach was also tested on another independent dataset, on which it showed a high DSC (0.826±0.038) comparable with that of FreeSurfer. The segmentation performance of our proposed method was thus comparable with that of FreeSurfer, while the running time of our approach was approximately three seconds. We suggest a fast and accurate deep CNN-based segmentation for small brain structures which can be widely applied to brain image analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
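The Global-then-Local cascade described above amounts to locating a region of interest with a coarse network and handing a crop around it to a finer one. A minimal numpy sketch of that handoff (toy sizes, random stand-ins for the network outputs; the networks themselves are omitted) could look like this:

```python
# Sketch of the two-stage idea: a global pass proposes a location, then a
# local network classifies voxels inside the cropped region.
import numpy as np

def crop_around(volume, center, size=32):
    """Crop a cube of side `size` centered at `center`, clipped to bounds."""
    lo = [max(0, c - size // 2) for c in center]
    hi = [min(d, l + size) for l, d in zip(lo, volume.shape)]
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

mri = np.random.rand(128, 128, 128)
global_map = np.random.rand(128, 128, 128)     # stand-in for the Global CNN output
center = np.unravel_index(global_map.argmax(), global_map.shape)
patch = crop_around(mri, center)               # would be fed to the Local CNN
print(patch.shape)
```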
NASA Astrophysics Data System (ADS)
Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Sugiura, Toshihiko; Tanabe, Nobuhiro; Kusumoto, Masahiko; Eguchi, Kenji; Kaneko, Masahiro
2018-02-01
Chronic thromboembolic pulmonary hypertension (CTEPH) is characterized by obstruction of the pulmonary vasculature by residual organized thrombi. A morphological abnormality inside the mediastinum of CTEPH patients is enlargement of the pulmonary artery. This paper presents an automated assessment of aortic and main pulmonary arterial diameters for predicting CTEPH in low-dose CT lung screening. The distinctive feature of our method is that it segments the aorta and main pulmonary artery using both a prior probability and vascular directions, which were estimated from the mediastinal vascular region using the principal curvatures of a four-dimensional hypersurface. The method was applied to two datasets, 64 low-dose CT scans from lung cancer screening and 19 normal-dose CT scans of CTEPH patients, after a training phase with 121 low-dose CT scans. This paper demonstrates the effectiveness of our method for predicting CTEPH in low-dose CT screening.
Sensor-oriented feature usability evaluation in fingerprint segmentation
NASA Astrophysics Data System (ADS)
Li, Ying; Yin, Yilong; Yang, Gongping
2013-06-01
Existing fingerprint segmentation methods usually process fingerprint images captured by different sensors with the same feature or feature set. We propose to improve fingerprint segmentation results in view of the important fact that images from different sensors have different characteristics for segmentation. Feature usability evaluation means evaluating the usability of features in order to find a personalized feature or feature set for each sensor and thereby improve segmentation performance. The need for feature usability evaluation in fingerprint segmentation is raised and analyzed as a new issue. To address this issue, we present a decision-tree-based feature-usability evaluation method, which utilizes the C4.5 decision tree algorithm to evaluate and pick the most suitable feature or feature set for fingerprint segmentation from a typical candidate feature set. We apply the novel method to the FVC2002 database of fingerprint images, which were acquired by four different sensors and technologies. Experimental results show that the accuracy of segmentation is improved, and the time consumption for feature extraction is dramatically reduced with the selected feature(s).
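A hedged sketch of decision-tree-based feature usability evaluation is shown below. Note that scikit-learn implements CART rather than C4.5, so the tree's feature importances only approximate the paper's C4.5-based ranking, and the feature names are invented for illustration:

```python
# Rank candidate segmentation features per sensor by training a decision
# tree on labeled blocks and reading off feature importances (CART stand-in
# for C4.5; synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Per-block candidate features, e.g. mean, variance, coherence (illustrative).
X = rng.random((1000, 3))
y = (X[:, 1] + 0.1 * rng.random(1000) > 0.5).astype(int)  # foreground/background

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
for name, imp in zip(["mean", "variance", "coherence"], tree.feature_importances_):
    print(f"{name}: {imp:.3f}")   # pick the top feature(s) for this sensor
```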
Wang, Jinke; Cheng, Yuanzhi; Guo, Changyong; Wang, Yadong; Tamura, Shinichi
2016-05-01
We propose a fully automatic 3D segmentation framework to segment the liver from abdominal CT images in challenging cases with low contrast between adjacent organs and the presence of pathologies. First, all of the atlases in the selected training datasets are weighted by calculating the similarities between the atlases and the test image, to dynamically generate a subject-specific probabilistic atlas for the test image. The most likely liver region of the test image is further determined based on the generated atlas. A rough segmentation is obtained by a maximum a posteriori classification of the probability map, and the final liver segmentation is produced by a shape-intensity prior level set in the most likely liver region. Our method is evaluated and demonstrated on 25 test CT datasets from our partner site, and its results are compared with two state-of-the-art liver segmentation methods. Moreover, our performance results on 10 MICCAI test datasets were submitted to the organizers for comparison with the other automatic algorithms. Using the 25 test CT datasets, the average symmetric surface distance is [Formula: see text] mm (range 0.62-2.12 mm), the root mean square symmetric surface distance error is [Formula: see text] mm (range 0.97-3.01 mm), and the maximum symmetric surface distance error is [Formula: see text] mm (range 12.73-26.67 mm) by our method. Our method on the 10 MICCAI test datasets ranks 10th among all 47 automatic algorithms on the site as of July 2015. Quantitative results, as well as qualitative comparisons of segmentations, indicate that our method is a promising tool for efficient liver segmentation. The applicability of the proposed method to some challenging clinical problems and to the segmentation of the liver is demonstrated with good results in both quantitative and qualitative experiments. This study suggests that the proposed framework can be good enough to replace the time-consuming and tedious slice-by-slice manual segmentation approach.
A novel Iterative algorithm to text segmentation for web born-digital images
NASA Astrophysics Data System (ADS)
Xu, Zhigang; Zhu, Yuesheng; Sun, Ziqiang; Liu, Zhen
2015-07-01
Since web born-digital images have low resolution and dense text atoms, text region over-merging and missed detection are still two open issues to be addressed. In this paper, a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSER) with diminishing thresholds and categorized into different groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap checking method, the final well-segmented text regions are selected from these groups across all iterations. Experiments have been carried out on the web born-digital image datasets used for the robust reading competitions in ICDAR 2011 and 2013, and the results demonstrate that our proposed scheme can significantly reduce both the number of over-merged regions and the loss rate of target atoms; the overall performance outperforms the best of the methods reported in the two competitions in terms of recall rate and F-score, at the cost of slightly higher computational complexity.
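The iteration over MSER detections with diminishing thresholds might be sketched as below, using OpenCV. The delta parameter is used here as a stand-in sensitivity knob (its exact keyword name can vary across OpenCV versions), and the similarity-graph grouping and rule-based filtering stages are omitted:

```python
# Sketch of iterative MSER extraction with progressively more sensitive
# passes (stand-in for the paper's diminishing thresholds).
import cv2
import numpy as np

# Synthetic grayscale image with some dark text on a white background.
gray = np.full((80, 240), 255, np.uint8)
cv2.putText(gray, "web text", (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, 0, 2)

candidates = []
for delta in (15, 10, 5):                 # decreasing stability threshold
    mser = cv2.MSER_create(delta=delta)
    regions, bboxes = mser.detectRegions(gray)
    candidates.extend(bboxes.tolist() if len(bboxes) else [])
print(f"{len(candidates)} candidate text regions across iterations")
```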
Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L
2018-01-01
The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based feature vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 ImageCLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without and with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
NASA Astrophysics Data System (ADS)
Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.
2015-04-01
Google Earth's high-resolution imagery typically takes months to process before new images appear online. This is a time-consuming and slow process, especially for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that occur across time series, so that only regions with differences are updated. In our system, aerial images from the Massachusetts road and building open datasets and the Saitama district datasets are used as input. Semantic segmentation, a pixel-wise classification of images, is then applied to the input images using a deep neural network. Deep neural networks are employed because they are not only efficient in learning highly discriminative image features such as roads and buildings, but also partially robust to incomplete and poorly registered target maps. Aerial images containing semantic information are then stored as a database in the 5D World Map and serve as ground-truth images. This system visualizes multimedia data in five dimensions: three spatial dimensions, one temporal dimension, and one degenerated dimension combining semantics and colour. Next, a ground-truth image chosen from the 5D World Map database and a new aerial image with the same spatial information but a different time stamp are compared via a difference extraction method. The map is updated only where local changes have occurred. Hence, map updating becomes cheaper, faster and more effective, especially for post-disaster applications, by leaving unchanged regions alone and updating only changed regions.
Vessel segmentation in 4D arterial spin labeling magnetic resonance angiography images of the brain
NASA Astrophysics Data System (ADS)
Phellan, Renzo; Lindner, Thomas; Falcão, Alexandre X.; Forkert, Nils D.
2017-03-01
4D arterial spin labeling magnetic resonance angiography (4D ASL MRA) is a non-invasive and safe modality for cerebrovascular imaging procedures. It uses the patient's magnetically labeled blood as an intrinsic contrast agent, so that no external contrast medium is required. It provides important 3D structure and blood flow information, but a sufficient cerebrovascular segmentation is important since it can help clinicians to analyze and diagnose vascular diseases faster and with higher confidence compared to simple visual rating of raw ASL MRA images. This work presents a new method for automatic cerebrovascular segmentation in 4D ASL MRA images of the brain. In this process, images are denoised, corresponding label/control image pairs of the 4D ASL MRA sequences are subtracted, and temporal intensity averaging is used to generate a static representation of the vascular system. After that, sets of vessel and background seeds are extracted and provided as input to the image foresting transform algorithm to segment the vascular system. Four 4D ASL MRA datasets of the brain arteries of healthy subjects and corresponding time-of-flight (TOF) MRA images were available for this preliminary study. For evaluation of the segmentation results of the proposed method, the cerebrovascular system was automatically segmented in the high-resolution TOF MRA images using a validated algorithm and the segmentation results were registered to the 4D ASL datasets. Corresponding segmentation pairs were compared using the Dice similarity coefficient (DSC). On average, a DSC of 0.9025 was achieved, indicating that vessels can be extracted successfully from 4D ASL MRA datasets by the proposed segmentation method.
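The evaluation metric used here, the Dice similarity coefficient, is DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentations A and B. A small numpy implementation, with random volumes standing in for the ASL result and the registered TOF reference:

```python
# Dice similarity coefficient between two binary segmentations.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

seg_asl = np.random.rand(64, 64, 64) > 0.7    # stand-in for the ASL result
seg_tof = np.random.rand(64, 64, 64) > 0.7    # stand-in for the TOF reference
print(f"DSC = {dice(seg_asl, seg_tof):.4f}")
```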
Automatic detection of blood vessels in retinal images for diabetic retinopathy diagnosis.
Raja, D Siva Sundhara; Vasuki, S
2015-01-01
Diabetic retinopathy (DR) is a leading cause of vision loss in diabetic patients. DR is mainly caused by damage to the retinal blood vessels of diabetic patients. It is essential to detect and segment the retinal blood vessels for DR detection and diagnosis, which helps prevent early vision loss in diabetic patients. In this paper, we propose computer-aided automatic detection and segmentation of blood vessels through elimination of the optic disc (OD) region in the retina. The OD region is segmented using an anisotropic diffusion filter, and subsequently the retinal blood vessels are detected using mathematical binary morphological operations. The proposed methodology is tested on two different publicly available datasets and achieved 93.99% sensitivity, 98.37% specificity, and 98.08% accuracy on the DRIVE dataset and 93.6% sensitivity, 98.96% specificity, and 95.94% accuracy on the STARE dataset, respectively.
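A hedged sketch of the morphological step: a black-hat transform highlights thin dark vessels against the brighter background of the fundus green channel, and Otsu thresholding binarizes the response. The kernel size and the use of the green channel are illustrative assumptions, not the paper's exact pipeline:

```python
# Vessel enhancement by binary morphology: black-hat emphasizes dark, thin
# structures smaller than the structuring element.
import cv2
import numpy as np

green = np.random.randint(0, 256, (584, 565), np.uint8)  # stand-in green channel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
blackhat = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, kernel)
_, vessels = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(vessels.dtype, vessels.shape)   # binary vessel mask
```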
A shape prior-based MRF model for 3D masseter muscle segmentation
NASA Astrophysics Data System (ADS)
Majeed, Tahir; Fundana, Ketut; Lüthi, Marcel; Beinemann, Jörg; Cattin, Philippe
2012-02-01
Medical image segmentation is generally an ill-posed problem that can only be solved by incorporating prior knowledge. The ambiguities arise due to the presence of noise, weak edges, imaging artifacts, inhomogeneous interiors, and adjacent anatomical structures having an intensity profile similar to the target structure. In this paper we propose a novel approach to segment the masseter muscle in CT datasets using graph-cut with additional 3D shape priors, which is robust to noise, artifacts, and shape deformations. The main contribution of this paper is in translating the 3D shape knowledge into both the unary and pairwise potentials of a Markov Random Field (MRF). The segmentation task is cast as a Maximum-A-Posteriori (MAP) estimation of the MRF. Graph-cut is then used to obtain the global minimum, which results in the segmentation of the masseter muscle. The method is tested on 21 CT datasets of the masseter muscle, which are noisy, with almost all possessing mild to severe imaging artifacts such as the high-density artifacts caused by, for example, the very common dental fillings and dental implants. We show that the proposed technique produces clinically acceptable results for the challenging problem of muscle segmentation, and we further provide a quantitative and qualitative comparison with other methods. We statistically show that adding the shape prior to both the unary and pairwise potentials increases the robustness of the proposed method on noisy datasets.
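The MRF formulation above minimizes an energy that sums unary (per-voxel) and pairwise (neighbor-smoothness) potentials, with the shape prior folded into both terms. The paper obtains the exact MAP labeling with graph-cut; as a self-contained stand-in, the sketch below minimizes the same form of energy with a few sweeps of iterated conditional modes on random unary costs:

```python
# Binary MRF with Potts pairwise term, minimized by iterated conditional
# modes (ICM). Graph-cut would find the exact optimum for this energy.
import numpy as np

def icm(unary, beta=1.0, sweeps=5):
    """unary: (H, W, 2) costs for labels {0, 1}; beta weights smoothness."""
    labels = unary.argmin(axis=2)
    H, W = labels.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nbrs = [labels[a, b] for a, b in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < H and 0 <= b < W]
                costs = [unary[i, j, l] + beta * sum(l != n for n in nbrs)
                         for l in (0, 1)]
                labels[i, j] = int(np.argmin(costs))
    return labels

unary = np.random.rand(32, 32, 2)   # e.g. intensity + shape-prior terms
print(icm(unary).sum(), "foreground pixels")
```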
Sparse intervertebral fence composition for 3D cervical vertebra segmentation
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian
2018-06-01
Statistical shape models are capable of extracting shape prior information and are usually utilized to assist the segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposes a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraints, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset of CT images from 20 patients. A quantitative comparison against the corresponding reference vertebral segmentations yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance and completely eliminates inter-process overlap.
NASA Technical Reports Server (NTRS)
Ganguly, Sangram; Basu, Saikat; Nemani, Ramakrishna R.; Mukhopadhyay, Supratik; Michaelis, Andrew; Votava, Petr
2016-01-01
High resolution tree cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) tree cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g., continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agriculture Imagery Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. This data comes as image tiles (a total of a quarter million image scenes with 60 million pixels each) and has a total size of 65 terabytes for a single acquisition. Features extracted from the entire dataset would amount to 8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. Using the NASA Earth Exchange (NEX) initiative, we have developed an end-to-end architecture by integrating a segmentation module based on Statistical Region Merging, a classification algorithm using a Deep Belief Network, and a structured prediction algorithm using Conditional Random Fields that combines the results from the segmentation and classification modules to create per-pixel class labels. The training process is scaled up using the power of GPUs, and the prediction is scaled to the quarter million NAIP tiles spanning the whole of the Continental United States using the NEX HPC supercomputing cluster. An initial pilot over the state of California, spanning a total of 11,095 NAIP tiles covering a geographical area of 163,696 sq. miles, produced true positive rates of around 88 percent for fragmented forests and 74 percent for urban tree cover areas, with false positive rates lower than 2 percent for both landscapes.
NASA Astrophysics Data System (ADS)
Ganguly, S.; Basu, S.; Nemani, R. R.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.
2016-12-01
High resolution tree cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) tree cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g., continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agriculture Imagery Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. This data comes as image tiles (a total of a quarter million image scenes with 60 million pixels each) and has a total size of 65 terabytes for a single acquisition. Features extracted from the entire dataset would amount to 8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. Using the NASA Earth Exchange (NEX) initiative, we have developed an end-to-end architecture by integrating a segmentation module based on Statistical Region Merging, a classification algorithm using a Deep Belief Network, and a structured prediction algorithm using Conditional Random Fields that combines the results from the segmentation and classification modules to create per-pixel class labels. The training process is scaled up using the power of GPUs, and the prediction is scaled to the quarter million NAIP tiles spanning the whole of the Continental United States using the NEX HPC supercomputing cluster. An initial pilot over the state of California, spanning a total of 11,095 NAIP tiles covering a geographical area of 163,696 sq. miles, produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes.
Pre-operative segmentation of neck CT datasets for the planning of neck dissections
NASA Astrophysics Data System (ADS)
Cordes, Jeanette; Dornheim, Jana; Preim, Bernhard; Hertel, Ilka; Strauss, Gero
2006-03-01
For the pre-operative segmentation of CT neck datasets, we developed the software assistant NeckVision. The relevant anatomical structures for neck dissection planning can be segmented, and the resulting patient-specific 3D models are afterwards visualized in another software system for intervention planning. As a first step, we examined the appropriateness of elementary segmentation techniques based on gray values and contour information for extracting the structures in the neck region from CT data. Region growing, interactive watershed transformation and live-wire are employed for segmentation of the different target structures. We also examined which of the segmentation tasks can be automated. Based on this analysis, the software assistant NeckVision was developed to optimally support the workflow of image analysis for clinicians. The usability of NeckVision was tested in a first evaluation with four otorhinolaryngologists from the University Hospital of Leipzig, four computer scientists from the University of Magdeburg and two laymen in both fields.
Automated Segmentability Index for Layer Segmentation of Macular SD-OCT Images.
Lee, Kyungmoo; Buitendijk, Gabriëlle H S; Bogunovic, Hrvoje; Springelkamp, Henriët; Hofman, Albert; Wahle, Andreas; Sonka, Milan; Vingerling, Johannes R; Klaver, Caroline C W; Abràmoff, Michael D
2016-03-01
To automatically identify which spectral-domain optical coherence tomography (SD-OCT) scans will provide reliable automated layer segmentations for more accurate layer thickness analyses in population studies. Six hundred ninety macular SD-OCT image volumes (6.0 × 6.0 × 2.3 mm3) were obtained from one eye of each of 690 subjects (74.6 ± 9.7 [mean ± SD] years, 37.8% male) randomly selected from the population-based Rotterdam Study. The dataset consisted of 420 OCT volumes with successful automated retinal nerve fiber layer (RNFL) segmentations obtained from our previously reported graph-based segmentation method and 270 volumes with failed segmentations. To evaluate the reliability of the layer segmentations, we developed a new metric, the segmentability index SI, which is obtained from a random forest regressor based on 12 features using OCT voxel intensities, edge-based costs, and on-surface costs. The SI was compared with the well-known quality indices, quality index (QI) and maximum tissue contrast index (mTCI), using receiver operating characteristic (ROC) analysis. The 95% confidence interval (CI) is 0.621 to 0.805 for the QI with an area under the curve (AUC) of 0.713, 0.673 to 0.838 for the mTCI with AUC 0.756, and 0.784 to 0.920 for the SI with AUC 0.852. The SI AUC is significantly larger than either the QI or mTCI AUC (P < 0.01). The segmentability index SI is well suited to identify SD-OCT scans for which successful automated intraretinal layer segmentations can be expected. Interpreting the quantification of SD-OCT images requires the underlying segmentation to be reliable, but standard SD-OCT quality metrics do not predict which segmentations are reliable and which are not. The segmentability index SI presented in this study does allow reliable segmentations to be identified, which is important for more accurate layer thickness analyses in research and population studies.
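The segmentability index can be pictured as a regression problem: 12 per-scan features in, a reliability score out. A minimal scikit-learn sketch with synthetic features and targets (the real features summarize voxel intensities, edge-based costs, and on-surface costs) follows:

```python
# Random forest regressor mapping per-scan features to a segmentability
# score (synthetic data; feature semantics are stand-ins).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.random((690, 12))                          # 12 features per OCT volume
y = (X[:, 0] + 0.2 * rng.random(690)).clip(0, 1)   # stand-in reliability target

si_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
si = si_model.predict(X[:5])
print(np.round(si, 3))   # flag scans with low SI for manual review
```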
Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery.
Zhao, Yi; Ma, Jiale; Li, Xiaohui; Zhang, Jie
2018-02-27
An unmanned aerial vehicle (UAV) equipped with a global positioning system (GPS) can provide direct georeferenced imagery, mapping an area with high resolution. So far, the major difficulty in wildfire image classification has been the lack of unified identification marks: the fire features of color, shape and texture (smoke, flame, or both) and the background can vary significantly from one scene to another. Deep learning (e.g., a deep convolutional neural network, DCNN) is very effective in high-level feature learning; however, a substantial training image dataset is required to optimize its weights and coefficients. In this work, we propose a new saliency detection algorithm for fast location and segmentation of the core fire area in aerial images. As the proposed method can effectively avoid the feature loss caused by direct resizing, it is used in data augmentation and in the formation of a standard fire image dataset, 'UAV_Fire'. A 15-layered self-learning DCNN architecture named 'Fire_Net' is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with respect to its validation accuracy. The proposed architecture outperformed previous methods by achieving an overall accuracy of 98%. Furthermore, 'Fire_Net' guaranteed an average processing speed of 41.5 ms per image for real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 sampled images from wildfire news reports, all of which were accurately identified.
Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery
Zhao, Yi; Ma, Jiale; Li, Xiaohui
2018-01-01
An unmanned aerial vehicle (UAV) equipped with a global positioning system (GPS) can provide direct georeferenced imagery, mapping an area with high resolution. So far, the major difficulty in wildfire image classification has been the lack of unified identification marks: the fire features of color, shape and texture (smoke, flame, or both) and the background can vary significantly from one scene to another. Deep learning (e.g., a deep convolutional neural network, DCNN) is very effective in high-level feature learning; however, a substantial training image dataset is required to optimize its weights and coefficients. In this work, we propose a new saliency detection algorithm for fast location and segmentation of the core fire area in aerial images. As the proposed method can effectively avoid the feature loss caused by direct resizing, it is used in data augmentation and in the formation of a standard fire image dataset, ‘UAV_Fire’. A 15-layered self-learning DCNN architecture named ‘Fire_Net’ is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with respect to its validation accuracy. The proposed architecture outperformed previous methods by achieving an overall accuracy of 98%. Furthermore, ‘Fire_Net’ guaranteed an average processing speed of 41.5 ms per image for real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 sampled images from wildfire news reports, all of which were accurately identified. PMID:29495504
Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A
2015-07-01
Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels in retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic-algorithm-based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per-patch basis and 1.000 and 0.975, respectively, on a per-image basis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Volumetric multimodality neural network for brain tumor segmentation
NASA Astrophysics Data System (ADS)
Silvana Castillo, Laura; Alexandra Daza, Laura; Carlos Rivera, Luis; Arbeláez, Pablo
2017-11-01
Brain lesion segmentation is one of the hardest tasks in computer vision, particularly in the medical field. We present a convolutional neural network that produces a semantic segmentation of brain tumors and is capable of processing volumetric data along with information from multiple MRI modalities at the same time. This results in the ability to learn from small training datasets and highly imbalanced data. Our method is based on DeepMedic, the state of the art in brain lesion segmentation. We develop a new architecture with more convolutional layers, organized in three parallel pathways with different input resolutions, and additional fully connected layers. We tested our method on the 2015 BraTS Challenge dataset, reaching an average Dice coefficient of 84%, while the standard DeepMedic implementation reached 74%.
NASA Astrophysics Data System (ADS)
Leavens, Claudia; Vik, Torbjørn; Schulz, Heinrich; Allaire, Stéphane; Kim, John; Dawson, Laura; O'Sullivan, Brian; Breen, Stephen; Jaffray, David; Pekar, Vladimir
2008-03-01
Manual contouring of target volumes and organs at risk in radiation therapy is extremely time-consuming, in particular for treating the head-and-neck area, where a single patient treatment plan can take several hours to contour. As radiation treatment delivery moves towards adaptive treatment, the need for more efficient segmentation techniques will increase. We are developing a method for automatic model-based segmentation of the head and neck. This process can be broken down into three main steps: i) automatic landmark identification in the image dataset of interest, ii) automatic landmark-based initialization of deformable surface models to the patient image dataset, and iii) adaptation of the deformable models to the patient-specific anatomical boundaries of interest. In this paper, we focus on the validation of the first step of this method, quantifying the results of our automatic landmark identification method. We use an image atlas formed by applying thin-plate spline (TPS) interpolation to ten atlas datasets, using 27 manually identified landmarks in each atlas/training dataset. The principal variation modes returned by principal component analysis (PCA) of the landmark positions were used by an automatic registration algorithm, which sought the corresponding landmarks in the clinical dataset of interest using a controlled random search algorithm. With a run time of 60 seconds for the random search, a root mean square (rms) distance of 9.5 ± 0.6 mm to the ground-truth landmark positions was obtained for the identified landmarks. Automatic segmentation of the brain, mandible and brain stem, using the detected landmarks, is demonstrated.
National Hydrography Dataset Plus (NHDPlus)
The NHDPlus Version 1.0 is an integrated suite of application-ready geospatial data sets that incorporate many of the best features of the National Hydrography Dataset (NHD) and the National Elevation Dataset (NED). The NHDPlus includes a stream network (based on the 1:100,000-scale NHD), improved networking, naming, and value-added attributes (VAAs). NHDPlus also includes elevation-derived catchments (drainage areas) produced using a drainage-enforcement technique first broadly applied in New England, and thus dubbed the New England Method. This technique involves burning-in the 1:100,000-scale NHD and, when available, building walls using the national Watershed Boundary Dataset (WBD). The resulting modified digital elevation model (HydroDEM) is used to produce hydrologic derivatives that agree with the NHD and WBD. Over the last two years, an interdisciplinary team from the U.S. Geological Survey (USGS), the U.S. Environmental Protection Agency (USEPA), and contractors has found this method to produce the best-quality NHD catchments using an automated process. The VAAs include greatly enhanced capabilities for upstream and downstream navigation, analysis and modeling. Examples include: retrieve all flowlines (predominantly confluence-to-confluence stream segments) and catchments upstream of a given flowline using queries rather than by slower flowline-by-flowline navigation; retrieve flowlines by stream order; subset a stream level path sorted in hydrologic order for st
SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation.
Xue, Yuan; Xu, Tao; Zhang, Han; Long, L Rodney; Huang, Xiaolei
2018-05-03
Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: the critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only the gradients passed along by the critic, with the aim of minimizing the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013, SegAN gives performance comparable to the state-of-the-art for whole tumor and tumor core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor core segmentation; on BRATS 2015, SegAN achieves better performance than the state-of-the-art in both Dice score and precision.
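The multi-scale L1 loss at the heart of SegAN compares critic features of the image masked by the predicted map against critic features of the image masked by the ground truth, accumulating an L1 term at every critic layer. A toy PyTorch sketch (two invented critic layers, random inputs; not the SegAN implementation):

```python
# Multi-scale L1 loss: L1 distances between critic features of the
# prediction-masked and ground-truth-masked images, summed over layers.
import torch
import torch.nn as nn

critic = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.LeakyReLU()),
    nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU()),
])

def multiscale_l1(image, pred_mask, true_mask):
    fa, fb = image * pred_mask, image * true_mask   # masked inputs
    loss = 0.0
    for layer in critic:                            # accumulate per-scale L1
        fa, fb = layer(fa), layer(fb)
        loss = loss + torch.mean(torch.abs(fa - fb))
    return loss

img = torch.randn(1, 3, 64, 64)
print(multiscale_l1(img, torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)))
```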
Automated segmentation of oral mucosa from wide-field OCT images (Conference Presentation)
NASA Astrophysics Data System (ADS)
Goldan, Ryan N.; Lee, Anthony M. D.; Cahill, Lucas; Liu, Kelly; MacAulay, Calum; Poh, Catherine F.; Lane, Pierre
2016-03-01
Optical Coherence Tomography (OCT) can discriminate morphological tissue features important for oral cancer detection, such as the presence or absence of the basement membrane and the epithelial thickness. We previously reported an OCT system employing a rotary-pullback catheter capable of in vivo, rapid, wide-field (up to 90 × 2.5 mm2) imaging in the oral cavity. Due to the size and complexity of these OCT datasets, rapid automated image processing software that immediately displays important tissue features is required to facilitate prompt bed-side clinical decisions. We present an automated segmentation algorithm capable of detecting the epithelial surface and basement membrane in 3D OCT images of the oral cavity. The algorithm was trained using volumetric OCT data acquired in vivo from a variety of tissue types and histology-confirmed pathologies spanning normal through cancer (8 sites, 21 patients). The algorithm was validated using a second dataset of similar size and tissue diversity. We demonstrate application of the algorithm to an entire OCT volume to map epithelial thickness and detect the basement membrane over the tissue surface. These maps may be clinically useful for delineating pre-surgical tumor margins or for biopsy site guidance.
Road Traffic Anomaly Detection via Collaborative Path Inference from GPS Snippets.
Wang, Hongtao; Wen, Hui; Yi, Feng; Zhu, Hongsong; Sun, Limin
2017-03-09
A road traffic anomaly denotes a road segment that is anomalous in terms of the traffic flow of vehicles. Detecting road traffic anomalies from GPS (Global Positioning System) snippet data is becoming critical in urban computing since such anomalies often suggest underlying events. However, the noisy and sparse nature of GPS snippet data introduces multiple problems that make the detection of road traffic anomalies very challenging. To address these issues, we propose a two-stage solution consisting of two components: a Collaborative Path Inference (CPI) model and a Road Anomaly Test (RAT) model. The CPI model performs path inference incorporating both static and dynamic features into a Conditional Random Field (CRF). Dynamic context features are learned collaboratively from large-scale GPS snippet data via a tensor decomposition technique. The RAT model then calculates the anomalous degree of each road segment from the inferred fine-grained trajectories in given time intervals. We evaluated our method using a large-scale real-world dataset, which includes one month of GPS location data from more than eight thousand taxi cabs in Beijing. The evaluation results show the advantages of our method over other baseline techniques.
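The anomaly-scoring stage can be illustrated with a simple statistical stand-in for the RAT model: compare each road segment's inferred flow in a time interval against its historical distribution for that interval. The z-score rule and the synthetic counts below are assumptions for illustration:

```python
# Flag road segments whose current flow deviates strongly from history
# (z-score stand-in for the paper's Road Anomaly Test; synthetic data).
import numpy as np

rng = np.random.default_rng(3)
history = rng.poisson(lam=50, size=(30, 4))   # 30 days x 4 road segments
today = np.array([48, 95, 52, 11])            # inferred counts for one interval

z = (today - history.mean(axis=0)) / history.std(axis=0)
for seg, score in enumerate(np.abs(z)):
    if score > 3:
        print(f"road segment {seg}: anomalous (|z| = {score:.1f})")
```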
Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network.
Li, Yuexiang; Shen, Linlin
2018-02-11
Skin lesions, particularly melanoma, are a severe disease globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, accurate recognition of melanoma is extremely challenging due to low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, and other factors. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address the three main tasks emerging in the area of skin lesion image processing: lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and a coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating a distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracies of our frameworks: 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3.
Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung
2018-01-01
The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now much needed in unconstrained scenarios. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effects, and off-angles. Prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in visible light environments makes iris segmentation challenging due to the noise of visible light. Deep learning with convolutional neural networks (CNNs) has brought considerable breakthroughs in various applications. To address the iris segmentation issues in challenging situations with visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets from visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets. PMID:29748495
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation
NASA Astrophysics Data System (ADS)
Tobon-Gomez, Catalina; Sukno, Federico M.; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F.
2012-07-01
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.
Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung
2018-05-10
The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, accurate iris recognition is now much needed in unconstrained scenarios. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effects, and off-angles. Prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in visible light environments makes iris segmentation challenging due to the noise of visible light. Deep learning with convolutional neural networks (CNNs) has brought considerable breakthroughs in various applications. To address the iris segmentation issues in challenging situations with visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets from visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets.
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation.
Tobon-Gomez, Catalina; Sukno, Federico M; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F
2012-07-07
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.
NASA Astrophysics Data System (ADS)
Macher, H.; Landes, T.; Grussenmeyer, P.
2016-06-01
Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps, and it is consequently time-consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets rather than developing them on a single dataset whose particularities could bias the development. Indoor point clouds of different types of buildings are used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous occluding objects such as desks, computer equipment, home furnishings, and even wine barrels. For each dataset, the results are illustrated, and their analysis provides insight into the transferability of the developed approach for the indoor modelling of several types of buildings.
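Plane segmentation of this kind is often implemented with a RANSAC-style fit; the sketch below is a minimal NumPy version under that assumption. The tolerance, iteration count, and synthetic point cloud are illustrative, not the authors' parameters.

```python
# RANSAC plane fit: repeatedly fit a plane to 3 random points and keep the
# model with the most inliers. Thresholds here are invented for illustration.
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.02, seed=0):
    """Return (normal, d, inlier_mask) for the dominant plane n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                 # skip degenerate (collinear) samples
            continue
        normal = normal / norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic "floor" (z near 0) plus scattered clutter points:
pts = np.vstack([np.c_[np.random.rand(500, 2), 0.01 * np.random.randn(500)],
                 np.random.rand(100, 3)])
normal, d, mask = ransac_plane(pts)
print(mask.sum(), "points assigned to the dominant plane")
```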
Evaluation of catchment delineation methods for the medium-resolution National Hydrography Dataset
Johnston, Craig M.; Dewald, Thomas G.; Bondelid, Timothy R.; Worstell, Bruce B.; McKay, Lucinda D.; Rea, Alan; Moore, Richard B.; Goodall, Jonathan L.
2009-01-01
Different methods for determining catchments (incremental drainage areas) for stream segments of the medium-resolution (1:100,000-scale) National Hydrography Dataset (NHD) were evaluated by the U.S. Geological Survey (USGS), in cooperation with the U.S. Environmental Protection Agency (USEPA). The NHD is a comprehensive set of digital spatial data that contains information about surface-water features (such as lakes, ponds, streams, and rivers) of the United States. The need for NHD catchments was driven primarily by the goal to estimate NHD streamflow and velocity to support water-quality modeling. The application of catchments for this purpose also demonstrates the broader value of NHD catchments for supporting landscape characterization and analysis. Five catchment delineation methods were evaluated. Four of the methods use topographic information for the delineation of the NHD catchments. These methods include the Raster Seeding Method; two variants of a method first used in a USGS New England study, termed the 'New England Methods' (one variant used the Watershed Boundary Dataset (WBD) and the other did not); and the Outlet Matching Method. For these topographically based methods, the elevation data source was the 30-meter (m) resolution National Elevation Dataset (NED), as this was the highest resolution available for the conterminous United States and Hawaii. The fifth method evaluated, the Thiessen Polygon Method, uses distance to the nearest NHD stream segments to determine catchment boundaries. Catchments were generated using each method for NHD stream segments within six hydrologically and geographically distinct Subbasins to evaluate the applicability of the method across the United States. The five methods were evaluated by comparing the resulting catchments with the boundaries and the computed area measurements available from several verification datasets that were developed independently using manual methods. The results of the evaluation indicated that the two New England Methods provided the most accurate catchment boundaries. The New England Method with the WBD provided the most accurate results. The time and cost to implement and apply these automated methods were also considered in ultimately selecting the methods used to produce NHD catchments for the conterminous United States and Hawaii. This study was conducted by a joint USGS-USEPA team during the 2-year period that ended in September 2004. During the following 2-year period ending in the fall of 2006, the New England Methods were used to produce NHD catchments as part of a multiagency effort to generate the NHD streamflow and velocity estimates for a suite of integrated geospatial products known as 'NHDPlus.'
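The Thiessen Polygon Method lends itself to a compact illustration: each raster cell is assigned to its nearest stream segment, and catchments fall out of the nearest-neighbour query. The sketch below is a hypothetical NumPy/SciPy rendering of that idea with synthetic coordinates and segment IDs; it is not the USGS implementation.

```python
# Thiessen-style catchment assignment via nearest stream point (synthetic).
import numpy as np
from scipy.spatial import cKDTree

# Points sampled along three hypothetical stream segments, with segment IDs.
stream_pts = np.array([[0.1, 0.2], [0.2, 0.3], [0.7, 0.8], [0.8, 0.9], [0.5, 0.1]])
segment_id = np.array([1, 1, 2, 2, 3])
tree = cKDTree(stream_pts)

# A raster of cell centres covering the study area.
xs, ys = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
cells = np.c_[xs.ravel(), ys.ravel()]

_, nearest = tree.query(cells)                      # closest stream point
catchment = segment_id[nearest].reshape(xs.shape)   # per-cell catchment label
print(np.unique(catchment, return_counts=True))
```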
Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks.
Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng-Ann
2017-04-01
Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique ensures that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on the ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. Experimental results demonstrate the significant performance gains of the proposed framework, ranking first in classification and second in segmentation among 25 and 28 teams, respectively. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
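The degradation-countering mechanism the authors rely on is the residual (shortcut) connection. The following is a minimal PyTorch sketch of one residual block; the channel width and toy input are illustrative assumptions, not the paper's architecture.

```python
# Minimal residual block: the identity shortcut lets the layers learn a
# residual F(x) instead of the full mapping, easing very deep training.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # shortcut: add the input back

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # shape is preserved: [1, 64, 32, 32]
```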
Rectified factor networks for biclustering of omics data.
Clevert, Djork-Arné; Unterthiner, Thomas; Povysil, Gundula; Hochreiter, Sepp
2017-07-15
Biclustering has become a major tool for analyzing large datasets given as a matrix of samples times features and has been successfully applied in life sciences and e-commerce for drug design and recommender systems, respectively. Factor Analysis for Bicluster Acquisition (FABIA), one of the most successful biclustering methods, is a generative model that represents each bicluster by two sparse membership vectors: one for the samples and one for the features. However, FABIA is restricted to about 20 code units because of the high computational complexity of computing the posterior. Furthermore, code units are sometimes insufficiently decorrelated and sample membership is difficult to determine. We propose to use the recently introduced unsupervised Deep Learning approach Rectified Factor Networks (RFNs) to overcome the drawbacks of existing biclustering methods. RFNs efficiently construct very sparse, non-linear, high-dimensional representations of the input via their posterior means. RFN learning is a generalized alternating minimization algorithm based on the posterior regularization method which enforces non-negative and normalized posterior means. Each code unit represents a bicluster, where samples for which the code unit is active belong to the bicluster and features that have activating weights to the code unit belong to the bicluster. On 400 benchmark datasets and on three gene expression datasets with known clusters, RFN outperformed 13 other biclustering methods including FABIA. On data from the 1000 Genomes Project, RFN could identify DNA segments which indicate that interbreeding with other hominins had already started before the ancestors of modern humans left Africa. https://github.com/bioinf-jku/librfn. djork-arne.clevert@bayer.com or hochreit@bioinf.jku.at. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
3D segmentation of annulus fibrosus and nucleus pulposus from T2-weighted magnetic resonance images
NASA Astrophysics Data System (ADS)
Castro-Mateos, Isaac; Pozo, Jose M.; Eltes, Peter E.; Del Rio, Luis; Lazary, Aron; Frangi, Alejandro F.
2014-12-01
Computational medicine aims at employing personalised computational models in diagnosis and treatment planning. The use of such models to help physicians in finding the best treatment for low back pain (LBP) is becoming popular. One of the challenges of creating such models is to derive patient-specific anatomical and tissue models of the lumbar intervertebral discs (IVDs), as a prior step. This article presents a segmentation scheme that obtains accurate results irrespective of the degree of IVD degeneration, including pathological discs with protrusion or herniation. The segmentation algorithm, employing a novel feature selector, iteratively deforms an initial shape, which is projected first into a statistical shape model space and then into a B-Spline space to improve accuracy. The method was tested on an MR dataset of 59 patients suffering from LBP. The images follow a standard T2-weighted protocol in coronal and sagittal acquisitions. These two image volumes were fused in order to overcome large inter-slice spacing. The agreement between expert-delineated structures, used here as the gold standard, and our automatic segmentation was evaluated using the Dice Similarity Index and surface-to-surface distances, obtaining a mean error of 0.68 mm in the annulus segmentation and 1.88 mm in the nucleus, which are the best results relative to image resolution in the current literature.
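The projection of an evolving shape into the statistical shape model space can be pictured as expressing the shape in a truncated PCA basis and clamping the mode coefficients to plausible values. The NumPy sketch below uses a toy landmark model with an invented mode count; it illustrates only that step, under those assumptions, and does not reproduce the B-Spline refinement.

```python
# Project a shape onto a PCA statistical shape model (toy data throughout).
import numpy as np

def project_to_ssm(shape, mean_shape, modes, stds=None, clip=3.0):
    """shape, mean_shape: flattened landmarks (3N,); modes: (3N, k) basis."""
    b = modes.T @ (shape - mean_shape)        # mode coefficients
    if stds is not None:                      # clamp to plausible shapes
        b = np.clip(b, -clip * stds, clip * stds)
    return mean_shape + modes @ b

# Toy model: 10 landmarks (30 coordinates), 4 modes from random training shapes.
rng = np.random.default_rng(0)
train = rng.normal(size=(50, 30))
mean_shape = train.mean(axis=0)
u, s, vt = np.linalg.svd(train - mean_shape, full_matrices=False)
modes = vt[:4].T                              # (30, 4) principal modes
stds = s[:4] / np.sqrt(len(train) - 1)        # per-mode standard deviations

noisy = train[0] + 0.5 * rng.normal(size=30)
regularized = project_to_ssm(noisy, mean_shape, modes, stds=stds)
print(regularized.shape)                      # (30,) projected shape
```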
Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; McLaughlin, Robert A
2017-07-01
Automated left ventricular (LV) segmentation is crucial for efficient quantification of cardiac function and morphology to aid subsequent management of cardiac pathologies. In this paper, we parameterize the complete (all short axis slices and phases) LV segmentation task in terms of the radial distances between the LV centerpoint and the endo- and epicardial contours in polar space. We then utilize convolutional neural network regression to infer these parameters. Utilizing parameter regression, as opposed to conventional pixel classification, allows the network to inherently reflect domain-specific physical constraints. We have benchmarked our approach primarily against the publicly available left ventricle segmentation challenge (LVSC) dataset, which consists of 100 training and 100 validation cardiac MRI cases representing a heterogeneous mix of cardiac pathologies and imaging parameters across multiple centers. Our approach attained a 0.77 Jaccard index, which is the highest published overall result in comparison to other automated algorithms. To test general applicability, we also evaluated against the Kaggle Second Annual Data Science Bowl, where the evaluation metric was the indirect clinical measures of LV volume rather than direct myocardial contours. Our approach attained a Continuous Ranked Probability Score (CRPS) of 0.0124, which would have ranked tenth in the original challenge. With this we demonstrate the effectiveness of convolutional neural network regression paired with domain-specific features in clinical segmentation. Copyright © 2017 Elsevier B.V. All rights reserved.
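The polar parameterization is simple to picture: given the LV centerpoint and one regressed radial distance per angle, the contour is recovered directly. A small NumPy sketch with synthetic radii:

```python
# Convert regressed radial distances (one per fixed angle) into a contour.
import numpy as np

def radii_to_contour(center, radii):
    """Return (N, 2) contour points from per-angle radial distances."""
    angles = np.linspace(0.0, 2.0 * np.pi, len(radii), endpoint=False)
    return np.c_[center[0] + radii * np.cos(angles),
                 center[1] + radii * np.sin(angles)]

center = np.array([128.0, 120.0])          # hypothetical LV centerpoint
endo_radii = 30.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
contour = radii_to_contour(center, endo_radii)
print(contour.shape)                       # (64, 2) points in image coordinates
```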
Segmentation of histological images and fibrosis identification with a convolutional neural network.
Fu, Xiaohang; Liu, Tong; Xiong, Zhaohan; Smaill, Bruce H; Stiles, Martin K; Zhao, Jichao
2018-07-01
Segmentation of histological images is one of the most crucial tasks for many biomedical analyses involving quantification of certain tissue types, such as fibrosis via Masson's trichrome staining. However, challenges are posed by the high variability and complexity of structural features in such images, in addition to imaging artifacts. Further, the conventional approach of manual thresholding is labor-intensive, and highly sensitive to inter- and intra-image intensity variations. An accurate and robust automated segmentation method is of high interest. We propose and evaluate an elegant convolutional neural network (CNN) designed for segmentation of histological images, particularly those with Masson's trichrome stain. The network comprises 11 successive convolutional - rectified linear unit - batch normalization layers. It outperformed state-of-the-art CNNs on a dataset of cardiac histological images (labeling fibrosis, myocytes, and background) with a Dice similarity coefficient of 0.947. With 100 times fewer (only 300,000) trainable parameters than the state-of-the-art, our CNN is less susceptible to overfitting, and is efficient. Additionally, it retains image resolution from input to output, captures fine-grained details, and can be trained end-to-end smoothly. To the best of our knowledge, this is the first deep CNN tailored to the problem of concern, and may potentially be extended to solve similar segmentation tasks to facilitate investigations into pathology and clinical treatment. Copyright © 2018 Elsevier Ltd. All rights reserved.
Lu, Chao; Zheng, Yefeng; Birkbeck, Neil; Zhang, Jingdan; Kohlberger, Timo; Tietjen, Christian; Boettger, Thomas; Duncan, James S; Zhou, S Kevin
2012-01-01
In this paper, we present a novel method by incorporating information theory into the learning-based approach for automatic and accurate pelvic organ segmentation (including the prostate, bladder and rectum). We target 3D CT volumes that are generated using different scanning protocols (e.g., contrast and non-contrast, with and without implant in the prostate, various resolution and position), and the volumes come from largely diverse sources (e.g., diseased in different organs). Three key ingredients are combined to solve this challenging segmentation problem. First, marginal space learning (MSL) is applied to efficiently and effectively localize the multiple organs in the largely diverse CT volumes. Second, learning techniques with steerable features are applied for robust boundary detection, which enables the handling of highly heterogeneous texture patterns. Third, a novel information theoretic scheme is incorporated into the boundary inference process. The incorporation of the Jensen-Shannon divergence further drives the mesh to the best fit of the image, thus improving the segmentation performance. The proposed approach is tested on a challenging dataset containing 188 volumes from diverse sources. Our approach not only produces excellent segmentation accuracy, but also runs about eighty times faster than previous state-of-the-art solutions. The proposed method can be applied to CT images to provide visual guidance to physicians during computer-aided diagnosis, treatment planning and image-guided radiotherapy to treat cancers in the pelvic region.
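The Jensen-Shannon divergence that drives the boundary refinement can be computed directly from two intensity histograms. A self-contained NumPy sketch with synthetic histograms (this illustrates only the divergence, not the paper's boundary-inference machinery):

```python
# Jensen-Shannon divergence between two discrete intensity distributions.
import numpy as np

def js_divergence(p, q, eps=1e-12):
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Synthetic "inside" vs "outside" intensity samples near a boundary:
inside = np.histogram(np.random.normal(100, 10, 5000), bins=64, range=(0, 255))[0]
outside = np.histogram(np.random.normal(160, 20, 5000), bins=64, range=(0, 255))[0]
print(js_divergence(inside.astype(float), outside.astype(float)))
```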
NASA Astrophysics Data System (ADS)
Bell, L. R.; Dowling, J. A.; Pogson, E. M.; Metcalfe, P.; Holloway, L.
2017-01-01
Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect; however, they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated for a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC > 0.7, MASD < 9.3 mm) between the auto-segmented contours and CTV volumes.
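Two building blocks of this evaluation are easy to sketch: merging observer delineations into a thresholded consensus mask, and scoring overlap with the Dice similarity coefficient. The NumPy sketch below uses random masks as stand-ins for real CTVs, and the voting threshold is an illustrative choice.

```python
# Consensus mask from multiple observers, plus Dice overlap scoring.
import numpy as np

def consensus_mask(observer_masks, threshold=0.5):
    """Voxels delineated by at least `threshold` of the observers."""
    vote = np.mean(observer_masks, axis=0)
    return vote >= threshold

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(1)
observers = rng.random((8, 64, 64)) > 0.4        # eight observer masks (toy)
gold = consensus_mask(observers)
auto = rng.random((64, 64)) > 0.4                # a stand-in auto-segmentation
print(f"DSC = {dice(auto, gold):.3f}")
```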
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than other state-of-the-art anomaly detection methods, and is easy to implement in practice.
NASA Astrophysics Data System (ADS)
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Qian, Wei; Zheng, Bin
2016-03-01
Current commercialized CAD schemes have high false-positive (FP) detection rates and also have high correlations in positive lesion detection with radiologists. Thus, we recently investigated a new approach to improve the efficacy of applying CAD to assist radiologists in reading and interpreting screening mammograms. Namely, we developed a new global feature based CAD approach/scheme that can cue a warning sign on cases at high risk of being positive. In this study, we investigate the possibility of fusing global feature or case-based scores with the local or lesion-based CAD scores using an adaptive cueing method. We hypothesize that the information from the global feature extraction (features extracted from the whole breast regions) is different from, and can provide supplementary information to, the locally-extracted features (computed from the segmented lesion regions only). On a large and diverse full-field digital mammography (FFDM) testing dataset with 785 cases (347 negative and 438 cancer cases with masses only), we ran our lesion-based and case-based CAD schemes "as is" on the whole dataset. To assess the supplementary information provided by the global features, we used an adaptive cueing method to adaptively adjust the original CAD-generated detection scores (Sorg) of a detected suspicious mass region based on the computed case-based score (Scase) of the case associated with this detected region. Using the adaptive cueing method, better sensitivity results were obtained at lower FP rates (<= 1 FP per image). Namely, sensitivity increases (in the FROC curves) of up to 6.7% and 8.2% were obtained for the ROI- and case-based results, respectively.
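The adaptive cueing step can be pictured as a score fusion: each region-level score Sorg is shifted according to the case-level score Scase. The rule below is a deliberately simple illustrative choice, not the authors' published fusion function.

```python
# A toy adaptive-cueing fusion: region scores are boosted for globally
# high-risk cases and damped for low-risk ones. The linear rule and the
# `strength` constant are illustrative assumptions.
import numpy as np

def adaptive_cueing(s_org, s_case, strength=0.3):
    """Adjust region scores s_org using the case score s_case in [0, 1]."""
    adjusted = s_org + strength * (s_case - 0.5)
    return np.clip(adjusted, 0.0, 1.0)

region_scores = np.array([0.42, 0.66, 0.71])        # Sorg for detected regions
print(adaptive_cueing(region_scores, s_case=0.9))   # high-risk case: boosted
print(adaptive_cueing(region_scores, s_case=0.2))   # low-risk case: damped
```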
Heidari, Morteza; Khuzani, Abolfazl Zargari; Hollingsworth, Alan B; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin
2018-01-30
In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate advantages of applying a machine learning approach embedded with a locally preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes of 250 high risk cases in which cancer was detected in the subsequent mammography screening and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with the LPP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increased trend of adjusted odds ratios was also detected, in which odds ratios increased from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.
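The leave-one-case-out protocol itself is straightforward to sketch; below is a minimal scikit-learn version with a stand-in classifier and synthetic features. The LPP-based feature regeneration step is not reproduced.

```python
# Leave-one-case-out cross-validation: each case is held out exactly once
# while the classifier is refit on the rest. Features and labels are toys.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))      # per-case image features (toy size)
y = rng.integers(0, 2, size=40)   # 1 = high risk, 0 = low risk (synthetic)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=LeaveOneOut())
print("LOCO accuracy:", scores.mean())
```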
PRESEE: an MDL/MML algorithm to time-series stream segmenting.
Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie
2013-01-01
Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as the stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision and paid little attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with a state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed nearly tenfold. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from the ChinaFLUX sensor network data stream.
NASA Astrophysics Data System (ADS)
Liebel, L.; Körner, M.
2016-06-01
In optical remote sensing, spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations—e.g., segmentation or feature extraction—can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques, such as convolutional neural networks (CNN), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable using conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset, in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
Localized-atlas-based segmentation of breast MRI in a decision-making framework.
Fooladivanda, Aida; Shokouhi, Shahriar B; Ahmadinejad, Nasrin
2017-03-01
Breast-region segmentation is an important step for density estimation and Computer-Aided Diagnosis (CAD) systems in Magnetic Resonance Imaging (MRI). Detection of breast-chest wall boundary is often a difficult task due to similarity between gray-level values of fibroglandular tissue and pectoral muscle. This paper proposes a robust breast-region segmentation method which is applicable for both complex cases with fibroglandular tissue connected to the pectoral muscle, and simple cases with high contrast boundaries. We present a decision-making framework based on geometric features and support vector machine (SVM) to classify breasts in two main groups, complex and simple. For complex cases, breast segmentation is done using a combination of intensity-based and atlas-based techniques; however, only intensity-based operation is employed for simple cases. A novel atlas-based method, that is called localized-atlas, accomplishes the processes of atlas construction and registration based on the region of interest (ROI). Atlas-based segmentation is performed by relying on the chest wall template. Our approach is validated using a dataset of 210 cases. Based on similarity between automatic and manual segmentation results, the proposed method achieves Dice similarity coefficient, Jaccard coefficient, total overlap, false negative, and false positive values of 96.3, 92.9, 97.4, 2.61 and 4.77%, respectively. The localization error of the breast-chest wall boundary is 1.97 mm, in terms of averaged deviation distance. The achieved results prove that the suggested framework performs the breast segmentation with negligible errors and efficient computational time for different breasts from the viewpoints of size, shape, and density pattern.
Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi
2015-07-01
Resection of brain tumors is a tricky task in surgery due to its direct influence on the patients' survival rate. Determining the extent of tumor resection requires accurate estimation and comparison of tumor volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI). The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of tumor volume estimates. The accuracy of the method is validated by comparing the estimated volume using the proposed method with that of a gold standard. Segmentation by the active contour technique is found to be capable of detecting brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of tumor tissue and its surroundings. The results demonstrate that alpha shape theory is superior to other existing standard methods for precise volumetric measurement of tumors. Copyright © 2015 Elsevier Inc. All rights reserved.
ARCOCT: Automatic detection of lumen border in intravascular OCT images.
Cheimariotis, Grigorios-Aris; Chatzizisis, Yiannis S; Koutkias, Vassilis G; Toutouzas, Konstantinos; Giannopoulos, Andreas; Riga, Maria; Chouvarda, Ioanna; Antoniadis, Antonios P; Doulaverakis, Charalambos; Tsamboulatidis, Ioannis; Kompatsiaris, Ioannis; Giannoglou, George D; Maglaveras, Nicos
2017-11-01
Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, including features such as side branches and intraluminal blood presence. This paper presents ARCOCT, a segmentation method for fully-automatic detection of the lumen border in OCT images. ARCOCT relies on multiple, consecutive processing steps, accounting for image preparation, contour extraction and refinement. In particular, for contour extraction ARCOCT employs the transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue and, for contour refinement, local regression using weighted linear least squares and a 2nd-degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, including challenging cases with branches and artifacts. ARCOCT has been assessed in a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g. area, perimeter, radius, diameter, centroid, etc.) and closed contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method proved very efficient and close to the ground truth, exhibiting no statistically significant differences for most of the examined metrics. ARCOCT allows accurate and fully-automated lumen border detection in OCT images. Copyright © 2017 Elsevier B.V. All rights reserved.
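The contour-refinement idea, re-estimating each lumen-border sample from its angular neighbours with weighted least squares and a 2nd-degree polynomial, can be sketched compactly. The window size and weighting below are illustrative assumptions, not ARCOCT's settings.

```python
# Smooth radial contour samples r(theta) with local weighted quadratic fits.
import numpy as np

def local_quadratic_smooth(r, half_window=5):
    n, out = len(r), np.empty(len(r))
    for i in range(n):
        idx = np.arange(i - half_window, i + half_window + 1) % n  # wrap around
        x = np.arange(-half_window, half_window + 1, dtype=float)
        w = np.exp(-(x / half_window) ** 2)        # nearer samples weigh more
        coeffs = np.polyfit(x, r[idx], deg=2, w=w)  # weighted least squares
        out[i] = np.polyval(coeffs, 0.0)            # fitted value at the centre
    return out

theta = np.linspace(0, 2 * np.pi, 180, endpoint=False)
lumen = 2.0 + 0.1 * np.sin(3 * theta) + 0.05 * np.random.randn(180)
print(np.abs(local_quadratic_smooth(lumen) - lumen).max())  # residual noise
```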
Multiple hypotheses image segmentation and classification with application to dietary assessment.
Zhu, Fengqing; Bosch, Marc; Khanna, Nitin; Boushey, Carol J; Delp, Edward J
2015-01-01
We propose a method for dietary assessment to automatically identify and locate food in a variety of images captured during controlled and natural eating events. Two concepts are combined to achieve this: a set of segmented objects can be partitioned into perceptually similar object classes based on global and local features; and perceptually similar object classes can be used to assess the accuracy of image segmentation. These ideas are implemented by generating multiple segmentations of an image to select stable segmentations based on the classifier's confidence score assigned to each segmented image region. Automatic segmented regions are classified using a multichannel feature classification system. For each segmented region, multiple feature spaces are formed. Feature vectors in each of the feature spaces are individually classified. The final decision is obtained by combining class decisions from individual feature spaces using decision rules. We show improved accuracy of segmenting food images with classifier feedback.
Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura
2016-01-01
The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require a reliable cell image segmentation, ideally capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms that use an active contour model and a classification technique, serving as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results across several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments to each dataset.
The Registration and Segmentation of Heterogeneous Laser Scanning Data
NASA Astrophysics Data System (ADS)
Al-Durgham, Mohannad M.
Light Detection And Ranging (LiDAR) mapping has been emerging over the past few years as a mainstream tool for the dense acquisition of three-dimensional point data. Besides conventional mapping missions, LiDAR systems have proven to be very useful for a wide spectrum of applications such as forestry, structural deformation analysis, urban mapping, and reverse engineering. The wide application scope of LiDAR led to the development of many laser scanning technologies that are mountable on multiple platforms (i.e., airborne, mobile terrestrial, and tripod mounted), which caused variations in the characteristics and quality of the generated point clouds. As a result of the increased popularity and diversity of laser scanners, one should adequately address the heterogeneous LiDAR data post-processing (i.e., registration and segmentation) problems. Current LiDAR integration techniques do not take into account the varying nature of laser scans originating from various platforms. In this dissertation, the author proposes a methodology designed particularly for the registration and segmentation of heterogeneous LiDAR data. A data characterization and filtering step is proposed to populate the points' attributes and remove non-planar LiDAR points. Then, a modified version of the Iterative Closest Point (ICP), denoted the Iterative Closest Projected Point (ICPP), is designed for the registration of heterogeneous scans to remove any misalignments between overlapping strips. Next, a region-growing-based heterogeneous segmentation algorithm is developed to ensure the proper extraction of planar segments from the point clouds. Validation experiments show that the proposed heterogeneous registration can successfully align airborne and terrestrial datasets despite the great differences in their point density and noise level. In addition, similar tests have been conducted to examine the heterogeneous segmentation, and it is shown that one is able to identify common planar features in airborne and terrestrial data without resampling or manipulating the data in any way. The work presented in this dissertation provides a framework for the registration and segmentation of airborne and terrestrial laser scans, which has a positive impact on the completeness of the scanned features. Therefore, the derived products from these point clouds have higher accuracy, as seen in the full manuscript.
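For orientation, the classical ICP loop that the ICPP variant modifies alternates closest-point matching with a least-squares rigid update (the Kabsch solution). The NumPy/SciPy sketch below illustrates that baseline on synthetic points; the projection-based matching of ICPP itself is not reproduced.

```python
# Bare-bones ICP: closest-point matching + optimal rigid alignment (Kabsch).
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=20):
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(n_iter):
        _, idx = tree.query(src)                  # closest-point matches
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        h = (src - mu_s).T @ (matched - mu_t)     # cross-covariance matrix
        u, _, vt = np.linalg.svd(h)
        r = vt.T @ u.T                            # optimal rotation
        if np.linalg.det(r) < 0:                  # avoid reflections
            vt[-1] *= -1
            r = vt.T @ u.T
        t = mu_t - r @ mu_s
        src = src @ r.T + t                       # apply the rigid update
    return src

target = np.random.rand(200, 3)
angle = np.deg2rad(5)
rot = np.array([[np.cos(angle), -np.sin(angle), 0],
                [np.sin(angle),  np.cos(angle), 0],
                [0, 0, 1]])
source = target @ rot.T + 0.05                    # misaligned copy
print(np.abs(icp(source, target) - target).mean())  # residual misalignment
```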
Mexican sign language recognition using normalized moments and artificial neural networks
NASA Astrophysics Data System (ADS)
Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita
2014-09-01
This work presents a framework designed for Mexican Sign Language (MSL) recognition. A dataset of 24 static signs from the MSL was recorded in 5 different versions; it was captured using a digital camera under incoherent lighting conditions. Digital image processing was used to segment the hand gestures, and a uniform background was selected to avoid the use of gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of the gray-scaled signs; an Artificial Neural Network then performs the recognition, evaluated with 10-fold cross-validation in Weka. The best result achieved a recognition rate of 95.83%.
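Normalized central moments of a gray-scale image take only a few lines of NumPy; the sketch below computes the scale-invariant moments eta_pq commonly used as such features, on a random stand-in image.

```python
# Scale-invariant normalized central moments eta_pq of a 2-D image.
import numpy as np

def normalized_moment(img, p, q):
    y, x = np.mgrid[: img.shape[0], : img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid
    mu = ((x - xc) ** p * (y - yc) ** q * img).sum()        # central moment
    return mu / m00 ** (1 + (p + q) / 2.0)                  # normalization

img = np.random.rand(64, 64)          # stand-in for a segmented sign image
features = [normalized_moment(img, p, q)
            for p in range(3) for q in range(3) if p + q >= 2]
print(np.round(features, 5))          # feature vector fed to a classifier
```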
Inferior vena cava segmentation with parameter propagation and graph cut.
Yan, Zixu; Chen, Feng; Wu, Fa; Kong, Dexing
2017-09-01
The inferior vena cava (IVC) is one of the vital veins inside the human body. Accurate segmentation of the IVC from contrast-enhanced CT images is of great importance. This extraction not only helps the physician understand its quantitative features such as blood flow and volume, but is also helpful during hepatic preoperative planning. However, manual delineation of the IVC is time-consuming and poorly reproducible. In this paper, we propose a novel method to segment the IVC with minimal user interaction. The proposed method performs the segmentation block by block between user-specified beginning and end masks. At each stage, the proposed method builds the segmentation model based on information from image regional appearances, image boundaries, and a prior shape. The intensity range and the prior shape for this segmentation model are estimated based on the segmentation result from the last block, or from the user-specified beginning mask at the first stage. Then, the proposed method minimizes the energy function and generates the segmentation result for the current block using graph cut. Finally, a backward tracking step from the end of the IVC is performed if necessary. We have tested our method on 20 clinical datasets and compared our method to three other vessel extraction approaches. The evaluation was performed using three quantitative metrics: the Dice coefficient (Dice), the mean symmetric distance (MSD), and the Hausdorff distance (MaxD). The proposed method has achieved a Dice of [Formula: see text], an MSD of [Formula: see text] mm, and a MaxD of [Formula: see text] mm, respectively, in our experiments. The proposed approach can achieve a sound performance with a relatively low computational cost and minimal user interaction. The proposed algorithm has high potential to be applied in clinical applications in the future.
A segmentation approach for a delineation of terrestrial ecoregions
NASA Astrophysics Data System (ADS)
Nowosad, J.; Stepinski, T.
2017-12-01
Terrestrial ecoregions are the result of regionalization of land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) are a global classification of 250-meter cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. Original raster datasets of the four variables are first transformed into regular grids of square-sized blocks of their cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by a uniform pattern of land cover, soils, landforms, climate, and, by inference, by a uniform ecosystem. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database each ecoregion/segment is described by numerous attributes which make it a valuable GIS resource for global ecological and conservation studies.
NASA Astrophysics Data System (ADS)
Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.
2016-03-01
Remodeling of the myocardium is associated with increased risk of arrhythmia and heart failure. Our objective is to automatically identify regions of fibrotic myocardium, dense collagen, and adipose tissue, which can serve as a way to guide radiofrequency ablation therapy or endomyocardial biopsies. Using computer vision and machine learning, we present an automated algorithm to classify tissue compositions from cardiac optical coherence tomography (OCT) images. Three-dimensional OCT volumes were obtained from 15 human hearts ex vivo within 48 hours of donor death (source: NDRI). We first segmented B-scans using a graph searching method. We estimated the boundary of each region by minimizing a cost function, which consisted of intensity, gradient, and contour smoothness. Then, features, including texture analysis, optical properties, and statistics of high moments, were extracted. We used a statistical model, the relevance vector machine, and trained it with the aforementioned features to classify tissue compositions. To validate our method, we applied our algorithm to 77 volumes. The datasets for validation were manually segmented and classified by two investigators who were blind to our algorithm results and identified the tissues based on trichrome histology and pathology. The difference between automated and manual segmentation was 51.78 +/- 50.96 μm. Experiments showed that the attenuation coefficients of dense collagen were significantly different from other tissue types (P < 0.05, ANOVA). Importantly, myocardial fibrosis tissues were different from normal myocardium in entropy and kurtosis. The tissue types were classified with an accuracy of 84%. The results show good agreement with histology.
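The high-moment statistics mentioned (entropy and kurtosis) are directly computable per region of interest; a short SciPy sketch on synthetic OCT-like intensities:

```python
# Entropy and kurtosis features for a region of interest (synthetic data).
import numpy as np
from scipy.stats import entropy, kurtosis

roi = np.random.gamma(shape=2.0, scale=30.0, size=(64, 64))  # stand-in ROI
hist, _ = np.histogram(roi, bins=64, density=True)
print("entropy:", entropy(hist + 1e-12))   # histogram (Shannon) entropy
print("kurtosis:", kurtosis(roi.ravel()))  # heaviness of the intensity tails
```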
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
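The linear scale-space at the heart of this method is simply the image convolved with Gaussian kernels of varying standard deviation; a minimal SciPy sketch of a coarse-to-fine schedule follows (the simulated-annealing contour fitting is not reproduced).

```python
# Linear scale-space: Gaussian smoothing at decreasing standard deviations.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(128, 128)        # stand-in for an MR slice
scales = [8.0, 4.0, 2.0, 1.0, 0.0]      # coarse-to-fine sigma schedule
scale_space = [gaussian_filter(image, sigma=s) if s > 0 else image
               for s in scales]
for s, im in zip(scales, scale_space):
    # Larger sigma suppresses small-scale detail (and noise): std shrinks.
    print(f"sigma={s:4.1f}  intensity std={im.std():.4f}")
```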
Validation of semi-automatic segmentation of the left atrium
NASA Astrophysics Data System (ADS)
Rettmann, M. E.; Holmes, D. R., III; Camp, J. J.; Packer, D. L.; Robb, R. A.
2008-03-01
Catheter ablation therapy has become increasingly popular for the treatment of left atrial fibrillation. The effect of this treatment on left atrial morphology, however, has not yet been completely quantified. Initial studies have indicated a decrease in left atrial size with a concomitant decrease in pulmonary vein diameter. In order to effectively study if catheter based therapies affect left atrial geometry, robust segmentations with minimal user interaction are required. In this work, we validate a method to semi-automatically segment the left atrium from computed-tomography scans. The first step of the technique utilizes seeded region growing to extract the entire blood pool including the four chambers of the heart, the pulmonary veins, aorta, superior vena cava, inferior vena cava, and other surrounding structures. Next, the left atrium and pulmonary veins are separated from the rest of the blood pool using an algorithm that searches for thin connections between user defined points in the volumetric data or on a surface rendering. Finally, pulmonary veins are separated from the left atrium using a three-dimensional tracing tool. A single user segmented three datasets three times using both the semi-automatic technique and manual tracing. The user interaction time for the semi-automatic technique was approximately forty-five minutes per dataset, while the manual tracing required between four and eight hours per dataset depending on the number of slices. A truth model was generated using a simple voting scheme on the repeated manual segmentations. A second user segmented each of the nine datasets using the semi-automatic technique only. Several metrics were computed to assess the agreement between the semi-automatic technique and the truth model, including percent differences in left atrial volume, DICE overlap, and mean distance between the boundaries of the segmented left atria. Overall, the semi-automatic approach was demonstrated to be repeatable within and between raters, and accurate when compared to the truth model. Finally, we generated a visualization to assess the spatial variability of the segmentation errors between the semi-automatic approach and the truth model. The visualization demonstrates that the highest errors occur at the boundaries between the left atrium and pulmonary veins as well as the left atrium and left atrial appendage. In conclusion, we describe a semi-automatic approach for left atrial segmentation that demonstrates repeatability and accuracy, with the advantage of a significant reduction in user interaction time.
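The first step, seeded region growing over the blood pool, has a compact textbook form: starting from a seed voxel, accept 6-connected neighbours whose intensity stays within a tolerance of the seed. The sketch below uses a synthetic volume and an invented tolerance; it is not the authors' implementation.

```python
# Seeded region growing with 6-connectivity and an intensity tolerance.
import numpy as np
from collections import deque

def region_grow(vol, seed, tol):
    grown = np.zeros(vol.shape, dtype=bool)
    grown[seed] = True
    ref = vol[seed]                          # reference intensity at the seed
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) and not grown[n]:
                if abs(vol[n] - ref) <= tol:  # similar intensity: accept
                    grown[n] = True
                    queue.append(n)
    return grown

vol = np.random.normal(100, 5, (32, 32, 32))
vol[8:24, 8:24, 8:24] = np.random.normal(200, 5, (16, 16, 16))  # "blood pool"
mask = region_grow(vol, seed=(16, 16, 16), tol=30)
print(mask.sum(), "voxels grown from the seed")
```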
Carbone, V; Fluit, R; Pellikaan, P; van der Krogt, M M; Janssen, D; Damsgaard, M; Vigneron, L; Feilkas, T; Koopman, H F J M; Verdonschot, N
2015-03-18
When analyzing complex biomechanical problems such as predicting the effects of orthopedic surgery, subject-specific musculoskeletal models are essential to achieve reliable predictions. The aim of this paper is to present the Twente Lower Extremity Model 2.0, a new comprehensive dataset of the musculoskeletal geometry of the lower extremity, which is based on medical imaging data and dissection performed on the right lower extremity of a fresh male cadaver. Bone, muscle and subcutaneous fat (including skin) volumes were segmented from computed tomography and magnetic resonance image scans. Inertial parameters were estimated from the image-based segmented volumes. A complete cadaver dissection was performed, in which bony landmarks, attachment sites and lines-of-action of 55 muscle actuators and 12 ligaments, bony wrapping surfaces, and joint geometry were measured. The obtained musculoskeletal geometry dataset was finally implemented in the AnyBody Modeling System (AnyBody Technology A/S, Aalborg, Denmark), resulting in a model consisting of 12 segments, 11 joints and 21 degrees of freedom, and including 166 muscle-tendon elements for each leg. The new TLEM 2.0 dataset was purposely built to be easily combined with novel image-based scaling techniques, such as bone surface morphing, muscle volume registration and muscle-tendon path identification, in order to obtain subject-specific musculoskeletal models in a quick and accurate way. The complete dataset, including CT and MRI scans and segmented volumes and surfaces, is made available at http://www.utwente.nl/ctw/bw/research/projects/TLEMsafe for the biomechanical community, in order to accelerate the development and adoption of subject-specific models on a large scale. TLEM 2.0 is freely shared for non-commercial use only, under acceptance of the TLEMsafe Research License Agreement. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images.
Vázquez, David; Bernal, Jorge; Sánchez, F Javier; Fernández-Esparrach, Gloria; López, Antonio M; Romero, Adriana; Drozdzal, Michal; Courville, Aaron
2017-01-01
Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reducing CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) that help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
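The standard evaluation for such a benchmark is per-class intersection over union; a minimal numpy sketch (the 4-class count and integer label encoding are assumptions) could look like:

```python
import numpy as np

def per_class_iou(pred, gt, n_classes=4):
    """Intersection over union for each endoluminal scene class.
    pred, gt: integer label maps of identical shape."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union else float('nan'))
    return ious
```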
Adaptive region-growing with maximum curvature strategy for tumor segmentation in 18F-FDG PET
NASA Astrophysics Data System (ADS)
Tan, Shan; Li, Laquan; Choi, Wookjin; Kang, Min Kyu; D'Souza, Warren D.; Lu, Wei
2017-07-01
Accurate tumor segmentation in PET is crucial in many oncology applications. We developed an adaptive region-growing (ARG) algorithm with a maximum curvature strategy (ARG_MC) for tumor segmentation in PET. The ARG_MC repeatedly applied a confidence connected region-growing algorithm with an increasing relaxing factor f. The optimal relaxing factor (ORF) was then determined at the transition point on the f-volume curve, where the volume just grew from the tumor into the surrounding normal tissues. The ARG_MC, along with five widely used algorithms, was tested on a phantom with 6 spheres at different signal-to-background ratios and on two clinical datasets including 20 patients with esophageal cancer and 11 patients with non-Hodgkin lymphoma (NHL). The ARG_MC did not require any phantom calibration or any a priori knowledge of the tumor or PET scanner. The identified ORF varied with tumor type (mean ORF = 9.61, 3.78 and 2.55, respectively, for the phantom, esophageal cancer, and NHL datasets), and varied from one tumor to another. For the phantom, the ARG_MC ranked second in segmentation accuracy with an average Dice similarity index (DSI) of 0.86, only slightly worse than Daisne's adaptive thresholding method (DSI = 0.87), which required phantom calibration. For both the esophageal cancer dataset and the NHL dataset, the ARG_MC had the highest accuracy, with average DSIs of 0.87 and 0.84, respectively. The ARG_MC was robust to parameter settings and region-of-interest selection, and it did not depend on scanners, imaging protocols, or tumor types. Furthermore, the ARG_MC made no assumption about tumor size or tumor uptake distribution, making it suitable for segmenting tumors with heterogeneous FDG uptake. In conclusion, the ARG_MC is accurate, robust and easy to use, providing a promising tool for PET tumor segmentation in the clinic.
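One plausible reading of the maximum-curvature selection on the f-volume curve is sketched below; the discrete curvature formula and the helper name optimal_relaxing_factor are assumptions, not necessarily the authors' exact implementation.

```python
import numpy as np

def optimal_relaxing_factor(f, volume):
    """Pick the ORF at the point of maximum curvature of the f-volume curve,
    i.e. where the region just grows from tumor into normal tissue.
    f: increasing array of relaxing factors; volume: segmented volume per f."""
    v1 = np.gradient(volume, f)                     # dV/df
    v2 = np.gradient(v1, f)                         # d2V/df2
    curvature = np.abs(v2) / (1.0 + v1 ** 2) ** 1.5
    return f[np.argmax(curvature)]
```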
Multiview boosting digital pathology analysis of prostate cancer.
Kwak, Jin Tae; Hewitt, Stephen M
2017-04-01
Various digital pathology tools have been developed to aid in analyzing tissues and improving cancer pathology. The multi-resolution nature of cancer pathology, however, has not been fully analyzed and utilized. Here, we develop an automated, cooperative, and multi-resolution method for improving prostate cancer diagnosis. Digitized tissue specimen images are obtained from 5 tissue microarrays (TMAs). The TMAs include 70 benign and 135 cancer samples (TMA1), 74 benign and 89 cancer samples (TMA2), 70 benign and 115 cancer samples (TMA3), 79 benign and 82 cancer samples (TMA4), and 72 benign and 86 cancer samples (TMA5). The tissue specimen images are segmented using intensity- and texture-based features. Using the segmentation results, a number of morphological features from lumens and epithelial nuclei are computed to characterize tissues at different resolutions. Applying a multiview boosting algorithm, tissue characteristics obtained at differing resolutions are cooperatively combined to achieve accurate cancer detection. In segmenting prostate tissues, the multiview boosting method achieved an AUC ≥ 0.97 using TMA1. For detecting cancers, the multiview boosting method achieved an AUC of 0.98 (95% CI: 0.97-0.99) as trained on TMA2 and tested on TMA3, TMA4, and TMA5. The proposed method was superior to single-view approaches, which utilize features from a single resolution or merge features from all the resolutions. Moreover, the performance of the proposed method was insensitive to the choice of the training dataset. Trained on TMA3, TMA4, and TMA5, the proposed method obtained an AUC of 0.97 (95% CI: 0.96-0.98), 0.98 (95% CI: 0.96-0.99), and 0.97 (95% CI: 0.96-0.98), respectively. The multiview boosting method is capable of integrating information from multiple resolutions in an effective and efficient fashion and identifying cancers with high accuracy. The multiview boosting method holds great potential for improving digital pathology tools and research. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard
2016-04-01
Mobile Laser Scanning (MLS) is an evolving operational measurement technique for urban environments, providing large amounts of high-resolution information about trees, street features and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contain a large diversity of tree species. The MLS data comprised high-density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The segmentation method consists of the following steps: first, the ground points are determined. Second, cylinders are fitted in a vertical slice 1-1.5 m above ground, which determines the potential location of each single tree trunk and other cylinder-like objects. Finally, residual values are calculated as the deviation of each point from a vertically extruded fitted cylinder; these residuals are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of each fitted object are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented tree, such as crown base, crown width, crown length, trunk diameter and tree volume. For incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the starting point for classifying the trees into single species. The MLS data used in this project were measured in the framework of the KARESZ project for the whole of Budapest. BSz contributed as an Alexander von Humboldt Research Fellow.
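The trunk-detection step can be illustrated with a simple algebraic circle fit in a horizontal slice, followed by residuals against the vertically extruded cylinder. This Kasa-style sketch is an assumption about the fitting details, which the abstract leaves open.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) circle fit to 2-D slice points; returns centre, radius.
    Solves x^2 + y^2 = c0*x + c1*y + c2 in least squares."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    cx, cy = c[0] / 2, c[1] / 2
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return (cx, cy), r

def radial_residuals(xy, centre, radius):
    """Deviation of each point from the (vertically extruded) fitted cylinder."""
    d = np.hypot(xy[:, 0] - centre[0], xy[:, 1] - centre[1])
    return d - radius
```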
Accurate segmentation of lung fields on chest radiographs using deep convolutional networks
NASA Astrophysics Data System (ADS)
Arbabshirani, Mohammad R.; Dallal, Ahmed H.; Agarwal, Chirag; Patel, Aalpan; Moore, Gregory
2017-02-01
Accurate segmentation of lung fields on chest radiographs is the primary step for computer-aided detection of various conditions such as lung cancer and tuberculosis. The size, shape and texture of lung fields are key parameters for chest X-ray (CXR) based lung disease diagnosis in which the lung field segmentation is a significant primary step. Although many methods have been proposed for this problem, lung field segmentation remains as a challenge. In recent years, deep learning has shown state of the art performance in many visual tasks such as object detection, image classification and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved IOU (intersection over union) of 0.96 on the testing dataset as compared to manual segmentation. The suggested framework outperforms state of the art registration-based segmentation by a significant margin. To our knowledge, this is the first deep learning based study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks could be employed reliably for lung field segmentation.
Sauwen, N; Acou, M; Van Cauter, S; Sima, D M; Veraart, J; Maes, F; Himmelreich, U; Achten, E; Van Huffel, S
2016-01-01
Tumor segmentation is a particularly challenging task in high-grade gliomas (HGGs), as they are among the most heterogeneous tumors in oncology. An accurate delineation of the lesion and its main subcomponents contributes to optimal treatment planning, prognosis and follow-up. Conventional MRI (cMRI) is the imaging modality of choice for manual segmentation, and is also considered in the vast majority of automated segmentation studies. Advanced MRI modalities such as perfusion-weighted imaging (PWI), diffusion-weighted imaging (DWI) and magnetic resonance spectroscopic imaging (MRSI) have already shown their added value in tumor tissue characterization, hence there have been recent suggestions of combining different MRI modalities into a multi-parametric MRI (MP-MRI) approach for brain tumor segmentation. In this paper, we compare the performance of several unsupervised classification methods for HGG segmentation based on MP-MRI data including cMRI, DWI, MRSI and PWI. Two independent MP-MRI datasets with different acquisition protocols were available from different hospitals. We demonstrate that a hierarchical non-negative matrix factorization variant which was previously introduced for MP-MRI tumor segmentation gives the best performance in terms of mean Dice scores for the pathologic tissue classes on both datasets.
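A hierarchical rank-2 NMF in the spirit of the compared method can be sketched with scikit-learn; the recursion depth, the initialization, and assignment by the dominant NMF component are illustrative choices, not the authors' variant.

```python
import numpy as np
from sklearn.decomposition import NMF

def hierarchical_nmf(X, depth=2):
    """Recursively split samples (rows of a non-negative feature matrix X,
    e.g. voxel-wise MP-MRI features) with rank-2 NMF; returns a label per row."""
    labels = np.zeros(len(X), dtype=int)

    def split(idx, level, next_label):
        if level == 0 or len(idx) < 2:
            labels[idx] = next_label[0]
            next_label[0] += 1
            return
        W = NMF(n_components=2, init='nndsvda', max_iter=500).fit_transform(X[idx])
        assign = W.argmax(axis=1)        # dominant component decides the branch
        for k in (0, 1):
            split(idx[assign == k], level - 1, next_label)

    split(np.arange(len(X)), depth, [0])
    return labels
```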
Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies.
Koch, Lisa M; Rajchl, Martin; Bai, Wenjia; Baumgartner, Christian F; Tong, Tong; Passerat-Palmbach, Jonathan; Aljabar, Paul; Rueckert, Daniel
2017-08-22
Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common in many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications in the graph configuration of the proposed framework enable the use of partially annotated atlas images and investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.
Chiang, Michael; Hallman, Sam; Cinquin, Amanda; de Mochel, Nabora Reyes; Paz, Adrian; Kawauchi, Shimako; Calof, Anne L; Cho, Ken W; Fowlkes, Charless C; Cinquin, Olivier
2015-11-25
Analysis of single cells in their native environment is a powerful method to address key questions in developmental systems biology. Confocal microscopy imaging of intact tissues, followed by automatic image segmentation, provides a means to conduct cytometric studies while at the same time preserving crucial information about the spatial organization of the tissue and morphological features of the cells. This technique is rapidly evolving but is still not in widespread use among research groups that do not specialize in technique development, perhaps in part for lack of tools that automate repetitive tasks while allowing experts to make the best use of their time in injecting their domain-specific knowledge. Here we focus on a well-established stem cell model system, the C. elegans gonad, as well as on two other model systems widely used to study cell fate specification and morphogenesis: the pre-implantation mouse embryo and the developing mouse olfactory epithelium. We report a pipeline that integrates machine-learning-based cell detection, fast human-in-the-loop curation of these detections, and running of active contours seeded from detections to segment cells. The procedure can be bootstrapped by a small number of manual detections, and outperforms alternative pieces of software we benchmarked on C. elegans gonad datasets. Using cell segmentations to quantify fluorescence contents, we report previously-uncharacterized cell behaviors in the model systems we used. We further show how cell morphological features can be used to identify cell cycle phase; this provides a basis for future tools that will streamline cell cycle experiments by minimizing the need for exogenous cell cycle phase labels. High-throughput 3D segmentation makes it possible to extract rich information from images that are routinely acquired by biologists, and provides insights - in particular with respect to the cell cycle - that would be difficult to derive otherwise.
Learning-based 3T brain MRI segmentation with guidance from 7T MRI labeling.
Deng, Minghui; Yu, Renping; Wang, Li; Shi, Feng; Yap, Pew-Thian; Shen, Dinggang
2016-12-01
Segmentation of brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is crucial for brain structural measurement and disease diagnosis. Learning-based segmentation methods depend largely on the availability of good training ground truth. However, the commonly used 3T MR images are of insufficient image quality and often exhibit poor intensity contrast between WM, GM, and CSF. Therefore, they are not ideal for providing good ground truth label data for training learning-based methods. Recent advances in ultrahigh field 7T imaging make it possible to acquire images with excellent intensity contrast and signal-to-noise ratio. In this paper, the authors propose an algorithm based on random forest for segmenting 3T MR images by training a series of classifiers based on reliable labels obtained semiautomatically from 7T MR images. The proposed algorithm iteratively refines the probability maps of WM, GM, and CSF via a cascade of random forest classifiers for improved tissue segmentation. The proposed method was validated on two datasets, i.e., 10 subjects collected at their institution and 797 3T MR images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Specifically, for the mean Dice ratio of all 10 subjects, the proposed method achieved 94.52% ± 0.9%, 89.49% ± 1.83%, and 79.97% ± 4.32% for WM, GM, and CSF, respectively, which are significantly better than the state-of-the-art methods (p-values < 0.021). For the ADNI dataset, the group difference comparisons indicate that the proposed algorithm outperforms state-of-the-art segmentation methods. The authors have developed and validated a novel fully automated method for 3T brain MR image segmentation. © 2016 American Association of Physicists in Medicine.
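The cascade of classifiers that iteratively refines the tissue probability maps can be sketched with scikit-learn random forests in an auto-context style, where each stage appends the previous stage's class-probability outputs to the voxel features; the feature layout and stage count are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_cascade(features, labels, n_stages=3):
    """Each stage sees the raw voxel features plus the previous stage's
    WM/GM/CSF probability maps as extra context features."""
    stages, X = [], features
    for _ in range(n_stages):
        rf = RandomForestClassifier(n_estimators=100).fit(X, labels)
        stages.append(rf)
        X = np.hstack([features, rf.predict_proba(X)])
    return stages

def apply_cascade(stages, features):
    X = features
    for rf in stages:
        proba = rf.predict_proba(X)
        X = np.hstack([features, proba])
    return proba        # final refined class probabilities
```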
Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional networks
NASA Astrophysics Data System (ADS)
Roth, Holger; Oda, Masahiro; Shimizu, Natsuki; Oda, Hirohisa; Hayashi, Yuichiro; Kitasaka, Takayuki; Fujiwara, Michitaka; Misawa, Kazunari; Mori, Kensaku
2018-03-01
Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-built 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture: one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast-enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 +/- 3.8 (range [79.8, 94.8])% in testing, achieving the new state-of-the-art performance in pancreas segmentation on this dataset.
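A toy PyTorch 3D FCN with a summation (rather than concatenation) skip connection illustrates the architectural variation under discussion; the depth, channel widths, and kernel sizes are placeholders far smaller than the paper's network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN3D(nn.Module):
    """Toy 3-D encoder-decoder with one summation skip connection."""
    def __init__(self, in_ch=1, n_classes=2, width=16):
        super().__init__()
        self.enc1 = nn.Conv3d(in_ch, width, 3, padding=1)
        self.down = nn.Conv3d(width, width, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose3d(width, width, 2, stride=2)
        self.head = nn.Conv3d(width, n_classes, 1)

    def forward(self, x):
        e1 = F.relu(self.enc1(x))
        e2 = F.relu(self.down(e1))
        d1 = F.relu(self.up(e2))
        d1 = d1 + e1                     # summation skip (vs. torch.cat)
        return self.head(d1)

logits = TinyFCN3D()(torch.zeros(1, 1, 32, 32, 32))   # -> (1, 2, 32, 32, 32)
```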
Bayesian automated cortical segmentation for neonatal MRI
NASA Astrophysics Data System (ADS)
Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha
2017-11-01
Several attempts have been made in the past few years to develop and implement automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole-brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 full-term and 4 preterm infants scanned at term-equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structures and cleaning the edges of the cortical grey matter. This method shows a promising refinement of the FAST segmentation, considerably reducing the manual input and editing required from the user, and further improving the reliability and processing time of neonatal MR image analysis. Further improvements will include a larger dataset of training images acquired on scanners from different manufacturers.
Hassan, Ahnaf Rashik; Bhuiyan, Mohammed Imamul Hassan
2016-09-15
Automatic sleep scoring is essential because conventionally a large volume of data has to be analyzed visually by physicians, which is onerous, time-consuming and error-prone. Therefore, there is a dire need for an automated sleep staging scheme. In this work, we decompose sleep-EEG signal segments using the tunable-Q factor wavelet transform (TQWT). Various spectral features are then computed from the TQWT sub-bands. The performance of spectral features in the TQWT domain has been determined by intuitive and graphical analyses, statistical validation, and Fisher criteria. A random forest is used to perform classification. Optimal choices and the effects of TQWT and random forest parameters have been determined and expounded. Experimental outcomes manifest the efficacy of our feature generation scheme in terms of p-values of ANOVA analysis and Fisher criteria. The proposed scheme yields accuracies of 90.38%, 91.50%, 92.11%, 94.80% and 97.50% for 6-stage to 2-stage classification of sleep states on the benchmark Sleep-EDF dataset. In addition, its performance on the DREAMS Subjects dataset is also promising. The performance of the proposed method is significantly better than the existing ones in terms of accuracy and Cohen's kappa coefficient. Additionally, the proposed scheme gives high detection accuracy for the sleep stages non-REM 1 and REM. Spectral features in the TQWT domain can efficaciously discriminate sleep-EEG signals corresponding to various sleep states. The proposed scheme will alleviate the burden on physicians, speed up sleep disorder diagnosis, and expedite sleep research. Copyright © 2016 Elsevier B.V. All rights reserved.
Wang, Hsin-Wei; Lin, Ya-Chi; Pai, Tun-Wen; Chang, Hao-Teng
2011-01-01
Epitopes are antigenic determinants that are useful because they induce B-cell antibody production and stimulate T-cell activation. Bioinformatics can enable rapid, efficient prediction of potential epitopes. Here, we designed a novel B-cell linear epitope prediction system called LEPS, Linear Epitope Prediction by Propensities and Support Vector Machine, that combines physicochemical propensity identification and support vector machine (SVM) classification. We tested LEPS on four datasets: AntiJen, HIV, a newly generated PC, and AHP, a combination of these three datasets. Peptides with globally or locally high physicochemical propensities were first identified as primitive linear epitope (LE) candidates. Candidates were then classified with the SVM based on the unique features of amino acid segments. This reduced the number of predicted epitopes and enhanced the positive predictive value (PPV). Compared to four other well-known LE prediction systems, LEPS achieved the highest accuracy (72.52%), specificity (84.22%), PPV (32.07%), and Matthews' correlation coefficient (10.36%).
Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation
NASA Astrophysics Data System (ADS)
Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin
2018-04-01
Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.
A real-time visual inspection method of fastening bolts in freight car operation
NASA Astrophysics Data System (ADS)
Nan, Guo; Yao, JunEn
2015-10-01
A real-time inspection of key components is necessary for ensuring the safe operation of freight cars, but traditional inspection depends on trained human inspectors and is time-consuming and inefficient. With the development of machine vision, vision-based inspection methods are finding more on-site railway applications. The cross-rod end fastening bolts are important components on both sides of the train body that fix the locking plates to the freight car's main structure. In our experiment, we acquire images containing the fastening bolt components and accurately locate the locking plate using a linear Support Vector Machine (SVM) locating model trained on Histogram of Oriented Gradients (HOG) features. We then extract straight line segments using the Line Segment Detector (LSD) and encode them within a range, constituting a straight-line-segment dataset. Finally, we determine the locking plate's working state from the linear pattern. The experimental results show that the localization accuracy is over 99%, the fault detection rate is over 95%, and the module runs at 2 frames/s. The overall performance can fully meet practical railway safety assurance applications.
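The locating model can be sketched with scikit-image HOG descriptors feeding a scikit-learn linear SVM; the patch size, HOG parameters, and training data handling are assumptions for illustration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(patches):
    """HOG descriptor per grayscale patch (all patches of identical size)."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

# Hypothetical usage: `patches` are 2-D arrays cropped around candidate
# locations and `labels` mark locking plate (1) vs. background (0).
# clf = LinearSVC().fit(hog_features(patches), labels)
# scores = clf.decision_function(hog_features(test_patches))
```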
Wen, Quan
2014-01-01
Membrane-bound macromolecules play an important role in tissue architecture and cell-cell communication, and are regulated by almost one-third of the genome. At the optical scale, one group of membrane proteins expresses itself as linear structures along the cell surface boundaries, while others are sequestered; this paper targets the former group. Segmentation of these membrane proteins on a cell-by-cell basis enables the quantitative assessment of localization for comparative analysis. However, such membrane proteins typically lack continuity, and their intensity distributions are often very heterogeneous; moreover, nuclei can form large clumps, which further impedes the quantification of membrane signals on a cell-by-cell basis. To tackle these problems, we introduce a three-step process to (i) regularize the membrane signal through iterative tangential voting, (ii) constrain the location of surface proteins by nuclear features, where clumps of nuclei are segmented through a Delaunay triangulation approach, and (iii) assign membrane-bound macromolecules to individual cells through an application of a multi-phase geodesic level set. We have validated our method using both synthetic data and a dataset of 200 images, and demonstrate the efficacy of our approach with superior performance. PMID:25530633
Improved document image segmentation algorithm using multiresolution morphology
NASA Astrophysics Data System (ADS)
Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.
2011-01-01
Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm on the UW-III, UNLV and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.
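A loose sketch of coarse-scale morphology for text/non-text separation, in the spirit of (but not identical to) Bloomberg's algorithm: downsample the binarized page, apply closing and opening so large solid non-text blobs survive while thin text strokes are removed, then upsample the mask. The decimation-based reduction and structuring-element sizes are simplifying assumptions.

```python
import numpy as np
from scipy import ndimage

def nontext_mask(binary_page, factor=4, se=3):
    """Coarse-scale morphological mask of non-text (halftone/drawing) regions.
    binary_page: 2-D boolean array, True = foreground ink."""
    small = binary_page[::factor, ::factor]         # cheap multiresolution reduction
    struct = np.ones((se, se), dtype=bool)
    closed = ndimage.binary_closing(small, structure=struct)
    opened = ndimage.binary_opening(closed, structure=struct)  # drops thin text
    up = np.kron(opened, np.ones((factor, factor), dtype=bool))
    return up[:binary_page.shape[0], :binary_page.shape[1]]
```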
Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A
2017-03-01
Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods combined with hand-crafted image feature descriptors and various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of biomedical image classification. The same also holds true for artificial neural network models directly trained on limited biomedical images, or used as a black box to extract deep features learned on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture trained in a supervised fashion on the raw pixels of the original biomedical images. In our model, we do not need to manually design the feature space, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technical difficulties in traditional image classification methods. Moreover, we do not need large training sets of annotated biomedical images, expensive parallel computing resources featuring GPUs, or long training times for a perfect deep model, which are the main obstacles to training deep neural networks for biomedical image classification observed in recent works. With a simple data augmentation method and fast convergence, our algorithm achieves high accuracy and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust automated end-to-end classifier for biomedical images based on a domain-transferred deep convolutional neural network model whose highly reliable and accurate performance has been confirmed on several public biomedical image datasets. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
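The transfer-learning recipe can be sketched with torchvision (the weights enum assumes torchvision >= 0.13): start from an ImageNet-pretrained backbone, freeze the transferred features, and retrain only a new classification head on the biomedical images. The backbone choice, class count, and optimizer settings here are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

n_classes = 3  # hypothetical number of biomedical image classes

# ImageNet-pretrained backbone as the domain-transferred feature extractor
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                 # freeze transferred features

# Replace the classification head and train only its parameters
model.fc = nn.Linear(model.fc.in_features, n_classes)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```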
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label-fusion-based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit (ITK)-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
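A minimal numpy sketch of intensity-weighted voting, the baseline whose independently computed weights motivate joint label fusion, is given below; the Gaussian similarity weight and the beta parameter are assumptions.

```python
import numpy as np

def weighted_vote(target, atlas_imgs, atlas_labels, n_labels, beta=2.0):
    """Spatially varying weighted voting: each registered atlas votes for its
    label at every voxel, weighted by local intensity similarity to the target.
    Note: weights are computed per atlas, independently of the other atlases."""
    votes = np.zeros((n_labels,) + target.shape)
    for img, lab in zip(atlas_imgs, atlas_labels):
        w = np.exp(-beta * (img - target) ** 2)   # voxelwise similarity weight
        for l in range(n_labels):
            votes[l] += w * (lab == l)
    return votes.argmax(axis=0)
```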
A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.
Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit
2017-07-01
Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine-learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires datasets of images in which a vast number of nuclei have been annotated. Publicly accessible and annotated datasets, along with widely agreed-upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible dataset of hematoxylin and eosin (H&E)-stained tissue images with more than 21,000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our dataset is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work out of the box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. Finally, we propose a segmentation technique based on deep learning that lays special emphasis on identifying nuclear boundaries, including those between touching or overlapping nuclei, and works well on a diverse set of test images.
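A common formulation of an aggregated Jaccard-style index, which penalizes object- and pixel-level errors in a single number as the proposed metric does, can be sketched as follows; this is a generic version and not necessarily the paper's exact definition.

```python
import numpy as np

def aggregated_jaccard(gt, pred):
    """Aggregated Jaccard-style index over instance label maps (0 = background)."""
    used, inter_sum, union_sum = set(), 0, 0
    for g in (i for i in np.unique(gt) if i != 0):
        gmask = gt == g
        cand = [p for p in np.unique(pred[gmask]) if p != 0]
        if not cand:                       # missed nucleus: penalize its pixels
            union_sum += gmask.sum()
            continue
        # best-matching predicted nucleus by IoU
        iou, p = max((((gmask & (pred == p)).sum() / (gmask | (pred == p)).sum(), p)
                      for p in cand))
        inter_sum += (gmask & (pred == p)).sum()
        union_sum += (gmask | (pred == p)).sum()
        used.add(p)
    for p in (i for i in np.unique(pred) if i != 0):
        if p not in used:                  # false-positive nuclei enlarge the union
            union_sum += (pred == p).sum()
    return inter_sum / union_sum if union_sum else 0.0
```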
Zheng, Yefeng; Barbu, Adrian; Georgescu, Bogdan; Scheuering, Michael; Comaniciu, Dorin
2008-11-01
We propose an automatic four-chamber heart segmentation system for the quantitative functional analysis of the heart from cardiac computed tomography (CT) volumes. Two topics are discussed: heart modeling and automatic model fitting to an unseen volume. Heart modeling is a nontrivial task since the heart is a complex nonrigid organ. The model must be anatomically accurate, allow manual editing, and provide sufficient information to guide automatic detection and segmentation. Unlike previous work, we explicitly represent important landmarks (such as the valves and the ventricular septum cusps) among the control points of the model. The control points can be detected reliably to guide the automatic model fitting process. Using this model, we develop an efficient and robust approach for automatic heart chamber segmentation in 3-D CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. In both steps, we exploit the recent advances in learning discriminative models. A novel algorithm, marginal space learning (MSL), is introduced to solve the 9-D similarity transformation search problem for localizing the heart chambers. After determining the pose of the heart chambers, we estimate the 3-D shape through learning-based boundary delineation. The proposed method has been extensively tested on the largest dataset (with 323 volumes from 137 patients) ever reported in the literature. To the best of our knowledge, our system is the fastest with a speed of 4.0 s per volume (on a dual-core 3.2-GHz processor) for the automatic segmentation of all four chambers.
Automatic segmentation of tumor-laden lung volumes from the LIDC database
NASA Astrophysics Data System (ADS)
O'Dell, Walter G.
2012-03-01
The segmentation of the lung parenchyma is often a critical pre-processing step prior to application of computer-aided detection of lung nodules. Segmentation of the lung volume can dramatically decrease computation time and reduce the number of false positive detections by excluding extra-pulmonary tissue from consideration. However, while many algorithms are capable of adequately segmenting the healthy lung, none have been demonstrated to work reliably well on tumor-laden lungs. Of particular challenge is preserving tumorous masses attached to the chest wall, mediastinum or major vessels. In this role, lung volume segmentation comprises an important computational step that can adversely affect the performance of the overall CAD algorithm. An automated lung volume segmentation algorithm has been developed with the goals of maximally excluding extra-pulmonary tissue while retaining all true nodules. The algorithm comprises a series of tasks including intensity thresholding, 2-D and 3-D morphological operations, 2-D and 3-D floodfilling, and snake-based clipping of nodules attached to the chest wall. It features the ability to (1) exclude the trachea and bowels, (2) snip large attached nodules using snakes, (3) snip small attached nodules using dilation, (4) preserve large masses fully internal to the lung volume, (5) account for basal aspects of the lung, where in a 2-D slice the lower sections appear to be disconnected from the main lung, and (6) achieve separation of the right and left hemi-lungs. The algorithm was developed and trained on the first 100 datasets of the LIDC image database.
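A compressed sketch of the thresholding/connected-component/hole-filling core of such a pipeline is shown below; the HU threshold and the two-largest-components heuristic are assumptions, and the snake-based clipping steps are omitted.

```python
import numpy as np
from scipy import ndimage

def lung_mask(ct, air_threshold=-320):
    """Rough lung mask: threshold air, drop air connected to the volume border,
    keep the two largest remaining components, then fill internal holes so
    nodules fully inside the lung are retained."""
    air = ct < air_threshold
    labels, n = ndimage.label(air)
    faces = [labels[0], labels[-1], labels[:, 0], labels[:, -1],
             labels[:, :, 0], labels[:, :, -1]]
    border = set(np.unique(np.concatenate([f.ravel() for f in faces])))
    sizes = ndimage.sum(air, labels, range(1, n + 1))
    keep = [i + 1 for i in np.argsort(sizes)[::-1]
            if (i + 1) not in border][:2]          # two hemi-lungs
    mask = np.isin(labels, keep)
    return ndimage.binary_fill_holes(mask)
```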
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-09-01
Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis, but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation-based analysis. Therefore, we investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREVs) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, using volume images without segmenting the datasets. Instead, grey-level intensities were transformed to a voxel-level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two-point correlation function for surface area estimates using Bayes' theorem. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk these simulations cannot be performed on binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
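The grey-level-to-porosity mapping via a Gaussian mixture can be sketched with scikit-learn; the two-component model, fitting on a voxel subsample, and taking the darker mode as pore space are illustrative simplifications of the paper's approach.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def voxel_porosity(grey_volume):
    """Per-voxel porosity estimate from the posterior probability of the
    darker (pore) component of a two-Gaussian intensity mixture."""
    g = grey_volume.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=2).fit(g[::100])  # fit on a subsample
    pore = int(np.argmin(gmm.means_.ravel()))            # darker mode = pores
    p = gmm.predict_proba(g)[:, pore]
    return p.reshape(grey_volume.shape)                  # p.mean() ~ sample porosity
```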
Using deep learning to segment breast and fibroglandular tissue in MRI volumes.
Dalmış, Mehmet Ufuk; Litjens, Geert; Holland, Katharina; Setio, Arnaud; Mann, Ritse; Karssemeijer, Nico; Gubern-Mérida, Albert
2017-02-01
Automated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such as atlas-based methods, template matching, or edge and surface detection, have been applied to solve this task. However, the applicability of these methods is usually limited by the characteristics of the images used in the study datasets, while breast MRI varies with respect to the different MRI protocols used, in addition to the variability in breast shapes. All this variability, in addition to various MRI artifacts, makes it a challenging task to develop a robust breast and FGT segmentation method using traditional approaches. Therefore, in this study, we investigated the use of a deep-learning approach known as "U-net." We used a dataset of 66 breast MRIs randomly selected from our scientific archive, which includes five different MRI acquisition protocols and breasts from four breast density categories in a balanced distribution. To prepare reference segmentations, we manually segmented breast and FGT for all images using an in-house developed workstation. We experimented with the application of U-net in two different ways for breast and FGT segmentation. In the first method, following the same pipeline used in traditional approaches, we trained two consecutive (2C) U-nets: the first for segmenting the breast in the whole MRI volume and the second for segmenting FGT inside the segmented breast. In the second method, we used a single 3-class (3C) U-net, which performs both tasks simultaneously by segmenting the volume into three regions: non-breast, fat inside the breast, and FGT inside the breast. For comparison, we applied two existing and published methods to our dataset: an atlas-based method and a sheetness-based method. We used the Dice Similarity Coefficient (DSC) to measure the performance of the automated methods with respect to the manual segmentations. Additionally, we computed Pearson's correlation between the breast density values computed from manual and automated segmentations. The average DSC values for breast segmentation were 0.933, 0.944, 0.863, and 0.848 for the 3C U-net, 2C U-nets, atlas-based method, and sheetness-based method, respectively. The average DSC values for FGT segmentation obtained from the 3C U-net, 2C U-nets, and atlas-based method were 0.850, 0.811, and 0.671, respectively. The correlation between breast density values based on 3C U-net and manual segmentations was 0.974. This value was significantly higher than 0.957 as obtained from the 2C U-nets (P < 0.0001, Steiger's Z-test with Bonferroni correction) and 0.938 as obtained from the atlas-based method (P = 0.0016). In conclusion, we applied a deep-learning method, U-net, for segmenting breast and FGT in MRI in a dataset that includes a variety of MRI protocols and breast densities. Our results showed that U-net-based methods significantly outperformed the existing algorithms and resulted in significantly more accurate breast density computation. © 2016 American Association of Physicists in Medicine.
Moving object localization using optical flow for pedestrian detection from a moving vehicle.
Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun
2014-01-01
This paper presents a pedestrian detection method from a moving vehicle using optical flow and histograms of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region representing the same optical flow after compensating for the ego-motion of the camera. To obtain the optical flow, two consecutive images are divided into 14 × 14 pixel grid cells; each cell in the current frame is then tracked to find its corresponding cell in the next frame. Using at least three cell correspondences, an affine transformation between the consecutive images is estimated, so that consistent optical flows are extracted. Regions of moving objects are detected as those that differ from the previously registered background after the transformation. Morphological processing is applied to obtain candidate human regions. To recognize the object, HOG features are extracted from each candidate region and used as input to a linear support vector machine (SVM) that classifies the candidate as pedestrian or non-pedestrian. The proposed method was tested on a moving vehicle and also confirmed through experiments using the ETHZ pedestrian dataset, where it shows a significant improvement over the original HOG.
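The ego-motion compensation step can be sketched with OpenCV: track a grid of cell centres with pyramidal Lucas-Kanade flow, fit an affine transform to the correspondences, and flag residual motion. The grid spacing, 8-bit grayscale input, and residual threshold are assumptions for illustration.

```python
import cv2
import numpy as np

def motion_mask(prev, curr, cell=14):
    """Ego-motion-compensated motion mask from two consecutive grayscale
    uint8 frames; leftover differences mark moving-object candidates."""
    h, w = prev.shape
    ys, xs = np.mgrid[cell // 2:h:cell, cell // 2:w:cell]   # grid cell centres
    p0 = np.stack([xs.ravel(), ys.ravel()], axis=1)
    p0 = p0.astype(np.float32).reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
    good = status.ravel() == 1
    M, _ = cv2.estimateAffine2D(p0[good], p1[good])         # camera ego-motion
    compensated = cv2.warpAffine(prev, M, (w, h))
    residual = cv2.absdiff(curr, compensated)
    return residual > 25                                    # illustrative threshold
```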
A boosted optimal linear learner for retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Poletti, E.; Grisan, E.
2014-03-01
Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and the evaluation of retinopathy. At variance with available methods, we propose a data-driven approach in which the system learns a set of optimal discriminative convolution kernels (linear learner). The set is progressively built based on an AdaBoost sample weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the vessel appearance changes at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to provide a classification of each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images from the DRIVE dataset and show that the segmentation performance yields an accuracy of 0.94.
Object Classification in Semi-Structured Environments Using Forward-Looking Sonar.
dos Santos, Matheus; Ribeiro, Pedro Otávio; Núñez, Pedro; Botelho, Silvia
2017-01-01
Underwater exploration using robots has been increasing in recent years. The automation of tasks such as monitoring, inspection, and underwater maintenance requires an understanding of the robot's environment, and object recognition in the scene is becoming a critical issue for these systems. In this work, an underwater object classification pipeline applied to acoustic images acquired by Forward-Looking Sonar (FLS) is studied. The object segmentation combines thresholding, connected-pixel search and intensity-peak analysis techniques. The object descriptor extracts intensity and geometric features of the detected objects. A comparison between Support Vector Machine, K-Nearest Neighbors, and Random Trees classifiers is presented. An open-source tool was developed to annotate and classify the objects and evaluate their classification performance. The proposed method efficiently segments and classifies the structures in the scene using a real dataset acquired by an underwater vehicle in a harbor area. Experimental results demonstrate the robustness and accuracy of the method described in this paper. PMID:28961163
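The classifier comparison can be reproduced in outline with scikit-learn cross-validation; the synthetic X and y below are placeholders for the per-object intensity/geometric feature vectors and labels that the FLS segmentation stage would produce.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for per-object features (X) and labels (y).
rng = np.random.default_rng(0)
X = rng.random((120, 8))
y = rng.integers(0, 3, 120)

for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Random trees", RandomForestClassifier(n_estimators=100))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```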
NASA Astrophysics Data System (ADS)
Ahmad Fauzi, Mohammad Faizal; Gokozan, Hamza Numan; Elder, Brad; Puduvalli, Vinay K.; Otero, Jose J.; Gurcan, Metin N.
2014-03-01
Brain cancer surgery requires intraoperative consultation by neuropathology to guide surgical decisions regarding the extent to which the tumor undergoes gross total resection. In this context, the differential diagnosis between glioblastoma and metastatic cancer is challenging, as the decision must be made during surgery in a short time frame (typically 30 minutes). We propose a method to classify glioblastoma versus metastatic cancer based on extracting textural features from the non-nuclei regions of cytologic preparations. For glioblastoma, these regions of interest are filled with glial processes between the nuclei, which appear as anisotropic thin linear structures. For metastasis, these regions have a more homogeneous appearance, so suitable texture features can be extracted to distinguish between the two tissue types. We use Discrete Wavelet Frames to characterize the underlying texture because of their multi-resolution modeling capability. The textural characterization is carried out primarily in the non-nuclei regions after the nuclei regions are segmented by adapting our visually meaningful decomposition segmentation algorithm to this problem. The k-nearest neighbor method was then used to classify the features into the glioblastoma or metastasis class. Experiments on 53 images (29 glioblastomas and 24 metastases) resulted in average accuracies of 89.7% for glioblastoma, 87.5% for metastasis and 88.7% overall. Further studies are underway to incorporate nuclei region features into classification on an expanded dataset, as well as to expand the classification to more types of cancers.
Schneider, Matthias; Hirsch, Sven; Weber, Bruno; Székely, Gábor; Menze, Bjoern H
2015-01-01
We propose a novel framework for joint 3-D vessel segmentation and centerline extraction. The approach is based on multivariate Hough voting and oblique random forests (RFs) that we learn from noisy annotations. It relies on steerable filters for the efficient computation of local image features at different scales and orientations. We validate both the segmentation performance and the centerline accuracy of our approach both on synthetic vascular data and four 3-D imaging datasets of the rat visual cortex at 700 nm resolution. First, we evaluate the most important structural components of our approach: (1) Orthogonal subspace filtering in comparison to steerable filters that show, qualitatively, similarities to the eigenspace filters learned from local image patches. (2) Standard RF against oblique RF. Second, we compare the overall approach to different state-of-the-art methods for (1) vessel segmentation based on optimally oriented flux (OOF) and the eigenstructure of the Hessian, and (2) centerline extraction based on homotopic skeletonization and geodesic path tracing. Our experiments reveal the benefit of steerable over eigenspace filters as well as the advantage of oblique split directions over univariate orthogonal splits. We further show that the learning-based approach outperforms different state-of-the-art methods and proves highly accurate and robust with regard to both vessel segmentation and centerline extraction in spite of the high level of label noise in the training data. Copyright © 2014 Elsevier B.V. All rights reserved.
Assessing the scale of tumor heterogeneity by complete hierarchical segmentation of MRI.
Gensheimer, Michael F; Hawkins, Douglas S; Ermoian, Ralph P; Trister, Andrew D
2015-02-07
In many cancers, intratumoral heterogeneity has been found in histology, genetic variation and vascular structure. We developed an algorithm to interrogate different scales of heterogeneity using clinical imaging. We hypothesize that heterogeneity of perfusion at a coarse scale may correlate with treatment resistance and propensity for disease recurrence. The algorithm recursively segments the tumor image into increasingly smaller regions. Each dividing line is chosen so as to maximize the signal intensity difference between the two regions. This process continues until the tumor has been divided into single voxels, resulting in segments at multiple scales. For each scale, heterogeneity is measured by comparing each segmented region to the adjacent region and calculating the difference in signal intensity histograms. Using digital phantom images, we showed that the algorithm is robust to image artifacts and various tumor shapes. We then measured the primary tumor scales of contrast enhancement heterogeneity in MRI of 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival. A coarser scale of maximum signal intensity heterogeneity was prognostic of shorter survival (p = 0.05). By contrast, two fractal parameters and three Haralick texture features were not prognostic. In summary, our algorithm produces a biologically motivated segmentation of tumor regions and reports the amount of heterogeneity at various distance scales. If validated on a larger dataset, this prognostic imaging biomarker could be useful to identify patients at higher risk for recurrence and candidates for alternative treatment.
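A 1-D analogue of the recursive maximum-difference splitting is sketched below; the bookkeeping of (depth, start, end) segments is an assumption, and the real algorithm operates on 2-D/3-D tumor regions rather than intensity profiles.

```python
import numpy as np

def best_split(x):
    """Index that maximises the mean-intensity difference between the halves."""
    diffs = [abs(x[:i].mean() - x[i:].mean()) for i in range(1, len(x))]
    return 1 + int(np.argmax(diffs))

def split_recursively(x, offset=0, out=None, depth=0):
    """Recursively bisect a 1-D intensity profile down to single elements;
    `out` collects (depth, start, end) segments, one scale per recursion level."""
    if out is None:
        out = []
    out.append((depth, offset, offset + len(x)))
    if len(x) > 1:
        k = best_split(x)
        split_recursively(x[:k], offset, out, depth + 1)
        split_recursively(x[k:], offset + k, out, depth + 1)
    return out

segments = split_recursively(np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8]))
```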
An open source multivariate framework for n-tissue segmentation with evaluation on public data.
Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C
2011-12-01
We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
Machine learning in a graph framework for subcortical segmentation
NASA Astrophysics Data System (ADS)
Guo, Zhihui; Kashyap, Satyananda; Sonka, Milan; Oguz, Ipek
2017-02-01
Automated and reliable segmentation of subcortical structures from human brain magnetic resonance images is of great importance for volumetric and shape analyses in quantitative neuroimaging studies. However, poor boundary contrast and variable shape of these structures make automated segmentation a challenging task. We propose a 3D graph-based machine learning method, called LOGISMOS-RF, to segment the caudate and the putamen from brain MRI scans in a robust and accurate way. An atlas-based tissue classification and bias-field correction method is applied to the images to generate an initial segmentation for each structure. Then a 3D graph framework is utilized to construct a geometric graph for each initial segmentation. A locally trained random forest classifier is used to assign a cost to each graph node. The max-flow algorithm is applied to solve the segmentation problem. Evaluation was performed on a dataset of T1-weighted MRIs of 62 subjects, with 42 images used for training and 20 images for testing. For comparison, FreeSurfer, FSL and BRAINSCut approaches were also evaluated using the same dataset. Dice overlap coefficients and surface-to-surface distances between the automated segmentation and expert manual segmentations indicate the results of our method are statistically significantly more accurate than the three other methods, for both the caudate (Dice: 0.89 +/- 0.03) and the putamen (0.89 +/- 0.03).
Jeyasingh, Suganthi; Veluchamy, Malathi
2017-05-01
Early diagnosis of breast cancer is essential to save lives of patients. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery on Database (KDD) process helps to improve efficiency. It requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step to increase the classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset. Ranking was performed against the global best features to recognize the predominant features available in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnostic Breast Cancer (WDBC) dataset was used for evaluating the performance of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).
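A minimal sketch of the overall idea, random sampling of feature subsets scored against a global best and then fed to a random forest, is shown below on WDBC (which ships with scikit-learn as load_breast_cancer). It omits the bat algorithm's velocity and frequency updates, so it is an illustrative stand-in rather than the authors' MBA.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# WDBC, the dataset used in the paper, ships with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(42)

best_score, best_subset = -np.inf, None
for _ in range(20):  # candidate "bats": random feature subsets
    subset = rng.choice(X.shape[1], size=10, replace=False)
    score = cross_val_score(
        RandomForestClassifier(n_estimators=50, random_state=0),
        X[:, subset], y, cv=5).mean()
    if score > best_score:                 # track the global best
        best_score, best_subset = score, subset

print(f"best CV accuracy {best_score:.3f} with features {sorted(best_subset)}")
```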
BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana
2006-01-01
Extracting features is the first and one of the most crucial steps in the modern image retrieval process. While the color and texture features of digital images can be extracted rather easily, the shape and layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis. That is, an unsupervised segmentation algorithm can only segment regions, not objects. To obtain high-level objects, which are desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.
Ahmed, Shaheen; Iftekharuddin, Khan M; Vossough, Arastoo
2011-03-01
Our previous work suggests that fractal texture features are useful for detecting pediatric brain tumors in multimodal MRI. In this study, we systematically investigate the efficacy of using several different image features such as intensity, fractal texture, and level-set shape in segmentation of posterior-fossa (PF) tumor for pediatric patients. We explore the effectiveness of using four different feature selection and three different segmentation techniques, respectively, to discriminate tumor regions from normal tissue in multimodal brain MRI. We further study the selective fusion of these features for improved PF tumor segmentation. Our results suggest that the Kullback-Leibler divergence measure for feature ranking and selection and the expectation maximization algorithm for feature fusion and tumor segmentation offer the best results for the patient data in this study. We show that for T1 and fluid attenuation inversion recovery (FLAIR) MRI modalities, the best PF tumor segmentation is obtained using a texture feature such as multifractional Brownian motion (mBm), while that for T2 MRI is obtained by fusing level-set shape with intensity features. In multimodality fused MRI (T1, T2, and FLAIR), the mBm feature offers the best PF tumor segmentation performance. We use different similarity metrics to evaluate the quality and robustness of these selected features for PF tumor segmentation in MRI for ten pediatric patients.
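The KLD-based feature ranking step can be illustrated as follows: each feature's tumor and normal histograms are compared by Kullback-Leibler divergence, and features are sorted by divergence. The histogram binning and epsilon smoothing below are illustrative choices, not the paper's.

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def rank_features(tumor_feats, normal_feats, bins=32):
    """Rank features by KLD between their tumor/normal histograms."""
    scores = []
    for j in range(tumor_feats.shape[1]):
        lo = min(tumor_feats[:, j].min(), normal_feats[:, j].min())
        hi = max(tumor_feats[:, j].max(), normal_feats[:, j].max())
        p, _ = np.histogram(tumor_feats[:, j], bins, range=(lo, hi))
        q, _ = np.histogram(normal_feats[:, j], bins, range=(lo, hi))
        scores.append(kld(p.astype(float), q.astype(float)))
    return np.argsort(scores)[::-1]  # most discriminative first

rng = np.random.default_rng(1)
tumor = rng.normal([2.0, 0.0], 1.0, (500, 2))   # feature 0 is shifted
normal = rng.normal([0.0, 0.0], 1.0, (500, 2))
print(rank_features(tumor, normal))  # expect feature 0 ranked first
```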
Hybrid registration of PET/CT in thoracic region with pre-filtering PET sinogram
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Marhaban, M. H.; Nordin, A. J.; Hashim, S.
2015-11-01
The integration of physiological (PET) and anatomical (CT) images in cancer delineation requires an accurate spatial registration technique. Although hybrid PET/CT scanners are used to co-register these images, significant misregistrations exist due to patient and respiratory/cardiac motions. This paper proposes a hybrid feature-intensity based registration technique for hybrid PET/CT scanners. First, the simulated PET sinogram was filtered with a 3D hybrid mean-median filter before reconstructing the image. Features were then derived from the segmented structures (lung, heart and tumor) in both images. The registration was performed based on modified multi-modality demons registration with a multiresolution scheme. Apart from improvements in visual observation, the proposed registration technique increased the normalized mutual information (NMI) index between the PET/CT images after registration. All nine tested datasets show greater improvements in the mutual information (MI) index than the free-form deformation (FFD) registration technique, with the highest MI increase being 25%.
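The NMI index used for evaluation is computed from a joint histogram; a minimal sketch, assuming the common formulation NMI = (H(A) + H(B)) / H(A, B), follows.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), from the joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving_aligned = fixed + 0.05 * rng.random((64, 64))
moving_shifted = np.roll(moving_aligned, 8, axis=0)
print(normalized_mutual_information(fixed, moving_aligned))  # higher
print(normalized_mutual_information(fixed, moving_shifted))  # lower
```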
Multiclass feature selection for improved pediatric brain tumor segmentation
NASA Astrophysics Data System (ADS)
Ahmed, Shaheen; Iftekharuddin, Khan M.
2012-03-01
In our previous work, we showed that fractal-based texture features are effective in detection, segmentation and classification of posterior-fossa (PF) pediatric brain tumor in multimodality MRI. We exploited an information theoretic approach such as Kullback-Leibler Divergence (KLD) for feature selection and ranking different texture features. We further incorporated the feature selection technique with a segmentation method such as Expectation Maximization (EM) for segmentation of tumor (T) and non-tumor (NT) tissues. In this work, we extend the two-class KLD technique to multiclass for effectively selecting the best features for brain tumor (T), cyst (C) and non-tumor (NT). We further obtain segmentation robustness for each tissue type by computing Bayes posterior probabilities and the corresponding number of pixels for each tissue segment in MRI patient images. We evaluate improved tumor segmentation robustness using different similarity metrics for 5 patients in T1, T2 and FLAIR modalities.
Foundation and methodologies in computer-aided diagnosis systems for breast cancer detection.
Jalalian, Afsaneh; Mashohor, Syamsiah; Mahmud, Rozi; Karasfi, Babak; Saripan, M Iqbal B; Ramli, Abdul Rahman B
2017-01-01
Breast cancer is the most prevalent cancer affecting women all over the world. Early detection and treatment of breast cancer could reduce the mortality rate. Some issues, such as technical reasons related to imaging quality and human error, increase misdiagnosis of breast cancer by radiologists. Computer-aided detection systems (CADs) have been developed to overcome these restrictions and have been studied in many imaging modalities for breast cancer detection in recent years. CAD systems improve radiologists' performance in finding and discriminating between normal and abnormal tissues. These systems act only as a second reader; the final decisions are still made by the radiologist. In this study, recent CAD systems for breast cancer detection on different modalities such as mammography, ultrasound, MRI, and biopsy histopathological images are introduced. The foundation of CAD systems generally consists of four stages: pre-processing, segmentation, feature extraction, and classification. The approaches applied to design the different stages of a CAD system are summarised, and the advantages and disadvantages of different segmentation, feature extraction and classification techniques are listed. In addition, the impact of imbalanced datasets on classification outcomes and appropriate methods to solve these issues are discussed. Finally, performance evaluation metrics for the various stages of breast cancer detection CAD systems are reviewed.
Keywords image retrieval in historical handwritten Arabic documents
NASA Astrophysics Data System (ADS)
Saabni, Raid; El-Sana, Jihad
2013-01-01
A system is presented for spotting and searching keywords in handwritten Arabic documents. A slightly modified dynamic time warping algorithm is used to measure similarities between words. Two sets of features are generated from the outer contour of the words/word-parts. The first set is based on the angles between nodes on the contour and the second set is based on shape context features taken from the outer contour. To recognize a given word, the segmentation-free approach is partially adopted, i.e., continuous word parts are used as the basic alphabet instead of individual characters or complete words. Additional strokes, such as dots and detached short segments, are classified and used in a postprocessing step to determine the final comparison decision. The search for a keyword is performed by searching for its word parts in the correct order. The performance of the presented system was very encouraging in terms of efficiency and match rates. To evaluate the presented system, its performance is compared to three different systems. Unfortunately, there are no publicly available standard datasets with ground truth for testing Arabic keyword searching systems. Therefore, a private set of images, partially taken from the Juma'a Al-Majid Center in Dubai, is used for evaluation, while a slightly modified version of the IFN/ENIT database is used for training.
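The similarity measure at the heart of such a system is dynamic time warping; the sketch below implements the classic DTW recurrence on 1D contour-feature sequences. The paper's slight modifications to DTW are not reproduced here.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping distance between two 1D feature
    sequences (e.g., contour angles of word parts)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

query = np.array([0.1, 0.5, 0.9, 0.4])
candidate = np.array([0.1, 0.2, 0.5, 0.8, 0.9, 0.4])  # stretched version
print(dtw_distance(query, candidate))  # small value: a likely match
```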
Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network
2018-01-01
Skin lesions are a severe disease globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating a distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracies of our frameworks: 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3. PMID:29439500
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the most commonly utilized feature types in Electroencephalogram (EEG) studies, as they offer better resolution, smoother spectra and applicability to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal while higher orders increase noise. Conventional methods for estimating modeling order include the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond to the operator's thoughts more quickly and correctly. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resulting AR mixtures is assessed against several conventional methods utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
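A minimal numpy sketch of order selection is shown below: AR(p) models of several orders are fitted by least squares, scored by AIC, and blended with Akaike weights. The Akaike weighting is an illustrative stand-in for the evolutionary and ensemble mixing mechanisms studied in the paper.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns coefficients and residual variance."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid.var()

def aic(x, p):
    _, sigma2 = fit_ar(x, p)
    n = len(x) - p
    return n * np.log(sigma2) + 2 * p

rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, 2000):  # simulate a true AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

orders = range(1, 12)
scores = np.array([aic(x, p) for p in orders])
weights = np.exp(-0.5 * (scores - scores.min()))
weights /= weights.sum()  # Akaike weights: a simple "mixture" of orders
print("AIC-best order:", orders[int(scores.argmin())])
print("order weights:", np.round(weights, 3))
```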
Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method
Chen, Meizhu; Li, Jichun; Zhang, Encai
2017-01-01
As a nonintrusive method, retinal imaging provides us with a better way to diagnose ophthalmologic diseases. Extracting the vessel profile automatically from the retina image is an important step in analyzing retina images. A novel hybrid active contour model is proposed in this paper to segment the fundus image automatically. It combines the signed pressure force function introduced by the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model with the local intensity property introduced by the Local Binary Fitting (LBF) model to overcome the difficulty of low contrast in the segmentation process. It is more robust to the initial condition than traditional methods and is easily implemented compared to supervised vessel extraction methods. The proposed segmentation method was evaluated on two public datasets, DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina), with an average accuracy of 0.9390 (0.7358 sensitivity, 0.9680 specificity) on the DRIVE dataset and an average accuracy of 0.9409 (0.7449 sensitivity, 0.9690 specificity) on the STARE dataset. The experimental results show that our method is effective and also robust to some kinds of pathological images compared with traditional level set methods. PMID:28840122
Subcortical structure segmentation using probabilistic atlas priors
NASA Astrophysics Data System (ADS)
Gouttard, Sylvain; Styner, Martin; Joshi, Sarang; Smith, Rachel G.; Cody Hazlett, Heather; Gerig, Guido
2007-03-01
The segmentation of the subcortical structures of the brain is required for many forms of quantitative neuroanatomic analysis. The volumetric and shape parameters of structures such as lateral ventricles, putamen, caudate, hippocampus, pallidus and amygdala are employed to characterize a disease or its evolution. This paper presents a fully automatic segmentation of these structures via non-rigid registration of a probabilistic atlas prior, alongside a comprehensive validation. Our approach is based on an unbiased diffeomorphic atlas with probabilistic spatial priors built from a training set of MR images with corresponding manual segmentations. The atlas building computes an average image along with transformation fields mapping each training case to the average image. These transformation fields are applied to the manually segmented structures of each case in order to obtain a probabilistic map on the atlas. When applying the atlas for automatic structural segmentation, an MR image is first intensity inhomogeneity corrected, skull stripped and intensity calibrated to the atlas. Then the atlas image is registered to the image using an affine followed by a deformable registration matching the gray level intensity. Finally, the registration transformation is applied to the probabilistic maps of each structure, which are then thresholded at 0.5 probability. Using manual segmentations for comparison, measures of volumetric differences show high correlation with our results. Furthermore, the Dice coefficient, which quantifies the volumetric overlap, is higher than 62% for all structures and is close to 80% for the basal ganglia. The intraclass correlation coefficient computed on these same datasets shows good inter-method correlation of the volumetric measurements. Using a dataset of a single patient scanned 10 times on 5 different scanners, reliability is shown with a coefficient of variance of less than 2 percent over the whole dataset. Overall, these validation and reliability studies show that our method accurately and reliably segments almost all structures. Only the hippocampus and amygdala segmentations exhibit relatively low correlation with the manual segmentation in at least one of the validation studies, whereas they still show appropriate Dice overlap coefficients.
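The final steps, thresholding the warped probabilistic maps at 0.5 and validating with the Dice coefficient, are easy to sketch on toy data; the probability map below is synthetic, not an actual atlas.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy probabilistic map standing in for an atlas prior warped to a subject.
rng = np.random.default_rng(0)
prob_map = np.clip(rng.normal(0.5, 0.25, (32, 32, 32)), 0, 1)
auto_seg = prob_map >= 0.5      # threshold at 0.5 probability
manual_seg = prob_map >= 0.45   # stand-in for a manual rater
print(f"Dice = {dice(auto_seg, manual_seg):.3f}")
```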
PCA feature extraction for change detection in multidimensional unlabeled data.
Kuncheva, Ludmila I; Faithfull, William J
2014-01-01
When classifiers are deployed in real-world applications, it is assumed that the distribution of the incoming data matches the distribution of the data used to train the classifier. This assumption is often incorrect, which necessitates some form of change detection or adaptive classification. While there has been a lot of work on change detection based on the classification error monitored over the course of the operation of the classifier, finding changes in multidimensional unlabeled data is still a challenge. Here, we propose to apply principal component analysis (PCA) for feature extraction prior to the change detection. Supported by a theoretical example, we argue that the components with the lowest variance should be retained as the extracted features because they are more likely to be affected by a change. We chose a recently proposed semiparametric log-likelihood change detection criterion that is sensitive to changes in both mean and variance of the multidimensional distribution. An experiment with 35 datasets and an illustration with a simple video segmentation demonstrate the advantage of using extracted features compared to raw data. Further analysis shows that feature extraction through PCA is beneficial, specifically for data with multiple balanced classes.
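A small sketch of the idea on synthetic data: fit PCA on training data, keep the lowest-variance component, and score new windows by the shift of that component. The simple z-score statistic here is a stand-in for the paper's semiparametric log-likelihood criterion.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
train = rng.normal(0.0, [3.0, 1.0, 0.1], (2000, 3))  # unequal feature variances

pca = PCA().fit(train)
minor_train = pca.transform(train)[:, -1]  # keep the LOWEST-variance component
mu, sd = minor_train.mean(), minor_train.std()

def change_score(window):
    """Mean shift of the minor PCA component in training s.d. units
    (a simple stand-in for the paper's log-likelihood criterion)."""
    minor = pca.transform(window)[:, -1]
    return abs((minor - mu).mean()) / sd

stable = rng.normal(0.0, [3.0, 1.0, 0.1], (200, 3))
changed = stable + np.array([0.0, 0.0, 0.5])  # shift along the minor axis
print(change_score(stable), change_score(changed))  # second is far larger
```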
Blood vessel-based liver segmentation through the portal phase of a CT dataset
NASA Astrophysics Data System (ADS)
Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Moriyama, Noriyuki; Utsunomiya, Toru; Shimada, Mitsuo
2013-02-01
Blood vessels are dispersed throughout the human body's organs and carry unique information for each person. This information can be used to delineate organ boundaries. The proposed method relies on abdominal blood vessels (ABV) to segment the liver, considering the potential presence of tumors, through the portal phase of a CT dataset. ABV are extracted and classified into hepatic (HBV) and non-hepatic (non-HBV) with a small number of interactions. HBV and non-HBV are used to guide an automatic segmentation of the liver. HBV are used to individually segment the core region of the liver. This region and non-HBV are used to construct a boundary surface between the liver and other organs to separate them. The core region is classified, based on posterior distributions extracted from its histogram, into low intensity tumor (LIT) and non-LIT core regions. The non-LIT case includes the normal part of the liver, HBV, and high intensity tumors if they exist. Each core region is extended based on its corresponding posterior distribution. Extension is complete when it reaches either a variation in intensity or the constructed boundary surface. The method was applied to 80 datasets (30 Medical Image Computing and Computer Assisted Intervention (MICCAI) and 50 non-MICCAI data) including 60 datasets with tumors. Our results for the MICCAI test data were evaluated by sliver07 [1] with an overall score of 79.7, which ranks seventh best on the site (December 2013). This approach seems to be a promising method for extraction of liver volumetry of various shapes and sizes and of low intensity hepatic tumors.
Quantifying the tibiofemoral joint space using x-ray tomosynthesis.
Kalinosky, Benjamin; Sabol, John M; Piacsek, Kelly; Heckel, Beth; Gilat Schmidt, Taly
2011-12-01
Digital x-ray tomosynthesis (DTS) has the potential to provide 3D information about the knee joint in a load-bearing posture, which may improve diagnosis and monitoring of knee osteoarthritis compared with projection radiography, the current standard of care. Manually quantifying and visualizing the joint space width (JSW) from 3D tomosynthesis datasets may be challenging. This work developed a semiautomated algorithm for quantifying the 3D tibiofemoral JSW from reconstructed DTS images. The algorithm was validated through anthropomorphic phantom experiments and applied to three clinical datasets. A user-selected volume of interest within the reconstructed DTS volume was enhanced with 1D multiscale gradient kernels. The edge-enhanced volumes were divided by polarity into tibial and femoral edge maps and combined across kernel scales. A 2D connected components algorithm was performed to determine candidate tibial and femoral edges. A 2D JSW map was constructed to represent the 3D tibiofemoral joint space. To quantify the algorithm accuracy, an adjustable knee phantom was constructed, and eleven posterior-anterior (PA) and lateral DTS scans were acquired with the medial minimum JSW of the phantom set to 0-5 mm in 0.5 mm increments (VolumeRad™, GE Healthcare, Chalfont St. Giles, United Kingdom). The accuracy of the algorithm was quantified by comparing the minimum JSW in a region of interest in the medial compartment of the JSW map to the measured phantom setting for each trial. In addition, the algorithm was applied to DTS scans of a static knee phantom and the JSW map was compared to values estimated from a manually segmented computed tomography (CT) dataset. The algorithm was also applied to three clinical DTS datasets of osteoarthritic patients. The algorithm segmented the JSW and generated a JSW map for all phantom and clinical datasets. For the adjustable phantom, the estimated minimum JSW values were plotted against the measured values for all trials. A linear fit estimated a slope of 0.887 (R² = 0.962) and a mean error across all trials of 0.34 mm for the PA phantom data. The estimated minimum JSW values for the lateral adjustable phantom acquisitions were found to have low correlation to the measured values (R² = 0.377), with a mean error of 2.13 mm. The error in the lateral adjustable-phantom datasets appeared to be caused by artifacts due to unrealistic features in the phantom bones. JSW maps generated by DTS and CT varied by a mean of 0.6 mm and 0.8 mm across the knee joint, for PA and lateral scans. The tibial and femoral edges were successfully segmented and JSW maps determined for PA and lateral clinical DTS datasets. A semiautomated method is presented for quantifying the 3D joint space in a 2D JSW map using tomosynthesis images. The proposed algorithm quantified the JSW across the knee joint to sub-millimeter accuracy for PA tomosynthesis acquisitions. Overall, the results suggest that x-ray tomosynthesis may be beneficial for diagnosing and monitoring disease progression or treatment of osteoarthritis by providing quantitative images of JSW in the load-bearing knee.
Large-Scale Image Analytics Using Deep Learning
NASA Astrophysics Data System (ADS)
Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.
2014-12-01
High resolution land cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) land cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g., continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agricultural Imaging Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. This data comes as image tiles (a total of a quarter million image scenes with ~60 million pixels) and has a total size of ~100 terabytes for a single acquisition. Features extracted from the entire dataset would amount to ~8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. In order to perform image analytics in such a granular system, it is mandatory to devise an intelligent archiving and query system for image retrieval, file structuring, metadata processing and filtering of all available image scenes. Using the Open NASA Earth Exchange (NEX) initiative, which is a partnership with Amazon Web Services (AWS), we have developed an end-to-end architecture for designing the database and the deep belief network (following the DistBelief computing model) to solve the grand challenge of scaling this process across the quarter million NAIP tiles that cover the entire Continental United States. The AWS core components that we use to solve this problem are DynamoDB along with S3 for database query and storage, the ElastiCache shared memory architecture for image segmentation, Elastic MapReduce (EMR) for image feature extraction, and the memory-optimized Elastic Cloud Compute (EC2) for the learning algorithm.
Semi-supervised Learning of Feature Hierarchies for Object Detection in a Video (Open Access)
2013-10-03
dataset: PETS2009 dataset, Oxford Town Center dataset [3], PNNL Parking Lot datasets [25] and CAVIAR cols1 dataset [1] for human detection. Besides, we ... level features from TownCenter, ParkingLot, PETS09 and CAVIAR. As we can see, the four sets of features are visually very different from each other ... information is more distinguished for detecting a person in the TownCenter than CAVIAR. Comparing figure 5(a) with 6(a), interestingly, the color ...
Gary, Robin H.; Wilson, Zachary D.; Archuleta, Christy-Ann M.; Thompson, Florence E.; Vrabel, Joseph
2009-01-01
During 2006-09, the U.S. Geological Survey, in cooperation with the National Atlas of the United States, produced a 1:1,000,000-scale (1:1M) hydrography dataset comprising streams and waterbodies for the entire United States, including Puerto Rico and the U.S. Virgin Islands, for inclusion in the recompiled National Atlas. This report documents the methods used to select, simplify, and refine features in the 1:100,000-scale (1:100K) (1:63,360-scale in Alaska) National Hydrography Dataset to create the national 1:1M hydrography dataset. Custom tools and semi-automated processes were created to facilitate generalization of the 1:100K National Hydrography Dataset (1:63,360-scale in Alaska) to 1:1M on the basis of existing small-scale hydrography datasets. The first step in creating the new 1:1M dataset was to address feature selection and optimal data density in the streams network. Several existing methods were evaluated. The production method that was established for selecting features for inclusion in the 1:1M dataset uses a combination of the existing attributes and network in the National Hydrography Dataset and several of the concepts from the methods evaluated. The process for creating the 1:1M waterbodies dataset required a similar approach to that used for the streams dataset. Geometric simplification of features was the next step. Stream reaches and waterbodies indicated in the feature selection process were exported as new feature classes and then simplified using a geographic information system tool. The final step was refinement of the 1:1M streams and waterbodies. Refinement was done through the use of additional geographic information system tools.
Benchmark for license plate character segmentation
NASA Astrophysics Data System (ADS)
Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson
2016-09-01
Automatic license plate recognition (ALPR) has been the focus of much research in past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of ALPR is the license plate character segmentation (LPCS) step, because its effectiveness must be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of ALPR, within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2,000 Brazilian license plates consisting of 14,000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation of the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving accurate OCR.
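The plain Jaccard coefficient of two bounding boxes is standard; the sketch below also adds a hypothetical centroid-penalized variant to illustrate the flavor of a location-aware measure. The exact Jaccard-centroid formula is the paper's own and is not reproduced here.

```python
import numpy as np

def jaccard(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter)

def centroid(b):
    return np.array([(b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0])

def jaccard_centroid(pred, gt):
    """Hypothetical centroid-weighted Jaccard: IoU discounted by the
    normalized centroid offset (illustrative, not the paper's formula)."""
    diag = np.hypot(gt[2] - gt[0], gt[3] - gt[1])
    offset = np.linalg.norm(centroid(pred) - centroid(gt)) / diag
    return jaccard(pred, gt) * max(0.0, 1.0 - offset)

gt = (10, 10, 50, 90)      # a license plate character box
pred = (14, 10, 54, 90)    # same size, shifted right
print(jaccard(pred, gt), jaccard_centroid(pred, gt))
```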
Chest wall segmentation in automated 3D breast ultrasound scans.
Tan, Tao; Platel, Bram; Mann, Ritse M; Huisman, Henkjan; Karssemeijer, Nico
2013-12-01
In this paper, we present an automatic method to segment the chest wall in automated 3D breast ultrasound images. Determining the location of the chest wall in automated 3D breast ultrasound images is necessary in computer-aided detection systems to remove automatically detected cancer candidates beyond the chest wall, and it can be of great help for inter- and intra-modal image registration. We show that the visible part of the chest wall in an automated 3D breast ultrasound image can be accurately modeled by a cylinder. We fit the surface of our cylinder model to a set of automatically detected rib-surface points. The detection of the rib-surface points is done by a classifier using features representing local image intensity patterns and presence of rib shadows. Due to attenuation of the ultrasound signal, a clear shadow is visible behind the ribs. Evaluation of our segmentation method is done by computing the distance of manually annotated rib points to the surface of the automatically detected chest wall. We examined the performance on images obtained with the two most common 3D breast ultrasound devices on the market. In a dataset of 142 images, the average mean distance of the annotated points to the segmented chest wall was 5.59 ± 3.08 mm. Copyright © 2012 Elsevier B.V. All rights reserved.
2D/3D fetal cardiac dataset segmentation using a deformable model.
Dindoyal, Irving; Lambrou, Tryphon; Deng, Jing; Todd-Pokropek, Andrew
2011-07-01
To segment the fetal heart in order to facilitate 3D assessment of cardiac function and structure. Ultrasound acquisition typically results in drop-out artifacts of the chamber walls. The authors outline a level set deformable model to automatically delineate the small fetal cardiac chambers. The level set is penalized from growing into an adjacent cardiac compartment using a novel collision detection term. The region-based model allows simultaneous segmentation of all four cardiac chambers from a user-defined seed point placed in each chamber. The segmented boundaries are automatically penalized from intersecting at walls with signal dropout. Root mean square errors of the perpendicular distances between the algorithm's delineation and manual tracings are within 2 mm, which is less than 10% of the length of a typical fetal heart. The ejection fractions were determined from the 3D datasets. We validate the algorithm using a physical phantom and obtain volumes that are comparable to physically determined values; the algorithm segments volumes with an error of within 13% as determined using the phantom. Our original work in fetal cardiac segmentation compares automatic and manual tracings to a physical phantom and also measures inter-observer variation.
Robust pulmonary lobe segmentation against incomplete fissures
NASA Astrophysics Data System (ADS)
Gu, Suicheng; Zheng, Qingfeng; Siegfried, Jill; Pu, Jiantao
2012-03-01
As important anatomical landmarks of the human lung, accurately segmented lobes may be useful for characterizing specific lung diseases (e.g., inflammatory, granulomatous, and neoplastic diseases). A number of investigations have shown that pulmonary fissures are often incompletely depicted in images, thereby making the computerized identification of individual lobes a challenging task. Our purpose is to develop a fully automated algorithm for accurate identification of individual lobes regardless of the integrity of the pulmonary fissures. The underlying idea of the developed lobe segmentation scheme is to use piecewise planes to approximate the detected fissures. After a rotation and a global smoothing, a number of small planes are fitted using local fissure points. The local surfaces are finally combined for lobe segmentation using a quadratic B-spline weighting strategy to ensure that the segmentation is smooth. The performance of the developed scheme was assessed by comparison with a manually created reference standard on a dataset of 30 lung CT examinations. These examinations covered a number of lung diseases and were selected from a large chronic obstructive pulmonary disease (COPD) dataset. The results indicate that our lobe segmentation scheme is efficient and accurate in the presence of incomplete fissures.
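The piecewise plane-fitting step can be sketched with ordinary least squares: split the detected fissure points into local patches and fit one small plane per patch. The B-spline blending that smooths the combined surface is omitted here, and the fissure points are synthetic.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = ax + by + c through 3D fissure points."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coef  # (a, b, c)

rng = np.random.default_rng(0)
xy = rng.uniform(0, 40, (300, 2))
z = 0.15 * xy[:, 0] - 0.05 * xy[:, 1] + 12 + rng.normal(0, 0.3, 300)
pts = np.column_stack([xy, z])

# Piecewise fit: one small plane per local patch of fissure points.
for patch in np.array_split(pts[pts[:, 0].argsort()], 4):
    print(np.round(fit_plane(patch), 3))
```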
Microaneurysm detection with radon transform-based classification on retina images.
Giancardo, L; Meriaudeau, F; Karnowski, T P; Li, Y; Tobin, K W; Chaum, E
2011-01-01
The creation of automatic diabetic retinopathy screening systems using retina cameras is currently receiving considerable interest in the medical imaging community. The detection of microaneurysms is a key element in this effort. In this work, we propose a new microaneurysm segmentation technique based on a novel application of the Radon transform, which is able to identify these lesions without any previous knowledge of the retina's morphological features and with minimal image preprocessing. The algorithm has been evaluated on the Retinopathy Online Challenge public dataset, and its performance is comparable with the best current techniques. The performance is particularly good at low false positive ratios, which makes it an ideal candidate for diabetic retinopathy screening systems.
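The Radon transform underlying such a descriptor is available in scikit-image; a toy sketch on a synthetic dark blob is shown below. The actual lesion features of the paper are not reproduced; the comments only illustrate why projections discriminate compact blobs from elongated vessels.

```python
import numpy as np
from skimage.transform import radon

# Toy fundus patch: a dark, roughly circular microaneurysm-like blob.
patch = np.ones((32, 32))
yy, xx = np.mgrid[:32, :32]
patch[(yy - 16) ** 2 + (xx - 16) ** 2 < 9] = 0.2

theta = np.linspace(0.0, 180.0, 36, endpoint=False)
sinogram = radon(patch, theta=theta, circle=False)  # one projection per angle

# A compact blob produces a consistent dip across ALL angles, whereas an
# elongated vessel only dips at angles perpendicular to its axis.
profile_spread = sinogram.min(axis=0).std()
print(sinogram.shape, round(float(profile_spread), 4))
```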
Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram
2016-01-01
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321
Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam
2012-01-15
Feature selection is a key concept in machine learning for microarray datasets, where features represented by probesets are typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms in handling very high-dimensional datasets beyond a hundred thousand features, such as those produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, allowing clinicians to make more informed decisions with regard to patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution, and its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior, predictive classification performance compared to that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.
NASA Astrophysics Data System (ADS)
Patel, Ajay; van de Leemput, Sil C.; Prokop, Mathias; van Ginneken, Bram; Manniesing, Rashindra
2017-03-01
Segmentation of anatomical structures is fundamental in the development of computer aided diagnosis systems for cerebral pathologies. Manual annotations are laborious, time consuming and subject to human error and observer variability. Accurate quantification of cerebrospinal fluid (CSF) can be employed as a morphometric measure for diagnosis and patient outcome prediction. However, segmenting CSF in non-contrast CT images is complicated by low soft tissue contrast and image noise. In this paper we propose a state-of-the-art method using a multi-scale three-dimensional (3D) fully convolutional neural network (CNN) to automatically segment all CSF within the cranial cavity. The method is trained on a small dataset comprised of four manually annotated cerebral CT images. Quantitative evaluation of a separate test dataset of four images shows a mean Dice similarity coefficient of 0.87 +/- 0.01 and mean absolute volume difference of 4.77 +/- 2.70 %. The average prediction time was 68 seconds. Our method allows for fast and fully automated 3D segmentation of cerebral CSF in non-contrast CT, and shows promising results despite a limited amount of training data.
Chiu, Stephanie J; Toth, Cynthia A; Bowes Rickman, Catherine; Izatt, Joseph A; Farsiu, Sina
2012-05-01
This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique.
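The transform's effect, a closed contour around a seed point becoming a near-horizontal line, can be illustrated with a plain polar resampling; this is a simplification of the quasi-polar transform used in the paper, on a synthetic ring image.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_unwrap(img, center, n_radii=40, n_angles=90):
    """Resample an image on (radius, angle) rays from `center`, so a
    closed contour around the center becomes a near-horizontal line."""
    radii = np.linspace(0, min(img.shape) / 2 - 1, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    rows = center[0] + r * np.sin(a)
    cols = center[1] + r * np.cos(a)
    return map_coordinates(img, [rows, cols], order=1)

# Toy cell: a bright ring (membrane) around a seed point.
yy, xx = np.mgrid[:64, :64]
rad = np.hypot(yy - 32, xx - 32)
img = np.exp(-((rad - 12) ** 2) / 4.0)

unwrapped = polar_unwrap(img, (32, 32))
print(unwrapped.shape, unwrapped.argmax(axis=0)[:5])  # ring -> flat "layer"
```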
Automated analysis of brain activity for seizure detection in zebrafish models of epilepsy.
Hunyadi, Borbála; Siekierska, Aleksandra; Sourbron, Jo; Copmans, Daniëlle; de Witte, Peter A M
2017-08-01
Epilepsy is a chronic neurological condition, with over 30% of cases unresponsive to treatment. Zebrafish larvae show great potential to serve as an animal model of epilepsy in drug discovery. Thanks to their high fecundity and relatively low cost, they are amenable to high-throughput screening. However, the assessment of seizure occurrences in zebrafish larvae remains a bottleneck, as visual analysis is subjective and time-consuming. For the first time, we present an automated algorithm to detect epileptic discharges in single-channel local field potential (LFP) recordings in zebrafish. First, candidate seizure segments are selected based on their energy and length. Afterwards, discriminative features are extracted from each segment. Using a labeled dataset, a support vector machine (SVM) classifier is trained to learn an optimal feature mapping. Finally, this SVM classifier is used to detect seizure segments in new signals. We tested the proposed algorithm both in a chemically-induced seizure model and a genetic epilepsy model. In both cases, the algorithm delivered similar results to visual analysis and found a significant difference in the number of seizures between the epileptic and control groups. Direct comparison with multichannel techniques or methods developed for different animal models is not feasible. Nevertheless, a literature review shows that our algorithm outperforms state-of-the-art techniques in terms of accuracy, precision and specificity, while maintaining a reasonable sensitivity. Our seizure detection system is a generic, time-saving and objective method to analyze zebrafish LFP recordings, which can replace visual analysis and facilitate true high-throughput studies. Copyright © 2017 Elsevier B.V. All rights reserved.
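A condensed sketch of the pipeline, energy-based candidate selection followed by SVM classification on per-segment features, is given below; the feature set, thresholds and synthetic signals are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC

def features(seg):
    """Per-segment features (illustrative choice, not the paper's set)."""
    return [seg.std(), np.abs(np.diff(seg)).mean(), np.ptp(seg)]

def candidate_segments(lfp, fs, win_s=1.0, factor=3.0):
    """Keep windows whose energy exceeds a multiple of the median energy."""
    win = int(win_s * fs)
    segs = [lfp[i:i + win] for i in range(0, len(lfp) - win + 1, win)]
    energy = np.array([np.sum(s ** 2) for s in segs])
    return [s for s, e in zip(segs, energy) if e > factor * np.median(energy)]

rng = np.random.default_rng(0)
fs = 250
make_bg = lambda: rng.normal(0, 1, fs)
make_sz = lambda: rng.normal(0, 1, fs) + 5 * np.sin(
    2 * np.pi * 8 * np.arange(fs) / fs)

# Labeled training set: 0 = background, 1 = seizure-like discharge.
train = [make_bg() for _ in range(50)] + [make_sz() for _ in range(50)]
labels = np.r_[np.zeros(50), np.ones(50)]
clf = SVC().fit([features(s) for s in train], labels)

# New recording with one embedded discharge.
lfp = rng.normal(0, 1, 30 * fs)
lfp[10 * fs:11 * fs] += 5 * np.sin(2 * np.pi * 8 * np.arange(fs) / fs)
for seg in candidate_segments(lfp, fs):
    print("seizure" if clf.predict([features(seg)])[0] else "background")
```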
Segmentation-less Digital Rock Physics
NASA Astrophysics Data System (ADS)
Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.
2017-12-01
In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate the physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could save part of the time and resources that are allocated to performing complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks such as hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and porosity that are measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve the accuracy and ease the overall workflow of DRP.
Consistent cortical reconstruction and multi-atlas brain segmentation.
Huo, Yuankai; Plassard, Andrew J; Carass, Aaron; Resnick, Susan M; Pham, Dzung L; Prince, Jerry L; Landman, Bennett A
2016-09-01
Whole brain segmentation and cortical surface reconstruction are two essential techniques for investigating the human brain. Because these two tasks are typically conducted independently of each other, spatial inconsistencies can result that hinder further integrated analyses of brain structure. FreeSurfer obtains self-consistent whole brain segmentations and cortical surfaces. It starts with subcortical segmentation, then carries out cortical surface reconstruction, and ends with cortical segmentation and labeling. However, this "segmentation to surface to parcellation" strategy has shown limitations in various cohorts, such as older populations with large ventricles. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. A modification called MaCRUISE(+) is designed to perform well when white matter lesions are present. Compared to the benchmarks CRUISE and FreeSurfer, the surface accuracy of MaCRUISE and MaCRUISE(+) is validated using two independent datasets with expertly placed cortical landmarks. A third independent dataset with expertly delineated volumetric labels is employed to compare segmentation performance. Finally, 200 MR volumetric images from an older adult sample are used to assess the robustness of MaCRUISE and FreeSurfer. The advantages of MaCRUISE are: (1) MaCRUISE constructs self-consistent voxelwise segmentations and cortical surfaces, while MaCRUISE(+) is robust to white matter pathology. (2) MaCRUISE achieves more accurate whole brain segmentations than independently conducting the multi-atlas segmentation. (3) MaCRUISE is comparable in accuracy to FreeSurfer (when FreeSurfer does not exhibit global failures) while achieving greater robustness across an older adult population. MaCRUISE has been made freely available in open source. Copyright © 2016 Elsevier Inc. All rights reserved.
Freiman, Moti; Nickisch, Hannes; Prevrhal, Sven; Schmitt, Holger; Vembar, Mani; Maurovich-Horvat, Pál; Donnelly, Patrick; Goshen, Liran
2017-03-01
The goal of this study was to assess the potential added benefit of accounting for partial volume effects (PVE) in an automatic coronary lumen segmentation algorithm that is used to determine the hemodynamic significance of a coronary artery stenosis from coronary computed tomography angiography (CCTA). Two sets of data were used in our work: (a) multivendor CCTA datasets of 18 subjects from the MICCAI 2012 challenge with automatically generated centerlines and 3 reference segmentations of 78 coronary segments and (b) additional CCTA datasets of 97 subjects with 132 coronary lesions that had invasive reference standard FFR measurements. We extracted the coronary artery centerlines for the 97 datasets by an automated software program followed by manual correction if required. An automatic machine-learning-based algorithm segmented the coronary tree with and without accounting for the PVE. We obtained CCTA-based FFR measurements using a flow simulation in the coronary trees that were generated by the automatic algorithm with and without accounting for PVE. We assessed the potential added value of PVE integration as a part of the automatic coronary lumen segmentation algorithm by means of segmentation accuracy using the MICCAI 2012 challenge framework and by means of flow simulation overall accuracy, sensitivity, specificity, negative and positive predictive values, and the receiver operating characteristic (ROC) area under the curve. We also evaluated the potential benefit of accounting for PVE in automatic segmentation for flow simulation for lesions that were diagnosed as obstructive based on CCTA, which could have indicated a need for an invasive exam and revascularization. Our segmentation algorithm improves the maximal surface distance error by ~39% compared to a previously published method on the 18 datasets from the MICCAI 2012 challenge, with comparable Dice and mean surface distance. Results with and without accounting for PVE were comparable. In contrast, integrating PVE analysis into an automatic coronary lumen segmentation algorithm improved the flow simulation specificity from 0.6 to 0.68 with the same sensitivity of 0.83. Also, accounting for PVE improved the area under the ROC curve for detecting hemodynamically significant CAD from 0.76 to 0.8 compared to automatic segmentation without PVE analysis, with an invasive FFR threshold of 0.8 as the reference standard. Accounting for PVE in flow simulation to support the detection of hemodynamically significant disease in CCTA-based obstructive lesions improved specificity from 0.51 to 0.73 with the same sensitivity of 0.83, and the area under the curve from 0.69 to 0.79. The improvement in the AUC was statistically significant (N = 76, DeLong's test, P = 0.012). Accounting for the partial volume effects in automatic coronary lumen segmentation algorithms has the potential to improve the accuracy of CCTA-based hemodynamic assessment of coronary artery lesions. © 2017 American Association of Physicists in Medicine.
AISLE: an automatic volumetric segmentation method for the study of lung allometry.
Ren, Hongliang; Kazanzides, Peter
2011-01-01
We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. Preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of a volumetric overlap metric, by comparison with ground-truth segmentation performed by a radiologist.
WRIST: A WRist Image Segmentation Toolkit for carpal bone delineation from MRI.
Foster, Brent; Joshi, Anand A; Borgese, Marissa; Abdelhafez, Yasser; Boutin, Robert D; Chaudhari, Abhijit J
2018-01-01
Segmentation of the carpal bones from 3D imaging modalities, such as magnetic resonance imaging (MRI), is commonly performed for in vivo analysis of wrist morphology, kinematics, and biomechanics. This crucial task is typically carried out manually and is labor intensive, time consuming, subject to high inter- and intra-observer variability, and may result in topologically incorrect surfaces. We present a method, the WRist Image Segmentation Toolkit (WRIST), for 3D semi-automated, rapid segmentation of the carpal bones of the wrist from MRI. In our method, the boundaries of the bones are iteratively found using prior known anatomical constraints and a shape-detection level set. The parameters of the method were optimized using a training dataset of 48 manually segmented carpal bones and evaluated on 112 carpal bones from both healthy participants without known wrist conditions and participants with thumb basilar osteoarthritis (OA). Manual segmentation by two expert human observers was considered as the reference. On the healthy subject dataset we obtained a Dice overlap of 93.0 ± 3.8%, a Jaccard index of 87.3 ± 6.2%, and a Hausdorff distance of 2.7 ± 3.4 mm, while on the OA dataset we obtained a Dice overlap of 90.7 ± 8.6%, a Jaccard index of 83.0 ± 10.6%, and a Hausdorff distance of 4.0 ± 4.4 mm. The short computational time of 20.8 s per bone (or 5.1 s per bone in the parallelized version) and the high agreement with the expert observers give WRIST the potential to be utilized in musculoskeletal research. Copyright © 2017 Elsevier Ltd. All rights reserved.
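The Dice overlap, Jaccard index, and Hausdorff distance reported above are standard metrics and straightforward to reproduce; a minimal sketch (helper names are ours) follows.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def jaccard(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

    def hausdorff(points_a, points_b):
        # Symmetric Hausdorff distance between two point sets (e.g. surface
        # voxel coordinates); multiply indices by voxel spacing to get mm.
        return max(directed_hausdorff(points_a, points_b)[0],
                   directed_hausdorff(points_b, points_a)[0])

    seg = np.zeros((10, 10), bool); seg[2:7, 2:7] = True
    ref = np.zeros((10, 10), bool); ref[3:8, 3:8] = True
    print(dice(seg, ref), jaccard(seg, ref))
    print(hausdorff(np.argwhere(seg), np.argwhere(ref)))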
Kargarfard, Fatemeh; Sami, Ashkan; Mohammadi-Dehcheshmeh, Manijeh; Ebrahimie, Esmaeil
2016-11-16
Recent (2013 and 2009) zoonotic transmissions of avian or porcine influenza to humans highlight an increase in host range through the evasion of species barriers. Gene reassortment or antigenic shift between viruses from two or more hosts can generate a new life-threatening virus when the shuffled virus is no longer recognized by antibodies existing within human populations. There has been no large-scale study to help understand the underlying mechanisms of host transmission. Furthermore, there is no clear understanding of how different segments of the influenza genome contribute to the final determination of host range. To obtain insight into the rules underpinning host range determination, various supervised machine learning algorithms were employed to mine reassortment changes in different viral segments in a range of hosts. Our multi-host dataset contained the whole segments of 674 influenza strains organized into three host categories: avian, human, and swine. Some of the sequences were assigned to multiple hosts; the dataset is therefore multi-labeled, and we utilized a multi-label learning method to identify discriminative sequence sites. Algorithms such as CBA, Ripper, and decision trees were then applied to extract informative and descriptive association rules for each viral protein segment. We found informative rules in all segments that are common within the same host class but vary between hosts. For example, for infection of an avian host, HA14V and NS1230S were the most important discriminative and combinatorial positions. Host range identification is facilitated by high-support combined rules in this study. Our major goal was to detect discriminative genomic positions that are able to identify multi-host viruses, because such viruses are likely to cause pandemics or disastrous epidemics.
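As a toy illustration of mining host-discriminative positions (not the CBA/Ripper pipeline itself), one can one-hot encode aligned residues and read rule-like paths off a shallow decision tree; the four-strain example below is invented, and OneHotEncoder's sparse_output argument assumes scikit-learn >= 1.2.

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy aligned protein segment: rows are strains, columns are residue
    # positions; labels are host classes. The real study used whole
    # segments of 674 strains.
    X_res = np.array([["V", "S"], ["V", "S"], ["A", "T"], ["A", "S"]])
    y = ["avian", "avian", "human", "swine"]

    enc = OneHotEncoder(sparse_output=False)
    X = enc.fit_transform(X_res)
    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    # The printed paths read as association-rule-like conditions such as
    # "position 0 == V -> avian", analogous to rules like HA14V above.
    print(export_text(tree, feature_names=list(enc.get_feature_names_out())))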
Automatic multi-organ segmentation using learning-based segmentation and level set optimization.
Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin
2011-01-01
We present a novel generic segmentation system for the fully automatic multi-organ segmentation from CT medical images. We combine the advantages of learning-based approaches that use a point cloud-based shape representation, such as speed, robustness, and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys, we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. The level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.
Vigneault, Davis M; Xie, Weidi; Ho, Carolyn Y; Bluemke, David A; Noble, J Alison
2018-05-22
Pixelwise segmentation of the left ventricular (LV) myocardium and the four cardiac chambers in 2-D steady state free precession (SSFP) cine sequences is an essential preprocessing step for a wide range of analyses. Variability in contrast, appearance, orientation, and placement of the heart between patients, clinical views, scanners, and protocols makes fully automatic semantic segmentation a notoriously difficult problem. Here, we present Ω-Net (Omega-Net): a novel convolutional neural network (CNN) architecture for simultaneous localization, transformation into a canonical orientation, and semantic segmentation. First, an initial segmentation is performed on the input image; second, the features learned during this initial segmentation are used to predict the parameters needed to transform the input image into a canonical orientation; and third, a final segmentation is performed on the transformed image. In this work, Ω-Nets of varying depths were trained to detect five foreground classes in any of three clinical views (short axis, SA; four-chamber, 4C; two-chamber, 2C), without prior knowledge of the view being segmented. This constitutes a substantially more challenging problem compared with prior work. The architecture was trained using three-fold cross-validation on a cohort of patients with hypertrophic cardiomyopathy (HCM, N=42) and healthy control subjects (N=21). Network performance, as measured by weighted foreground intersection-over-union (IoU), was substantially improved for the best-performing Ω-Net compared with U-Net segmentation without localization or orientation (0.858 vs 0.834). In addition, to be comparable with other works, Ω-Net was retrained from scratch using five-fold cross-validation on the publicly available 2017 MICCAI Automated Cardiac Diagnosis Challenge (ACDC) dataset. The Ω-Net outperformed the state-of-the-art method in segmentation of the LV and RV blood pools, and performed slightly worse in segmentation of the LV myocardium. We conclude that this architecture represents a substantive advancement over prior approaches, with implications for biomedical image segmentation more generally. Published by Elsevier B.V.
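The weighted foreground IoU metric is not fully specified in the abstract; one plausible reading, averaging per-class IoU over the foreground classes weighted by class frequency, is sketched below with a hypothetical helper and toy arrays.

    import numpy as np

    def weighted_foreground_iou(pred, target, n_classes):
        """Mean IoU over foreground classes 1..n_classes-1, weighted by
        the number of target pixels per class (class 0 = background)."""
        ious, weights = [], []
        for k in range(1, n_classes):
            p, t = pred == k, target == k
            union = np.logical_or(p, t).sum()
            if union == 0:
                continue  # class absent from both prediction and target
            ious.append(np.logical_and(p, t).sum() / union)
            weights.append(t.sum())
        return np.average(ious, weights=weights)

    pred   = np.array([[0, 1, 1], [2, 2, 0]])
    target = np.array([[0, 1, 1], [2, 0, 0]])
    print(weighted_foreground_iou(pred, target, n_classes=3))  # ~0.833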
The first MICCAI challenge on PET tumor segmentation.
Hatt, Mathieu; Laurent, Baptiste; Ouahabi, Anouar; Fayad, Hadi; Tan, Shan; Li, Laquan; Lu, Wei; Jaouen, Vincent; Tauber, Clovis; Czakon, Jakub; Drapejkowski, Filip; Dyrka, Witold; Camarasu-Pop, Sorina; Cervenansky, Frédéric; Girard, Pascal; Glatard, Tristan; Kain, Michael; Yao, Yao; Barillot, Christian; Kirov, Assen; Visvikis, Dimitris
2018-02-01
Automatic functional volume segmentation in PET images is a challenge that has been addressed using a large array of methods. A major limitation for the field has been the lack of a benchmark dataset that would allow direct comparison of the results in the various publications. In the present work, we describe a comparison of recent methods on a large dataset following recommendations by the American Association of Physicists in Medicine (AAPM) task group (TG) 211, which was carried out within a MICCAI (Medical Image Computing and Computer Assisted Intervention) challenge. Organization and funding were provided by France Life Imaging (FLI). A dataset of 176 images combining simulated, phantom and clinical images was assembled. A website allowed the participants to register and download training data (n = 19). Challengers then submitted encapsulated pipelines on an online platform that autonomously ran the algorithms on the testing data (n = 157) and evaluated the results. The methods were ranked according to the arithmetic mean of sensitivity and positive predictive value. Sixteen teams registered but only four provided manuscripts and pipeline(s) for a total of 10 methods. In addition, results using two thresholds and the Fuzzy Locally Adaptive Bayesian (FLAB) method were generated. All competing methods except one performed with median accuracy above 0.8. The method with the highest score was the convolutional neural network-based segmentation, which significantly outperformed 9 out of 12 of the other methods, but not the improved K-Means, Gaussian Model Mixture and Fuzzy C-Means methods. The most rigorous comparative study of PET segmentation algorithms to date was carried out using a dataset that is the largest used in such studies so far. The hierarchy amongst the methods in terms of accuracy did not depend strongly on the subset of datasets or the metrics (or combination of metrics). All the methods submitted by the challengers except one demonstrated good performance with median accuracy scores above 0.8. Copyright © 2017 Elsevier B.V. All rights reserved.
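The challenge ranking metric, the arithmetic mean of sensitivity and positive predictive value, is simple to state in code; the helper below is ours and the counts are invented.

    def challenge_score(tp, fp, fn):
        """Ranking metric used in the challenge: arithmetic mean of
        sensitivity and positive predictive value."""
        sensitivity = tp / (tp + fn)
        ppv = tp / (tp + fp)
        return 0.5 * (sensitivity + ppv)

    # e.g. 90 true-positive voxels, 10 false positives, 20 false negatives:
    print(challenge_score(tp=90, fp=10, fn=20))  # ~0.859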
Segmentation and Visual Analysis of Whole-Body Mouse Skeleton microSPECT
Khmelinskii, Artem; Groen, Harald C.; Baiker, Martin; de Jong, Marion; Lelieveldt, Boudewijn P. F.
2012-01-01
Whole-body SPECT small animal imaging is used to study cancer and plays an important role in the development of new drugs. Comparing and exploring whole-body datasets can be a difficult and time-consuming task due to the inherent heterogeneity of the data (high volume/throughput, multi-modality, postural and positioning variability). The goal of this study was to provide a method to align and compare side-by-side multiple whole-body skeleton SPECT datasets in a common reference, thus eliminating the acquisition variability that exists between subjects in cross-sectional and multi-modal studies. Six whole-body SPECT/CT datasets of BALB/c mice injected with the bone-targeting tracers 99mTc-methylene diphosphonate (99mTc-MDP) and 99mTc-hydroxymethane diphosphonate (99mTc-HDP) were used to evaluate the proposed method. An articulated version of the MOBY whole-body mouse atlas was used as a common reference. Its individual bones were registered one by one to the skeleton extracted from the acquired SPECT data following an anatomical hierarchical tree. Sequential registration was used while constraining the local degrees of freedom (DoFs) of each bone in accordance with the type of joint and its range of motion. The Articulated Planar Reformation (APR) algorithm was applied to the segmented data for side-by-side change visualization and comparison of data. To quantitatively evaluate the proposed algorithm, bone segmentations of the skeletons extracted from the corresponding CT datasets were used. Euclidean point-to-surface distances between each dataset and the MOBY atlas were calculated. The obtained results indicate that after registration, the mean Euclidean distance decreased from 11.5±12.1 to 2.6±2.1 voxels. The proposed approach yielded satisfactory segmentation results with minimal user intervention. It proved to be robust to “incomplete” data (large chunks of the skeleton missing) and enabled intuitive exploration and comparison of multi-modal SPECT/CT cross-sectional mouse data. PMID:23152834
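The evaluation above relies on Euclidean point-to-surface distances between each skeleton and the MOBY atlas. A common discrete approximation uses nearest-neighbour queries against the atlas surface points, e.g. with a KD-tree; the sketch below (function name and random coordinates are ours) illustrates the idea under that approximation.

    import numpy as np
    from scipy.spatial import cKDTree

    def mean_point_to_surface(points, surface_points):
        """Mean/STD of distances from each data point to its nearest atlas
        surface point (a discrete stand-in for point-to-surface distance)."""
        tree = cKDTree(surface_points)
        d, _ = tree.query(points)
        return d.mean(), d.std()

    # Toy usage with random voxel coordinates.
    rng = np.random.default_rng(1)
    skeleton = rng.uniform(0, 50, size=(1000, 3))
    atlas = skeleton + rng.normal(0, 2, size=skeleton.shape)
    print(mean_point_to_surface(skeleton, atlas))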
Hard exudates segmentation based on learned initial seeds and iterative graph cut.
Kusakunniran, Worapan; Wu, Qiang; Ritthipravat, Panrasee; Zhang, Jian
2018-05-01
Background and Objective: The occurrence of hard exudates is one of the early signs of diabetic retinopathy, which is one of the leading causes of blindness. Many patients with diabetic retinopathy lose their vision because of late detection of the disease. This paper therefore proposes a novel method for automatic segmentation of hard exudates in retinal images. Methods: Existing methods are based on either supervised or unsupervised learning techniques. In addition, learned segmentation models may often cause missed detection and/or false detection of hard exudates, owing to the lack of rich characteristics, the intra-variations, and the similarity with other components in the retinal image. Thus, in this paper, supervised learning based on a multilayer perceptron (MLP) is used only to identify initial seeds that are hard exudates with high confidence. The segmentation is then finalized by unsupervised learning based on an iterative graph cut (GC) using clusters of initial seeds. Also, in order to reduce the color intra-variations of hard exudates across different retinal images, color transfer (CT) is applied in the pre-processing step to normalize their color information. Results: The experiments and comparisons with other existing methods are based on two well-known datasets, e_ophtha EX and DIARETDB1. The proposed method outperforms the other existing methods in the literature, with a pixel-level sensitivity of 0.891 for the DIARETDB1 dataset and 0.564 for the e_ophtha EX dataset. Cross-dataset validation, in which training is performed on one dataset and testing on another, is also evaluated in this paper to illustrate the robustness of the proposed method. Conclusions: This newly proposed method integrates supervised and unsupervised learning based techniques. It achieves improved performance compared with the existing methods in the literature. The robustness of the proposed method in the cross-dataset scenario could enhance its practical usage; that is, the trained model could be more practical for unseen data in real-world situations, especially when the capturing environments of training and testing images are not the same. Copyright © 2018 Elsevier B.V. All rights reserved.
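A loose sketch of the seeds-then-graph-cut idea follows, assuming a per-pixel MLP trained elsewhere and using OpenCV's grabCut as a readily available iterative graph-cut stand-in; the paper's own GC formulation and color-transfer step are not reproduced here, and all thresholds are illustrative.

    import cv2
    import numpy as np
    from sklearn.neural_network import MLPClassifier  # type of the assumed model

    def segment_exudates(img, mlp, p_hi=0.95, p_lo=0.05):
        """img: colour-normalized BGR retinal image (uint8);
        mlp: an MLPClassifier trained elsewhere on per-pixel colour features.
        High-confidence MLP predictions become graph-cut seeds; the rest is
        left for the iterative cut to decide. Assumes at least some pixels
        exceed p_hi, otherwise grabCut has no foreground to work with."""
        feats = img.reshape(-1, 3).astype(np.float32)
        prob = mlp.predict_proba(feats)[:, 1].reshape(img.shape[:2])

        mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)
        mask[prob >= p_hi] = cv2.GC_FGD  # confident exudate seeds
        mask[prob <= p_lo] = cv2.GC_BGD  # confident background

        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
        return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))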
Daryanani, Aditya; Dangi, Shusil; Ben-Zikri, Yehuda Kfir; Linte, Cristian A.
2016-03-01
Magnetic Resonance Imaging (MRI) is a standard-of-care imaging modality for cardiac function assessment and guidance of cardiac interventions thanks to its high image quality and lack of exposure to ionizing radiation. Cardiac health parameters such as left ventricular volume, ejection fraction, myocardial mass, thickness, and strain can be assessed by segmenting the heart from cardiac MRI images. Furthermore, segmented pre-operative anatomical heart models can be used to precisely identify regions of interest to be treated during minimally invasive therapy. Hence, the use of accurate and computationally efficient segmentation techniques is critical, especially for intra-procedural guidance applications that rely on the peri-operative segmentation of subject-specific datasets without delaying the procedure workflow. Atlas-based segmentation incorporates prior knowledge of the anatomy of interest from expertly annotated image datasets. Typically, the ground truth atlas label is propagated to a test image using a combination of global and local registration. The high computational cost of non-rigid registration motivated us to obtain an initial segmentation using global transformations, based on an atlas of the left ventricle from a population of patient MRI images, and to refine it using a well-developed graph cut-based technique. Here we quantitatively compare the segmentations obtained from the global and global-plus-local atlases and refined using graph cut-based techniques against expert segmentations, according to several similarity metrics including the Dice correlation coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
Nyholm, Tufve; Svensson, Stina; Andersson, Sebastian; Jonsson, Joakim; Sohlin, Maja; Gustafsson, Christian; Kjellén, Elisabeth; Söderström, Karin; Albertsson, Per; Blomqvist, Lennart; Zackrisson, Björn; Olsson, Lars E; Gunnlaugsson, Adalsteinn
2018-03-01
We describe a public dataset with MR and CT images of patients acquired in the same position, with both multiobserver and expert consensus delineations of relevant organs in the male pelvic region. The purpose was to provide means for training and validation of segmentation algorithms and of methods to convert MR to CT-like data, i.e., so-called synthetic CT (sCT). T1- and T2-weighted MR images as well as CT data were collected for 19 patients at three different departments. Five experts delineated nine organs for each patient based on the T2-weighted MR images. An automatic method was used to fuse the delineations. Starting from each fused delineation, a consensus delineation was agreed upon by the five experts for each organ and patient. Segmentation overlap between user delineations and the consensus delineations was measured to describe the spread of the collected data. Finally, an open-source software was used to create deformation vector fields describing the relation between MR and CT images, to further increase the usability of the dataset. The dataset has been made publicly available for academic purposes and can be accessed from https://zenodo.org/record/583096. It provides a useful source for training and validation of segmentation algorithms as well as of methods to convert MR to CT-like data (sCT). To give some examples: the T2-weighted MR images with their consensus delineations can directly be used as a template in an existing atlas-based segmentation engine; the expert delineations are useful to validate the performance of a segmentation algorithm, as they provide a way to measure variability among users which can be compared with the result of an automatic segmentation; and the pairwise deformably registered MR and CT images can be a source for an atlas-based sCT algorithm or for validation of sCT algorithms. © 2018 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Peng, Jialin, E-mail: 2004pjl@163.com; Zhang, Hongbo; Hu, Peijun
Purpose: Efficient and accurate 3D liver segmentations from contrast-enhanced computed tomography (CT) images play an important role in therapeutic strategies for hepatic diseases. However, inhomogeneous appearances, ambiguous boundaries, and large variance in shape often make it a challenging task. The existence of liver abnormalities poses further difficulty. Despite the significant intensity difference, liver tumors should be segmented as part of the liver. This study aims to address these challenges, especially when the target livers contain subregions with distinct appearances. Methods: The authors propose a novel multiregion-appearance based approach with graph cuts to delineate the liver surface. For livers with multiple subregions, a geodesic distance based appearance selection scheme is introduced to utilize a proper appearance constraint for each subregion. A special case of the proposed method, which uses only one appearance constraint to segment the liver, is also presented. The segmentation process is modeled with energy functions incorporating both boundary and region information. Rather than a simple fixed combination, an adaptive balancing weight is introduced and learned from training sets. The proposed method requires only an initialization inside the liver surface; no additional constraints from user interaction are utilized. Results: The proposed method was validated on 50 3D CT images from three datasets, i.e., the Medical Image Computing and Computer Assisted Intervention (MICCAI) training and testing sets, and a local dataset. On the MICCAI testing set, the proposed method achieved a total score of 83.4 ± 3.1, outperforming nonexpert manual segmentation (average score of 75.0). When applying the method to the MICCAI training set and the local dataset, it yielded a mean Dice similarity coefficient (DSC) of 97.7% ± 0.5% and 97.5% ± 0.4%, respectively. These results demonstrate the accuracy of the method when applied to different CT datasets. In addition, operator variability experiments showed its good reproducibility. Conclusions: A multiregion-appearance based method is proposed and evaluated to segment the liver. This approach does not require prior model construction and so eliminates the burdens associated with model construction and matching. The proposed method provides comparable results with state-of-the-art methods. Validation results suggest that it may be suitable for clinical use.
Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer
2013-10-01
The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs, but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features, and time-frequency features. The latter were obtained using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-tests were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. ANOVA followed by post hoc Bonferroni tests was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted feature means were statistically similar for a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in the separability of linear features between 5- and 0.5-min HRV segments, but the separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV segments classified by the LD classifier. A combination of linear/nonlinear features from HRV signals is effective in automatic sleep staging. Moreover, time-frequency features are more informative than others. In addition, a separability measure and classification results showed that HRV signal features, especially nonlinear features, extracted from 5-min segments are more discriminative than those from 0.5-min segments in automatic sleep staging. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
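As a small illustration of the DWT part of the time-frequency features (the wavelet choice and decomposition depth are assumptions, not the paper's settings), normalized sub-band energies of a resampled HRV segment can be computed with PyWavelets:

    import numpy as np
    import pywt

    def dwt_band_energies(hrv, wavelet="db4", levels=5):
        """Normalized energy per DWT sub-band of an evenly resampled HRV
        segment (e.g. a 5-min tachogram resampled at 4 Hz)."""
        coeffs = pywt.wavedec(hrv, wavelet, level=levels)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    # Toy HRV-like signal: 5 minutes at 4 Hz with small fluctuations.
    rng = np.random.default_rng(0)
    hrv = 0.8 + 0.05 * rng.standard_normal(4 * 300)
    print(dwt_band_energies(hrv).round(3))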
Atlas Toolkit: Fast registration of 3D morphological datasets in the absence of landmarks.
Grocott, Timothy; Thomas, Paul; Münsterberg, Andrea E
2016-02-11
Image registration is a gateway technology for Developmental Systems Biology, enabling computational analysis of related datasets within a shared coordinate system. Many registration tools rely on landmarks to ensure that datasets are correctly aligned; yet suitable landmarks are not present in many datasets. Atlas Toolkit is a Fiji/ImageJ plugin collection offering elastic group-wise registration of 3D morphological datasets, guided by segmentation of the interesting morphology. We demonstrate the method by combinatorial mapping of cell signalling events in the developing eyes of chick embryos, and use the integrated datasets to predictively enumerate Gene Regulatory Network states.
An automated method for accurate vessel segmentation.
Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting Tim
2017-05-07
Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm growth, and guiding surgery in neurosurgical procedures. Despite technological advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common clinical scenarios: (1) regions with a low signal-to-noise ratio (SNR), and (2) vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance the contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008 European Conf. on Computer Vision; Law and Chung 2009 IEEE Trans. Image Process. 18 596-612; Wang 2015 J. Neurosci. Methods 241 30-6) with manually optimized parameters. Our system has also been applied clinically for cerebral aneurysm development analysis. Experimental results on 10 patients' data, with two 3D CT scans per patient, show that our system's automatic diagnosis outcomes are consistent with clinicians' manual measurements.
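A rough sketch of two of the three ingredients, adaptive contrast enhancement followed by Canny-based boundary evidence, is given below with OpenCV; it approximates rather than reproduces the paper's progressive enhancement and ROI adjustment, and the input filename is hypothetical.

    import cv2

    def vessel_boundary_evidence(gray):
        """Locally adaptive contrast enhancement (CLAHE) followed by Canny
        edge detection, as a stand-in for the paper's progressive scheme;
        thresholds here are illustrative."""
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(gray)
        edges = cv2.Canny(enhanced, 40, 120)
        return enhanced, edges

    # Hypothetical input: the green channel of a retinal image saved as PNG.
    gray = cv2.imread("retina_green_channel.png", cv2.IMREAD_GRAYSCALE)
    if gray is not None:
        enhanced, edges = vessel_boundary_evidence(gray)
        cv2.imwrite("edges.png", edges)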
PANTHER. Pattern ANalytics To support High-performance Exploitation and Reasoning.
Czuchlewski, Kristina Rodriguez; Hart, William E.
Sandia has approached the analysis of big datasets with an integrated methodology that uses computer science, image processing, and human factors to exploit critical patterns and relationships in large datasets despite the variety and rapidity of information. The work is part of a three-year LDRD Grand Challenge called PANTHER (Pattern ANalytics To support High-performance Exploitation and Reasoning). To maximize data analysis capability, Sandia pursued scientific advances across three key technical domains: (1) geospatial-temporal feature extraction via image segmentation and classification; (2) geospatial-temporal analysis capabilities tailored to identify and process new signatures more efficiently; and (3) domain-relevant models of human perception and cognition informing the design of analytic systems. Our integrated results include advances in geographical information systems (GIS) in which we discover activity patterns in noisy, spatial-temporal datasets using geospatial-temporal semantic graphs. We employed computational geometry and machine learning to allow us to extract and predict spatial-temporal patterns and outliers from large aircraft and maritime trajectory datasets. We automatically extracted static and ephemeral features from real, noisy synthetic aperture radar imagery for ingestion into a geospatial-temporal semantic graph. We worked with analysts and investigated analytic workflows to (1) determine how experiential knowledge evolves and is deployed in high-demand, high-throughput visual search workflows, and (2) better understand visual search performance and attention. Through PANTHER, Sandia's fundamental rethinking of key aspects of geospatial data analysis permits the extraction of much richer information from large amounts of data. The project results enable analysts to examine mountains of historical and current data that would otherwise go untouched, while also gaining meaningful, measurable, and defensible insights into overlooked relationships and patterns. The capability is directly relevant to the nation's nonproliferation remote-sensing activities and has broad national security applications for military and intelligence-gathering organizations.
Hypergraph Based Feature Selection Technique for Medical Diagnosis.
Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar
2016-11-01
The impact of the internet and information systems across various domains has resulted in the substantial generation of multidimensional datasets. Data mining and knowledge discovery techniques that extract the original information contained in these datasets play a significant role in exploiting their full benefit. The presence of a large number of features in high-dimensional datasets incurs high computational cost in terms of computing power and time. Hence, feature selection techniques are commonly used to build robust machine learning models by selecting a subset of relevant features that projects the maximal information content of the original dataset. In this paper, a novel Rough Set based K-Helly feature selection technique (RSKHT), which hybridizes Rough Set Theory (RST) and the K-Helly property of hypergraph representations, is designed to identify the optimal feature subset or reduct for medical diagnostic applications. Experiments carried out using medical datasets from the UCI repository demonstrate the dominance of RSKHT over other feature selection techniques with respect to reduct size, classification accuracy, and time complexity. The performance of RSKHT was validated using the WEKA tool, showing that RSKHT is computationally attractive and flexible over massive datasets.
Sampling effects on the identification of roadkill hotspots: Implications for survey design.
Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António
2015-10-01
Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of roadkills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted from 6.8% (bats) to 29.7% (small birds) of road segments, concentrating from <40% (frogs and toads, snakes) to >60% (lizards, lagomorphs, carnivores) of roadkills. Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missing "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. Spatial accuracy decay with increasing time interval between surveys was higher for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species. Copyright © 2015 Elsevier Ltd. All rights reserved.
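One simple reading of the hotspot criterion (segment counts exceeding the upper 95% limit under a Poisson distribution with the overall mean rate) can be sketched as follows; the counts are invented and the exact confidence construction in the paper may differ.

    import numpy as np
    from scipy.stats import poisson

    def hotspot_segments(kills_per_segment, alpha=0.05):
        """Flag 500-m segments whose roadkill count exceeds the upper
        (1 - alpha) quantile of a Poisson with the overall mean rate."""
        counts = np.asarray(kills_per_segment)
        threshold = poisson.ppf(1 - alpha, mu=counts.mean())
        return counts > threshold

    counts = np.array([2, 0, 1, 9, 3, 0, 2, 12, 1])
    print(hotspot_segments(counts))  # only the 9- and 12-kill segments flagged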
Fast automatic 3D liver segmentation based on a three-level AdaBoost-guided active shape model.
He, Baochun; Huang, Cheng; Sharp, Gregory; Zhou, Shoujun; Hu, Qingmao; Fang, Chihua; Fan, Yingfang; Jia, Fucang
2016-05-01
A robust, automatic, and rapid method for liver delineation is urgently needed for the diagnosis and treatment of liver disorders. Until now, the high variability in liver shape, local image artifacts, and the presence of tumors have complicated the development of automatic 3D liver segmentation. In this study, an automatic three-level AdaBoost-guided active shape model (ASM) is proposed for the segmentation of the liver based on enhanced computed tomography images in a robust and fast manner, with an emphasis on the detection of tumors. The AdaBoost voxel classifier and AdaBoost profile classifier were used to automatically guide three-level active shape modeling. In the first level of model initialization, fast automatic liver segmentation by an AdaBoost voxel classifier method is proposed. A shape model is then initialized by registration with the resulting rough segmentation. In the second level of active shape model fitting, a prior model based on the two-class AdaBoost profile classifier is proposed to identify the optimal surface. In the third level, a deformable simplex mesh with profile probability and curvature constraint as the external force is used to refine the shape fitting result. In total, three registration methods (3D similarity registration, probability atlas B-spline, and a proposed deformable closest point registration) are used to establish shape correspondence. The proposed method was evaluated using three public challenge datasets: 3Dircadb1, SLIVER07, and Visceral Anatomy3. The results showed that our approach performs with promising efficiency, with an average of 35 s, and accuracy, with an average Dice similarity coefficient (DSC) of 0.94 ± 0.02, 0.96 ± 0.01, and 0.94 ± 0.02 for the 3Dircadb1, SLIVER07, and Anatomy3 training datasets, respectively. The DSC of the SLIVER07 testing and Anatomy3 unseen testing datasets were 0.964 and 0.933, respectively. The proposed automatic approach achieves robust, accurate, and fast liver segmentation for 3D CTce datasets. The AdaBoost voxel classifier can detect the liver area quickly without errors and provides sufficient liver shape information for model initialization. The AdaBoost profile classifier achieves sufficient accuracy and greatly decreases segmentation time. These results show that the proposed segmentation method achieves a level of accuracy comparable to that of state-of-the-art automatic methods based on ASM.
Computerized tongue image segmentation via the double geo-vector flow.
Shi, Miao-Jing; Li, Guo-Zheng; Li, Fu-Feng; Xu, Chao
2014-02-08
Visual inspection for tongue analysis is a diagnostic method in traditional Chinese medicine (TCM). Owing to the variations in tongue features, such as color, texture, coating, and shape, it is difficult to precisely extract the tongue region in images. This study aims to quantitatively evaluate tongue diagnosis via automatic tongue segmentation. Experiments were conducted using a clinical image dataset provided by the Laboratory of Traditional Medical Syndromes, Shanghai University of TCM. First, a clinical tongue image was refined by a saliency window. Second, we initialized the tongue area as the upper binary part and lower level set matrix. Third, a double geo-vector flow (DGF) was proposed to detect the tongue edge and segment the tongue region in the image, such that the geodesic flow was evaluated in the lower part, and the geo-gradient vector flow was evaluated in the upper part. The performance of the DGF was evaluated using 100 images. The DGF exhibited better results compared with other representative studies, with its true-positive volume fraction reaching 98.5%, its false-positive volume fraction being 1.51%, and its false-negative volume fraction being 1.42%. The errors between the proposed automatic segmentation results and manual contours were 0.29 and 1.43% in terms of the standard boundary error metrics of Hausdorff distance and mean distance, respectively. By analyzing the time complexity of the DGF and evaluating its performance via standard boundary and area error metrics, we have shown both efficiency and effectiveness of the DGF for automatic tongue image segmentation.
Survey of contemporary trends in color image segmentation
Vantaram, Sreenath Rao; Saber, Eli
2012-10-01
In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence there is a need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray-scale domain are also discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.
Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon
2018-03-01
Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement. An automated quantification system for accurately measuring the amount of interstitial fibrosis in renal biopsy images is presented as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images. In particular, renal glomerulus identification is based on a multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial-fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnoses. A 40-image ground truth dataset was manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists demonstrated an average error of 9 percentage points in quantification results between the automated system and the pathologists' visual evaluation. Experiments investigating the variability in pathologists, involving samples from 70 kidney patients, also proved the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification. The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results.
It has been shown that the correlation between different pathologists' estimation of interstitial fibrosis area has significantly improved, demonstrating the effectiveness of the quantification system as a diagnostic aide. Copyright © 2017 Elsevier B.V. All rights reserved.
Image fusion using sparse overcomplete feature dictionaries
Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt
2015-10-06
Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.
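A hedged sketch of the described pipeline (learn an overcomplete dictionary, encode patches sparsely, then max-pool for translation tolerance) using scikit-learn follows; the patch size, dictionary size, and global pooling are illustrative choices, not the patented implementation.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))  # stand-in for a real image
    patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)  # remove per-patch DC component

    # Learn an overcomplete dictionary (more atoms than the 64 patch dims)
    # and encode patches sparsely; a max over the codes adds a degree of
    # translation tolerance, loosely mirroring the pooling step described.
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5,
                                       random_state=0).fit(X)
    codes = dico.transform(X)           # (500, 128) sparse codes
    pooled = np.abs(codes).max(axis=0)  # max-pooled representation
    print(pooled.shape)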
Unsupervised tattoo segmentation combining bottom-up and top-down cues
Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen
2011-06-01
Tattoo segmentation is challenging due to the complexity and large variance in tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the other skin via a top-down prior derived from the image itself. Tattoo segmentation with an unknown number of clusters is thereby transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.
Rhythmic Extended Kalman Filter for Gait Rehabilitation Motion Estimation and Segmentation.
Joukov, Vladimir; Bonnet, Vincent; Karg, Michelle; Venture, Gentiane; Kulic, Dana
2018-02-01
This paper proposes a method to enable the use of non-intrusive, small, wearable, and wireless sensors to estimate the pose of the lower body during gait and other periodic motions and to extract objective performance measures useful for physiotherapy. The Rhythmic Extended Kalman Filter (Rhythmic-EKF) algorithm is developed to estimate the pose, learn an individualized model of periodic movement over time, and use the learned model to improve pose estimation. The proposed approach learns a canonical dynamical system model of the movement during online observation, which is used to accurately model the acceleration during pose estimation. The canonical dynamical system models the motion as a periodic signal. The estimated phase and frequency of the motion also allow the proposed approach to segment the motion into repetitions and extract useful features, such as gait symmetry, step length, and mean joint movement and variance. The algorithm is shown to outperform the extended Kalman filter in simulation, on healthy participant data, and stroke patient data. For the healthy participant marching dataset, the Rhythmic-EKF improves joint acceleration and velocity estimates over regular EKF by 40% and 37%, respectively, estimates joint angles with 2.4° root mean squared error, and segments the motion into repetitions with 96% accuracy.
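The Rhythmic-EKF learns a canonical dynamical system to estimate the phase of the motion; as a much simpler stand-in for the repetition-segmentation step only (not the pose estimation), instantaneous phase from the Hilbert transform can be used to cut a periodic signal into cycles, as sketched below on a synthetic 1.2 Hz "gait" signal.

    import numpy as np
    from scipy.signal import hilbert

    def segment_repetitions(signal, fs):
        """Split a periodic motion signal (e.g. one gyroscope axis) into
        repetitions at the wrap-around points of its instantaneous phase.
        A simplified stand-in for the learned canonical-system phase."""
        analytic = hilbert(signal - signal.mean())
        phase = np.unwrap(np.angle(analytic))
        # A new repetition starts each time the phase advances by 2*pi.
        boundaries = np.nonzero(np.diff(np.floor(phase / (2 * np.pi))))[0]
        return boundaries / fs  # boundary times in seconds

    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    gait = (np.sin(2 * np.pi * 1.2 * t)
            + 0.1 * np.random.default_rng(0).standard_normal(t.size))
    print(segment_repetitions(gait, fs))  # boundaries ~0.83 s apart at 1.2 Hz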
Zhang, Zhiqing; Kuzmin, Nikolay V; Groot, Marie Louise; de Munck, Jan C
2017-06-01
The morphologies contained in 3D third harmonic generation (THG) images of human brain tissue can report on the pathological state of the tissue. However, the complexity of THG brain images makes the usage of modern image processing tools, especially those of image filtering, segmentation and validation, to extract this information challenging. We developed a salient edge-enhancing model of anisotropic diffusion for image filtering, based on higher order statistics. We split the intrinsic 3-phase segmentation problem into two 2-phase segmentation problems, each of which we solved with a dedicated model, an active contour weighted by prior extreme. We applied the proposed algorithms to THG images of structurally normal ex-vivo human brain tissue, revealing key tissue components (brain cells, microvessels and neuropil) and enabling statistical characterization of these components. Comprehensive comparison to manually delineated ground truth validated the proposed algorithms. Quantitative comparison to second harmonic generation/auto-fluorescence images, acquired simultaneously from the same tissue area, confirmed the correctness of the main THG features detected. The software and test datasets are available from the authors. z.zhang@vu.nl. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System
NASA Astrophysics Data System (ADS)
Chan, T. O.; Lichti, D. D.; Belton, D.
2013-10-01
At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, the calibrations are complicated by the Velodyne LiDAR's narrow vertical field of view and the highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method utilizes the Velodyne point cloud's slice-like nature and first decomposes the point clouds into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts the points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point clouds based on the previously extracted points. The points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model in such a way that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, allowing end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in the scene. The methods were verified with two different real datasets, and the results suggest that up to 78.43% accuracy improvement can be achieved for the HDL-32E using the proposed calibration method.
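A rough sketch of the layer-wise circle extraction follows. It rasterises each horizontal slice of the cloud to an occupancy image and applies scikit-image's circular Hough transform as a stand-in for the Generalized Hough Transform used in the paper; the slice thickness, cell size, and radius range are assumed parameters.

```python
import numpy as np
from skimage.transform import hough_circle, hough_circle_peaks

def cylinder_hits(points, z_step=0.2, cell=0.05, radii=np.arange(3, 15)):
    """Slice a point cloud horizontally and Hough-detect circles per slice."""
    hits = []
    for z0 in np.arange(points[:, 2].min(), points[:, 2].max(), z_step):
        layer = points[(points[:, 2] >= z0) & (points[:, 2] < z0 + z_step)]
        if len(layer) < 10:
            continue
        # Rasterise the slice into a 2D occupancy image.
        nx = max(int(np.ptp(layer[:, 0]) / cell), 8)
        ny = max(int(np.ptp(layer[:, 1]) / cell), 8)
        img, _, _ = np.histogram2d(layer[:, 0], layer[:, 1], bins=[nx, ny])
        # Circle detection; radii are in pixels of the rasterised slice.
        acc = hough_circle(img > 0, radii)
        _, cx, cy, r = hough_circle_peaks(acc, radii, total_num_peaks=3)
        hits.append((z0, cx, cy, r))
    return hits
```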
Image segmentation-based robust feature extraction for color image watermarking
NASA Astrophysics Data System (ADS)
Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen
2018-04-01
This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), on top of which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed. Our method can adaptively extract feature regions from the blocks segmented by SLIC, selecting the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method achieves a trade-off between high robustness and good image quality.
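The front end of such a pipeline might look like the sketch below: SLIC superpixels, then a 2D DCT over the bounding box of one segment, with a toy perturbation of a low-frequency coefficient standing in for the DC-DM embedding. The segment choice and all parameters are assumptions, not the paper's ISRFE scoring.

```python
import numpy as np
from scipy.fft import dctn, idctn
from skimage.segmentation import slic

def segment_and_mark(image, seg_id=1, n_segments=200):
    """SLIC superpixels, then a toy low-frequency DCT change in one segment."""
    labels = slic(image, n_segments=n_segments, compactness=10.0, start_label=1)
    ys, xs = np.nonzero(labels == seg_id)
    patch = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1].mean(axis=2)
    coeffs = dctn(patch, norm='ortho')
    coeffs[0, 1] += 0.5      # toy embedding into one low-frequency coefficient
    return idctn(coeffs, norm='ortho'), labels
```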
A Sea-Sky Line Detection Method for Unmanned Surface Vehicles Based on Gradient Saliency.
Wang, Bo; Su, Yumin; Wan, Lei
2016-04-15
Special features in real marine environments, such as cloud clutter, sea glint and weather conditions, always result in various kinds of interference in optical images, which makes it very difficult for unmanned surface vehicles (USVs) to detect the sea-sky line (SSL) accurately. To solve this problem, a saliency-based SSL detection method is proposed. Through the computation of gradient saliency, the line features of the SSL are enhanced effectively while other interference factors are relatively suppressed, and line support regions are obtained by a region-growing method on gradient orientation. SSL identification is achieved according to region contrast, line segment length and orientation features, and optimal state estimation of SSL detection is implemented by introducing a cubature Kalman filter (CKF). Finally, the proposed method is tested on a benchmark dataset from the "XL" USV in a real marine environment, and the experimental results demonstrate that the proposed method is significantly superior to other state-of-the-art methods in terms of accuracy and real-time performance, and that its accuracy and stability are effectively improved by the CKF.
Efficient terrestrial laser scan segmentation exploiting data structure
NASA Astrophysics Data System (ADS)
Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa
2016-09-01
New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently does not require setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation, demonstrating the feasibility of the proposed algorithm. The proposed method is also more efficient than Random Sample Consensus (RANSAC), a common approach for point cloud segmentation.
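The panorama construction can be sketched as below: scan points are binned into an azimuth/elevation grid to form a range image, an off-the-shelf image segmenter (here Felzenszwalb's, as one possible choice) labels the panorama, and labels are mapped back to the contributing points. Bin counts and the scale parameter are assumed values.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def panorama_segments(xyz, az_bins=720, el_bins=180):
    """Project scan points to an azimuth/elevation range image and segment it."""
    r = np.linalg.norm(xyz, axis=1)
    az = np.arctan2(xyz[:, 1], xyz[:, 0])                        # [-pi, pi]
    el = np.arcsin(np.clip(xyz[:, 2] / np.maximum(r, 1e-9), -1, 1))
    ai = ((az + np.pi) / (2 * np.pi) * (az_bins - 1)).astype(int)
    ei = ((el + np.pi / 2) / np.pi * (el_bins - 1)).astype(int)
    pano = np.zeros((el_bins, az_bins))
    pano[ei, ai] = r                   # range layer; intensity/color analogous
    labels = felzenszwalb(pano, scale=50)        # 2D image segmentation
    return labels[ei, ai]                        # label per original 3D point
```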
Automatic extraction of planetary image features
NASA Technical Reports Server (NTRS)
LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)
2013-01-01
A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
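A minimal sketch of the watershed idea follows, with the Sobel gradient magnitude standing in for the patent's Canny gradient; marker spacing is an assumed parameter.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.filters import sobel
from skimage.segmentation import watershed

def small_region_segments(image, min_distance=5):
    """Watershed over a gradient map, seeded at local gradient minima."""
    gradient = sobel(image)
    minima = peak_local_max(-gradient, min_distance=min_distance)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(minima.T)] = np.arange(1, len(minima) + 1)
    return watershed(gradient, markers)
```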
The Fish Consumption Advisories dataset contains information on Fish Advisory events that have been indexed to the EPA Office of Water NHDPlus v2.1 hydrology and stored in the Reach Addressing Database (RAD). NHDPlus is a database that interconnects and uniquely identifies the millions of stream segments or reaches that comprise the Nation's surface water drainage system. NHDPlus provides a national framework for assigning reach addresses to water quality related entities, such as fish advisories locations. Reach addresses establish the locations of these entities relative to one another within the NHD surface water drainage network in a manner similar to street addresses. The assignment of reach addresses is accomplished through a process known as reach indexing. Fish consumption advisories and fish tissue sampling stations are reported to EPA by the states. Sampling stations are the locations where a state has collected fish tissue data for use in advisory determinations. Fish consumption advisory locations are coded onto NHDPlus flowline features to create point and linear events. Fish consumption advisory locations are also coded onto NHDPlus waterbody features to create area events. In addition to NHDPlus-reach indexed data, there may also be custom events (point, line, or area) that are not associated with NHDPlus. Although these Fish consumption advisories are not represented in NHDPlus, the data created for them are in an EPA standard format that is co
Extracting contours of oval-shaped objects by Hough transform and minimal path algorithms
NASA Astrophysics Data System (ADS)
Tleis, Mohamed; Verbeek, Fons J.
2014-04-01
Circular and oval-like objects are very common in cell biology and microbiology. These objects need to be analyzed, and to that end, digitized images from the microscope are used in an automated analysis pipeline. It is essential to detect all the objects in an image as well as to extract the exact contour of each individual object. In this manner it becomes possible to perform measurements on these objects, i.e. to extract shape and texture features. Our measurement objective is achieved by probing contour detection through dynamic programming. In this paper we describe a method that uses the Hough transform and two minimal path algorithms to detect contours of (ovoid-like) objects. These algorithms are based on an existing grey-weighted distance transform and a new algorithm to extract the circular shortest path in an image. The methods are tested on an artificial dataset of 1,000 images, with an F1-score of 0.972. In a case study with yeast cells, contours from our methods were compared with another solution using Pratt's figure of merit. Results indicate that our methods were more precise based on a comparison with a ground-truth dataset. As far as yeast cells are concerned, the segmentation and measurement results enable, in future work, retrieval of information from different developmental stages of the cell using complex features.
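The minimal path stage can be sketched with scikit-image's route_through_array, a stand-in for the paper's grey-weighted distance transform machinery; the start and end pixels are assumed to come from an earlier Hough detection.

```python
import numpy as np
from skimage.graph import route_through_array

def contour_arc(image, start, end):
    """Cheapest path between two boundary pixels through a grey-weighted cost."""
    cost = image.astype(float) - image.min() + 1e-3   # dark edges => low cost
    path, total = route_through_array(cost, start, end,
                                      fully_connected=True, geometric=True)
    return np.array(path), total
```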
Hanaoka, Shouhei; Masutani, Yoshitaka; Nemoto, Mitsutaka; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Ohtomo, Kuni; Shimizu, Akinobu
2017-03-01
A fully automatic multiatlas-based method for segmentation of the spine and pelvis in a torso CT volume is proposed. A novel landmark-guided diffeomorphic demons algorithm is used to register a given CT image to multiple atlas volumes. This algorithm can utilize both grayscale image information and given landmark coordinate information optimally. The segmentation has four steps. Firstly, 170 bony landmarks are detected in the given volume. Using these landmark positions, an atlas selection procedure is performed to reduce the computational cost of the following registration. Then the chosen atlas volumes are registered to the given CT image. Finally, voxelwise label voting is performed to determine the final segmentation result. The proposed method was evaluated using 50 torso CT datasets as well as the public SpineWeb dataset. As a result, a mean distance error of [Formula: see text] and a mean Dice coefficient of [Formula: see text] were achieved for the whole spine and the pelvic bones, which are competitive with other state-of-the-art methods. From the experimental results, the usefulness of the proposed segmentation method was validated.
Segmentation of the Himalayas as revealed by arc-parallel gravity anomalies.
Hetényi, György; Cattin, Rodolphe; Berthet, Théo; Le Moigne, Nicolas; Chophel, Jamyang; Lechmann, Sarah; Hammer, Paul; Drukpa, Dowchu; Sapkota, Soma Nath; Gautier, Stéphanie; Thinley, Kinzang
2016-09-21
Lateral variations along the Himalayan arc are suggested by an increasing number of studies and carry important information about the orogen's segmentation. Here we compile the hitherto most complete land gravity dataset in the region, which enables the highest-resolution analysis currently plausible. To study lateral variations in collisional structure we compute arc-parallel gravity anomalies (APaGA) by subtracting the average arc-perpendicular profile from our dataset; we compute likewise for topography (APaTA). We find no direct correlation between APaGA, APaTA and background seismicity, as suggested in an oceanic subduction context. In the Himalayas, APaTA mainly reflect relief and erosional effects, whereas APaGA reflect the deep structure of the orogen with clear lateral boundaries. Four segments are outlined and have disparate flexural geometry: NE India, Bhutan, Nepal & India until Dehradun, and NW India. The segment boundaries in the India plate are related to inherited structures, and the boundaries of the Shillong block are highlighted by seismic activity. We find that large earthquakes of the past millennium do not propagate across the segment boundaries defined by APaGA, which therefore seem to set limits for potential ruptures of megathrust earthquakes.
NASA Astrophysics Data System (ADS)
Villafruela, Javier; Crites, Sebastian; Cheng, Bastian; Knaack, Christian; Thomalla, Götz; Menon, Bijoy K.; Forkert, Nils D.
2017-03-01
Stroke is a leading cause of death and disability in the western hemisphere. Acute ischemic strokes can be broadly classified based on the underlying cause into atherosclerotic strokes, cardioembolic strokes, small vessel disease, and stroke with other causes. The ability to determine the exact origin of an acute ischemic stroke is highly relevant for optimal treatment decisions and preventing recurrent events. However, the differentiation of atherosclerotic and cardioembolic phenotypes can be especially challenging due to similar appearance and symptoms. The aim of this study was to develop and evaluate the feasibility of an image-based machine learning approach for discriminating between atherosclerotic and cardioembolic acute ischemic strokes using 56 apparent diffusion coefficient (ADC) datasets from acute stroke patients. For this purpose, acute infarct lesions were semi-automatically segmented and 30,981 geometric and texture image features were extracted for each stroke volume. To improve the performance and accuracy, a categorical Pearson's χ2 test was used to select the most informative features while removing redundant attributes. As a result, only 289 features were finally included for training of a deep multilayer feed-forward neural network without bootstrapping. The proposed method was evaluated using a leave-one-out cross validation scheme. The proposed classification method achieved an average area under the receiver operating characteristic curve value of 0.93 and a classification accuracy of 94.64%. These first results suggest that the proposed image-based classification framework can support neurologists in clinical routine in differentiating between atherosclerotic and cardioembolic phenotypes.
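The χ2-based feature reduction can be sketched with scikit-learn as below; the arrays are placeholders with the paper's dimensions (56 patients, 30,981 features, 289 kept), and features are min-max scaled because chi2 requires non-negative inputs.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(56, 30981)          # placeholder: per-lesion image features
y = np.random.randint(0, 2, 56)        # 0 = atherosclerotic, 1 = cardioembolic

# Keep the 289 features most associated with the phenotype, as in the paper.
X_scaled = MinMaxScaler().fit_transform(X)
X_reduced = SelectKBest(chi2, k=289).fit_transform(X_scaled, y)
```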
Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; Abdul Aziz, Yang Faridah; Chee, Kok Han; McLaughlin, Robert A
2018-06-01
In this paper, we develop and validate an open source, fully automatic algorithm to localize the left ventricular (LV) blood pool centroid in short axis cardiac cine MR images, enabling follow-on automated LV segmentation algorithms. The algorithm comprises four steps: (i) quantify motion to determine an initial region of interest surrounding the heart, (ii) identify potential 2D objects of interest using an intensity-based segmentation, (iii) assess contraction/expansion, circularity, and proximity to lung tissue to score all objects of interest in terms of their likelihood of constituting part of the LV, and (iv) aggregate the objects into connected groups and construct the final LV blood pool volume and centroid. This algorithm was tested against 1140 datasets from the Kaggle Second Annual Data Science Bowl, as well as 45 datasets from the STACOM 2009 Cardiac MR Left Ventricle Segmentation Challenge. Correct LV localization was confirmed in 97.3% of the datasets. The mean absolute error between the gold standard and localization centroids was 2.8 to 4.7 mm, or 12 to 22% of the average endocardial radius.
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan
2016-03-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
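A minimal inverse perspective mapping sketch with OpenCV is shown below; the four point correspondences are assumed calibration values, and 'frame.png' is a placeholder input.

```python
import cv2
import numpy as np

# Four floor points in the image and their top-down coordinates (assumed
# calibration values).
src = np.float32([[200, 480], [440, 480], [400, 300], [240, 300]])  # image px
dst = np.float32([[0, 400], [200, 400], [200, 0], [0, 0]])          # floor grid
H = cv2.getPerspectiveTransform(src, dst)

frame = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)   # placeholder frame
birds_eye = cv2.warpPerspective(frame, H, (200, 400))
# Floor texture stays consistent in birds_eye across frames, while obstacles
# distort, which is the cue the segmentation step exploits.
```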
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field segmentation are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting its best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets, which differ in cell density, cell shape, contrast, and noise levels.
Padma, A; Sukanesh, R
2013-01-01
A computer software system is designed for the segmentation and classification of benign and malignant tumour slices in brain computed tomography (CT) images. This paper presents a method to find and select the dominant run-length and co-occurrence texture features of the region of interest (ROI) of the tumour region in each slice, segmented by Fuzzy c-means clustering (FCM), and to evaluate the performance of support vector machine (SVM)-based classifiers in classifying benign and malignant tumour slices. Two hundred and six tumour-confirmed CT slices are considered in this study. A total of 17 texture features are extracted by a feature extraction procedure, and six features are selected using Principal Component Analysis (PCA). This study constructed the SVM-based classifier with the selected features and compared the segmentation results with the ground truth labelled by an experienced radiologist. Quantitative analysis between ground truth and segmented tumour is presented in terms of segmentation accuracy, segmentation error and overlap similarity measures such as the Jaccard index. The classification performance of the SVM-based classifier with the same selected features is also evaluated using a 10-fold cross-validation method. The results show that some newly identified texture features make an important contribution to classifying benign and malignant tumour slices efficiently and accurately with less computational time. The experimental results showed that the proposed system achieves high segmentation and classification accuracy, as measured by the Jaccard index, sensitivity and specificity.
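The classification stage maps naturally onto a scikit-learn pipeline; in the sketch below, the arrays are placeholders with the paper's dimensions (206 slices, 17 features, 6 components kept) and the kernel choice is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(206, 17)          # placeholder: 17 texture features per slice
y = np.random.randint(0, 2, 206)     # placeholder: benign (0) / malignant (1)

clf = make_pipeline(StandardScaler(), PCA(n_components=6), SVC(kernel='rbf'))
print(cross_val_score(clf, X, y, cv=10).mean())   # 10-fold CV as in the paper
```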
Nayak, Deepak Ranjan; Dash, Ratnakar; Majhi, Banshidhar
2017-01-01
This paper presents an automatic classification system for segregating pathological brains from normal brains in magnetic resonance imaging scans. The proposed system employs a contrast limited adaptive histogram equalization scheme to enhance the diseased region in brain MR images. The two-dimensional stationary wavelet transform (SWT) is harnessed to extract features from the preprocessed images. The feature vector is constructed using the energy and entropy values computed from the level-2 SWT coefficients. Then, the relevant and uncorrelated features are selected using a symmetric uncertainty ranking filter. Subsequently, the selected features are given as input to the proposed AdaBoost with support vector machine classifier, where the SVM is used as the base classifier of the AdaBoost algorithm. To validate the proposed system, three standard MR image datasets, Dataset-66, Dataset-160, and Dataset-255, have been utilized. The results of 5 runs of k-fold stratified cross validation indicate that the suggested scheme offers better performance than other existing schemes in terms of accuracy and number of features. The proposed system achieves perfect classification over Dataset-66 and Dataset-160, whereas for Dataset-255 an accuracy of 99.45% is achieved. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
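The feature construction can be sketched with PyWavelets as below; the wavelet choice ('db1') is an assumption, and the input slice must have dyadic dimensions (divisible by 4 for a level-2 SWT).

```python
import numpy as np
import pywt

def swt_energy_entropy(image):
    """Energy and entropy of the level-2 SWT detail subbands of a 2D slice."""
    coeffs = pywt.swt2(image.astype(float), 'db1', level=2)
    cA2, (cH2, cV2, cD2) = coeffs[0]          # level-2 coefficients
    feats = []
    for band in (cH2, cV2, cD2):
        e = band ** 2
        feats.append(e.mean())                            # energy
        p = e / e.sum() + 1e-12
        feats.append(float(-(p * np.log2(p)).sum()))      # entropy
    return np.array(feats)
```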
Joint Labeling of Multiple Regions of Interest (ROIs) by Enhanced Auto Context Models.
Kim, Minjeong; Wu, Guorong; Guo, Yanrong; Shen, Dinggang
2015-04-01
Accurate segmentation of a set of regions of interest (ROIs) in brain images is a key step in many neuroscience studies. Due to the complexity of image patterns, many learning-based segmentation methods have been proposed, including the auto context model (ACM), which can capture high-level contextual information for guiding segmentation. However, since the current ACM can only handle one ROI at a time, neighboring ROIs have to be labeled separately with different ACMs that are trained independently, without communicating with each other. To address this, we enhance the single-ROI learning ACM to a multi-ROI learning ACM for joint labeling of multiple neighboring ROIs (called eACM). First, we extend the independently-trained single-ROI ACMs to a set of jointly-trained cross-ROI ACMs by simultaneously training ACMs for all spatially-connected ROIs, letting them share their respective intermediate outputs for coordinated labeling of each image point. The context features in each ACM can then capture cross-ROI dependence information from the outputs of the other ACMs designed for neighboring ROIs. Second, we upgrade the output labeling map of each ACM with a multi-scale representation, so that both local and global context information can be used effectively to increase robustness in characterizing the geometric relationships among neighboring ROIs. Third, we integrate the ACM into a multi-atlas segmentation paradigm to accommodate high variation among subjects. Experiments on the LONI LPBA40 dataset show much better performance by our eACM compared to the conventional ACM.
Nie, Zhi; Vairavan, Srinivasan; Narayan, Vaibhav A; Ye, Jieping; Li, Qingqin S
2018-01-01
Identification of risk factors of treatment resistance may be useful to guide treatment selection, avoid inefficient trial-and-error, and improve major depressive disorder (MDD) care. We extended the work in predictive modeling of treatment resistant depression (TRD) via partition of the data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) cohort into a training and a testing dataset. We also included data from a small yet completely independent cohort RIS-INT-93 as an external test dataset. We used features from enrollment and level 1 treatment (up to week 2 response only) of STAR*D to explore the feature space comprehensively and applied machine learning methods to model TRD outcome at level 2. For TRD defined using QIDS-C16 remission criteria, multiple machine learning models were internally cross-validated in the STAR*D training dataset and externally validated in both the STAR*D testing dataset and RIS-INT-93 independent dataset with an area under the receiver operating characteristic curve (AUC) of 0.70-0.78 and 0.72-0.77, respectively. The upper bound for the AUC achievable with the full set of features could be as high as 0.78 in the STAR*D testing dataset. Model developed using top 30 features identified using feature selection technique (k-means clustering followed by χ2 test) achieved an AUC of 0.77 in the STAR*D testing dataset. In addition, the model developed using overlapping features between STAR*D and RIS-INT-93, achieved an AUC of > 0.70 in both the STAR*D testing and RIS-INT-93 datasets. Among all the features explored in STAR*D and RIS-INT-93 datasets, the most important feature was early or initial treatment response or symptom severity at week 2. These results indicate that prediction of TRD prior to undergoing a second round of antidepressant treatment could be feasible even in the absence of biomarker data.
Enhancement of snow cover change detection with sparse representation and dictionary learning
NASA Astrophysics Data System (ADS)
Varade, D.; Dikshit, O.
2014-11-01
Sparse representation and decoding are often used for denoising images and compressing images with respect to their inherent features. In this paper, we adopt a methodology incorporating sparse representation of a snow cover change map using a K-SVD trained dictionary and sparse decoding to enhance the change map. Pixels often falsely characterized as "changes" are eliminated using this approach. The preliminary change map was generated using differenced NDSI or S3 maps in the case of Resourcesat-2 and Landsat 8 OLI imagery, respectively. These maps are extracted into patches for compressed sensing using the Discrete Cosine Transform (DCT) to generate an initial dictionary, which is trained by the K-SVD approach. The trained dictionary is used for sparse coding of the change map using the Orthogonal Matching Pursuit (OMP) algorithm. The reconstructed change map incorporates a greater degree of smoothing and represents the features (snow cover changes) with better accuracy. The enhanced change map is segmented using k-means to discriminate between the changed and non-changed pixels. The segmented enhanced change map is compared, firstly, with the difference of Support Vector Machine (SVM) classified NDSI maps and, secondly, with reference data generated as a mask by visual interpretation of the two input images. The methodology is evaluated using multi-spectral datasets from Resourcesat-2 and Landsat-8. The k-hat statistic is computed to determine the accuracy of the proposed approach.
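With scikit-learn stand-ins for K-SVD, the enhancement step can be sketched as below: dictionary learning plays the role of K-SVD and OMP re-codes the change-map patches; the patch array is a placeholder, and patch size, dictionary size, and sparsity are assumed values.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

patches = np.random.rand(500, 64)   # placeholder: 8x8 tiles of the change map

# Dictionary learning stands in for K-SVD; OMP re-codes each patch with few
# atoms, which smooths isolated false "change" pixels on reconstruction.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0).fit(patches)
coder = SparseCoder(dictionary=dico.components_,
                    transform_algorithm='omp', transform_n_nonzero_coefs=5)
reconstructed = coder.transform(patches) @ dico.components_
```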
Mastmeyer, André; Engelke, Klaus; Fuchs, Christina; Kalender, Willi A
2006-08-01
We have developed a new hierarchical 3D technique to segment the vertebral bodies in order to measure bone mineral density (BMD) with high trueness and precision in volumetric CT datasets. The hierarchical approach starts with a coarse separation of the individual vertebrae, applies a variety of techniques to segment the vertebral bodies with increasing detail and ends with the definition of an anatomic coordinate system for each vertebral body, relative to which up to 41 trabecular and cortical volumes of interest are positioned. In a pre-segmentation step constraints consisting of Boolean combinations of simple geometric shapes are determined that enclose each individual vertebral body. Bound by these constraints viscous deformable models are used to segment the main shape of the vertebral bodies. Volume growing and morphological operations then capture the fine details of the bone-soft tissue interface. In the volumes of interest bone mineral density and content are determined. In addition, in the segmented vertebral bodies geometric parameters such as volume or the length of the main axes of inertia can be measured. Intra- and inter-operator precision errors of the segmentation procedure were analyzed using existing clinical patient datasets. Results for segmented volume, BMD, and coordinate system position were below 2.0%, 0.6%, and 0.7%, respectively. Trueness was analyzed using phantom scans. The bias of the segmented volume was below 4%; for BMD it was below 1.5%. The long-term goal of this work is improved fracture prediction and patient monitoring in the field of osteoporosis. A true 3D segmentation also enables an accurate measurement of geometrical parameters that may augment the clinical value of a pure BMD analysis.
Isomap transform for segmenting human body shapes.
Cerveri, P; Sarro, K J; Marchente, M; Barros, R M L
2011-09-01
Segmentation of the 3D human body is a very challenging problem in applications exploiting volume capture data. Direct clustering in the Euclidean space is usually complex or even unsolvable. This paper presents an original method based on the Isomap (isometric feature mapping) transform of the volume dataset. The 3D articulated posture is mapped by Isomap into the pose of Da Vinci's Vitruvian man. The limbs are unrolled from each other and separated from the trunk and pelvis, and the topology of the human body shape is recovered. In such a configuration, Hoshen-Kopelman clustering applied to concentric spherical shells is used to automatically group points into labelled principal curves. Shepard interpolation is utilised to back-map points of the principal curves into the original volume space. Experimental results on many different postures have proved the validity of the proposed method. Errors of less than 2 cm and 3° were obtained in the location of the joint centres and the direction of the rotation axes, respectively, which qualifies this procedure as a potential tool for markerless motion analysis.
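The embedding step alone can be sketched in a few lines of scikit-learn; the point array is a placeholder, the neighbour count is an assumption, and the Hoshen-Kopelman clustering stage is not shown.

```python
import numpy as np
from sklearn.manifold import Isomap

points = np.random.rand(5000, 3)    # placeholder: occupied voxel coordinates

# Isomap preserves geodesic (along-the-body) distances, so limbs unroll into
# separated branches of the embedding; clustering then happens in this space.
embedded = Isomap(n_neighbors=10, n_components=3).fit_transform(points)
```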
Multi-level machine learning prediction of protein-protein interactions in Saccharomyces cerevisiae.
Zubek, Julian; Tatjewski, Marcin; Boniecki, Adam; Mnich, Maciej; Basu, Subhadip; Plewczynski, Dariusz
2015-01-01
Accurate identification of protein-protein interactions (PPI) is the key step in understanding proteins' biological functions, which are typically context-dependent. Many existing PPI predictors rely on aggregated features from protein sequences, however only a few methods exploit local information about specific residue contacts. In this work we present a two-stage machine learning approach for prediction of protein-protein interactions. We start with the carefully filtered data on protein complexes available for Saccharomyces cerevisiae in the Protein Data Bank (PDB) database. First, we build linear descriptions of interacting and non-interacting sequence segment pairs based on their inter-residue distances. Secondly, we train machine learning classifiers to predict binary segment interactions for any two short sequence fragments. The final prediction of the protein-protein interaction is done using the 2D matrix representation of all-against-all possible interacting sequence segments of both analysed proteins. The level-I predictor achieves 0.88 AUC for micro-scale, i.e., residue-level prediction. The level-II predictor improves the results further by a more complex learning paradigm. We perform 30-fold macro-scale, i.e., protein-level cross-validation experiment. The level-II predictor using PSIPRED-predicted secondary structure reaches 0.70 precision, 0.68 recall, and 0.70 AUC, whereas other popular methods provide results below 0.6 threshold (recall, precision, AUC). Our results demonstrate that multi-scale sequence features aggregation procedure is able to improve the machine learning results by more than 10% as compared to other sequence representations. Prepared datasets and source code for our experimental pipeline are freely available for download from: http://zubekj.github.io/mlppi/ (open source Python implementation, OS independent).
Machine Identification of Martian Craters Using Digital Elevation Data
NASA Astrophysics Data System (ADS)
Bue, B.; Stepinski, T. F.
2005-12-01
Impact craters are among the most studied features on the Martian surface. Their importance stems from the wealth of information that a detailed analysis of their number and morphology can bring forth. Because manually building a comprehensive dataset of craters is a laborious process, there have been many previous attempts to develop an automatic, image-based crater identifier. The resulting identifiers suffer from low efficiency and remain in an experimental stage. We have developed a DEM-based, fully autonomous crater identifier that takes an arbitrarily large Martian site as an input and produces a catalog of craters as an output. Using the topography data we calculate a topographic profile curvature that is thresholded to produce a binary image: pixels having maximum negative curvature are labeled black, and the remaining pixels are labeled white. The black pixels outline craters because crater rims are the most convex feature in the Martian landscape. The Hough Transform (HT) is used for the actual recognition of craters in the binary image. The image is first segmented (without cutting the craters) into a large number of smaller images using the "flood" algorithm that identifies basins. This segmentation makes the application of the highly inefficient HT to large sites possible. The identifier is applied to a 10^6 km^2 site located around the Herschel crater. According to the Barlow catalog, this site contains 485 craters >5 km. Our identifier finds 1099 segments, 628 of which are classified as craters >5 km. Overall, there is an excellent agreement between the two catalogs, although the specific statistics are still pending due to the difficulties in recalculating the MDIM 1 coordinate system used in the Barlow catalog to the MDIM 2.1 coordinate system used by our identifier.
Oregon Cascades Play Fairway Analysis: Raster Datasets and Models
Adam Brandt
2015-11-15
This submission includes maps of the spatial distribution of basaltic and felsic rocks in the Oregon Cascades. It also includes a final Play Fairway Analysis (PFA) model, with the heat and permeability composite risk segments (CRS) supplied separately. Metadata for each raster dataset can be found within the zip files, in the TIF images.
Bayesian Fusion of Color and Texture Segmentations
NASA Technical Reports Server (NTRS)
Manduchi, Roberto
2000-01-01
In many applications one would like to use information from both color and texture features in order to segment an image. We propose a novel technique to combine "soft" segmentations computed for two or more features independently. Our algorithm merges models according to a mean entropy criterion and makes it possible to choose the appropriate number of classes for the final grouping. This technique also makes it possible to improve the quality of supervised classification based on one feature (e.g., color) by merging information from unsupervised segmentation based on another feature (e.g., texture).
Integration of heterogeneous features for remote sensing scene classification
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang
2018-01-01
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) can provide various properties for RS images, and we therefore propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
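The fusion module can be approximated by a fixed-weight kernel combination, a simplified stand-in for MKL (which learns the weights); in the sketch below, the feature arrays are placeholders and the weights are assumed.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X_shape = np.random.rand(120, 64)     # placeholder: one feature type per sample
X_texture = np.random.rand(120, 32)   # placeholder: a second feature type
y = np.random.randint(0, 6, 120)      # placeholder scene labels

# Weighted sum of per-feature-type kernels; MKL would learn these weights.
K = 0.5 * rbf_kernel(X_shape) + 0.5 * rbf_kernel(X_texture)
clf = SVC(kernel='precomputed').fit(K, y)
```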
Souza, Roberto; Lucena, Oeslle; Garrafa, Julia; Gobbi, David; Saluzzi, Marina; Appenzeller, Simone; Rittner, Letícia; Frayne, Richard; Lotufo, Roberto
2018-04-15
This paper presents an open, multi-vendor, multi-field strength magnetic resonance (MR) T1-weighted volumetric brain imaging dataset, named Calgary-Campinas-359 (CC-359). The dataset is composed of images of older healthy adults (29-80 years) acquired on scanners from three vendors (Siemens, Philips and General Electric) at both 1.5 T and 3 T. CC-359 is comprised of 359 datasets, approximately 60 subjects per vendor and magnetic field strength. The dataset is approximately age and gender balanced, subject to the constraints of the available images. It provides consensus brain extraction masks for all volumes generated using supervised classification. Manual segmentation results for twelve randomly selected subjects performed by an expert are also provided. The CC-359 dataset allows investigation of 1) the influences of both vendor and magnetic field strength on quantitative analysis of brain MR; 2) parameter optimization for automatic segmentation methods; and potentially 3) machine learning classifiers with big data, specifically those based on deep learning methods, as these approaches require a large amount of data. To illustrate the utility of this dataset, we compared to the results of a supervised classifier, the results of eight publicly available skull stripping methods and one publicly available consensus algorithm. A linear mixed effects model analysis indicated that vendor (p-value<0.001) and magnetic field strength (p-value<0.001) have statistically significant impacts on skull stripping results. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Jing; Xie, Weixin; Pei, Jihong
2018-03-01
Sea-land segmentation is one of the key technologies of sea target detection in remote sensing images. Existing algorithms suffer from low accuracy, low universality and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images, with island removal. Firstly, the coastline data are extracted and all of the land area is labeled by using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background at the edge of the coastline. Based on this multi-Gaussian sea background model, the sea pixels and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain the accurate sea-land segmentation. Subjective visual comparison of the experimental results shows that the proposed method has high segmentation accuracy, wide applicability and strong anti-disturbance ability.
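One of the three features can be sketched with scikit-image as below: local entropy over a small window, which is high on textured land and low on open sea; the window radius is an assumed parameter, and the texture and gradient-mean channels would be stacked the same way to form the 3D feature vector.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def local_entropy(gray01, radius=5):
    """Local entropy map of a float image scaled to [0, 1]."""
    return entropy(img_as_ubyte(gray01), disk(radius))
```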
Metric Learning for Hyperspectral Image Segmentation
NASA Technical Reports Server (NTRS)
Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca
2011-01-01
We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
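The metric-learning step corresponds closely to fitting a multiclass LDA and projecting new scenes through its transform, as in the sketch below; the arrays are placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

train_spectra = np.random.rand(1000, 50)       # placeholder labelled pixels
train_labels = np.random.randint(0, 4, 1000)   # placeholder mineral classes
scene_spectra = np.random.rand(500, 50)        # placeholder new scene

# The LDA transform defines the learned metric: Euclidean distances in the
# projected space emphasise class-separating spectral features, which then
# drive the graph-based segmentation.
lda = LinearDiscriminantAnalysis().fit(train_spectra, train_labels)
projected = lda.transform(scene_spectra)
```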
Adaptive segmentation of cerebrovascular tree in time-of-flight magnetic resonance angiography.
Hao, J T; Li, M L; Tang, F L
2008-01-01
Accurate segmentation of the human vasculature is an important prerequisite for a number of clinical procedures, such as diagnosis, image-guided neurosurgery and pre-surgical planning. In this paper, an improved statistical approach to extracting the whole cerebrovascular tree in time-of-flight magnetic resonance angiography is proposed. Firstly, in order to get a more accurate segmentation result, a localized observation model is proposed instead of defining the observation model over the entire dataset. Secondly, for the binary segmentation, an improved Iterated Conditional Modes (ICM) algorithm is presented to accelerate the segmentation process. The experimental results showed that the proposed algorithm obtains more satisfactory segmentation results and requires less processing time than conventional approaches.
Video segmentation using keywords
NASA Astrophysics Data System (ADS)
Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet
2018-04-01
In the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground, and creating these annotations precisely takes considerable time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions containing objects whose labels match the given keywords in the first frame. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest wider testing, combining other methods to improve this result in the future.
Optimal retinal cyst segmentation from OCT images
NASA Astrophysics Data System (ADS)
Oguz, Ipek; Zhang, Li; Abramoff, Michael D.; Sonka, Milan
2016-03-01
Accurate and reproducible segmentation of cysts and fluid-filled regions from retinal OCT images is an important step allowing quantification of the disease status, longitudinal disease progression, and response to therapy in wet-pathology retinal diseases. However, segmentation of fluid-filled regions from OCT images is a challenging task due to their inhomogeneous appearance, the unpredictability of their number, size and location, as well as the intensity profile similarity between such regions and certain healthy tissue types. While machine learning techniques can be beneficial for this task, they require large training datasets and are often over-fitted to the appearance models of specific scanner vendors. We propose a knowledge-based approach that leverages a carefully designed cost function and graph-based segmentation techniques to provide a vendor-independent solution to this problem. We illustrate the results of this approach on two publicly available datasets with a variety of scanner vendors and retinal disease status. Compared to a previous machine-learning based approach, the volume similarity error was dramatically reduced from 81.3±56.4% to 22.2±21.3% (paired t-test, p << 0.001).
Nucleus segmentation in histology images with hierarchical multilevel thresholding
NASA Astrophysics Data System (ADS)
Ahmady Phoulady, Hady; Goldgof, Dmitry B.; Hall, Lawrence O.; Mouton, Peter R.
2016-03-01
Automatic segmentation of histological images is an important step for increasing throughput while maintaining high accuracy, avoiding variation from subjective bias, and reducing the costs for diagnosing human illnesses such as cancer and Alzheimer's disease. In this paper, we present a novel method for unsupervised segmentation of cell nuclei in stained histology tissue. Following an initial preprocessing step involving color deconvolution and image reconstruction, the segmentation step consists of multilevel thresholding and a series of morphological operations. The only parameter required for the method is the minimum region size, which is set according to the resolution of the image. Hence, the proposed method requires no training sets or parameter learning. Because the algorithm requires no assumptions or a priori information with regard to cell morphology, the automatic approach is generalizable across a wide range of tissues. Evaluation across a dataset consisting of diverse tissues, including breast, liver, gastric mucosa and bone marrow, shows superior performance over four other recent methods on the same dataset in terms of F-measure with precision and recall of 0.929 and 0.886, respectively.
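The two core steps can be sketched with scikit-image as below; the class count and minimum region size are assumed values, and the color deconvolution preprocessing is not shown.

```python
import numpy as np
from skimage.filters import threshold_multiotsu
from skimage.morphology import binary_opening, disk, remove_small_objects

def segment_nuclei(gray, classes=3, min_size=64):
    """Multilevel Otsu keeps the darkest class as nuclei; morphology cleans up."""
    thresholds = threshold_multiotsu(gray, classes=classes)
    nuclei = gray < thresholds[0]        # darkest (hematoxylin-stained) class
    nuclei = binary_opening(nuclei, disk(2))
    return remove_small_objects(nuclei, min_size=min_size)
```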
NASA Astrophysics Data System (ADS)
Ge, Xuming
2017-08-01
The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.
NASA Astrophysics Data System (ADS)
Ahmed, S.; Iftekharuddin, K. M.; Ogg, R. J.; Laningham, F. H.
2009-02-01
Our previous work suggests that fractal-based texture features are very useful for detection, segmentation and classification of posterior-fossa (PF) pediatric brain tumors in multimodal MRI. In this work, we investigate and compare the efficacy of our texture features, namely fractal and multifractional Brownian motion (mBm), and intensity, along with another useful level-set based shape feature, in PF tumor segmentation. We study feature selection and ranking using the Kullback-Leibler divergence (KLD), with subsequent tumor segmentation, all in an integrated Expectation Maximization (EM) framework. We study the efficacy of all four features in multimodal as well as individual MRI modalities such as T1, T2 and FLAIR. Both the KLD feature plots and an information-theoretic entropy measure suggest that the mBm feature offers the maximum separation between tumor and non-tumor tissues in the T1 and FLAIR MRI modalities. The same metrics show that the intensity feature offers the maximum separation between tumor and non-tumor tissue in the T2 MRI modality. The efficacy of these features is further validated in segmenting PF tumors using both single-modality and multimodal MRI for six pediatric patients with over 520 real MR images.
Learning from Demonstration: Generalization via Task Segmentation
NASA Astrophysics Data System (ADS)
Ettehadi, N.; Manaffam, S.; Behal, A.
2017-10-01
In this paper, a motion segmentation algorithm design is presented with the goal of segmenting a learned trajectory from demonstration such that each segment is locally maximally different from its neighbors. This segmentation is then exploited to appropriately scale (dilate/squeeze and/or rotate) a nominal trajectory learned from a few demonstrations on a fixed experimental setup such that it is applicable to different experimental settings without expanding the dataset and/or retraining the robot. The algorithm is computationally efficient in the sense that it allows facile transition between different environments. Experimental results using the Baxter robotic platform showcase the ability of the algorithm to accurately transfer a feeding task.
Remote, non-contacting personnel bio-identification using microwave radiation
NASA Technical Reports Server (NTRS)
McGrath, William R. (Inventor); Talukder, Ashit (Inventor)
2011-01-01
A system to remotely identify a person by utilizing a microwave cardiogram, where some embodiments segment a signal representing cardiac beats into segments, extract features from the segments, and perform pattern identification of the segments and features with a pre-existing data set. Other embodiments are described and claimed.
Automatic Organ Segmentation for CT Scans Based on Super-Pixel and Convolutional Neural Networks.
Liu, Xiaoming; Guo, Shuxu; Yang, Bingtao; Ma, Shuzhi; Zhang, Huimao; Li, Jing; Sun, Changjian; Jin, Lanyi; Li, Xueyan; Yang, Qi; Fu, Yu
2018-04-20
Accurate segmentation of a specific organ from computed tomography (CT) scans is a basic and crucial task for accurate diagnosis and treatment. To avoid time-consuming manual optimization and to help physicians distinguish diseases, an automatic organ segmentation framework is presented. The framework utilizes convolutional neural networks (CNNs) to classify pixels. To reduce the redundant inputs, simple linear iterative clustering (SLIC) of super-pixels and a support vector machine (SVM) classifier are introduced. To establish a precise organ boundary at the single-pixel level, the pixels are classified step-by-step. First, SLIC is used to cut an image into grids and extract their respective digital signatures. Next, the signatures are classified by the SVM, and rough edges are acquired. Finally, a precise boundary is obtained by the CNN, which operates on patches around each pixel. The framework is applied to abdominal CT scans of livers and high-resolution computed tomography (HRCT) scans of lungs. The experimental CT scans are derived from two public datasets (Sliver 07 and a Chinese local dataset). Experimental results show that the proposed method can precisely and efficiently detect the organs. The method consumes 38 s/slice for liver segmentation. The Dice coefficient of the liver segmentation results reaches 97.43%; for lung segmentation, the Dice coefficient is 97.93%. These findings demonstrate that the proposed framework is a favorable method for lung segmentation of HRCT scans.
Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M
2015-01-01
Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves on the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
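The non-structured GMM variant can be sketched with scikit-learn as below; the feature array is a placeholder for stacked multi-sequence intensities, the component count is an assumption, and the tissue-probability-map postprocess that identifies which cluster is tumour is not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

features = np.random.rand(20000, 4)   # placeholder: per-voxel MR sequence values

# Unsupervised clustering of voxel feature vectors; deciding which component
# corresponds to tumour is left to the statistical postprocess.
labels = GaussianMixture(n_components=5, covariance_type='full').fit_predict(features)
```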
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification.
Yu, Yunlong; Liu, Fuxian
2018-01-01
One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractors to learn deep features from the original aerial image and from the aerial image processed by saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use an extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: the UC-Merced dataset with 21 scene categories, the WHU-RS dataset with 19 scene categories, the AID dataset with 30 scene categories, and the NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture achieves a significant improvement in classification accuracy over all state-of-the-art references.
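The fusion-plus-ELM stage is simple enough to sketch. Below is a minimal, self-contained ELM (random hidden layer solved in closed form by ridge-regularised least squares) applied to concatenated two-stream features; the CNN extractors themselves are assumed given, and all sizes are illustrative.

```python
# Minimal extreme learning machine: random hidden layer + closed-form output weights.
import numpy as np

class ELM:
    def __init__(self, n_hidden=2048, reg=1e-3, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # fixed random hidden layer
        T = np.eye(n_classes)[y]                      # one-hot targets
        # Ridge-regularised least squares gives the output weights in closed form.
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ T)
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

# Early fusion: concatenate RGB-stream and saliency-stream CNN features, e.g.
#   feats = np.concatenate([rgb_feats, sal_feats], axis=1)
#   clf = ELM().fit(feats, labels)
```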
Hair segmentation using adaptive threshold from edge and branch length measures.
Lee, Ian; Du, Xian; Anthony, Brian
2017-10-01
Non-invasive imaging techniques allow the monitoring of skin structure and the diagnosis of skin diseases in clinical applications. However, hair in skin images hampers the imaging and classification of the skin structure of interest. Although many hair segmentation methods have been proposed for digital hair removal, a major challenge remains in detecting hairs that are thin, overlapping, of similar contrast or color to the underlying skin, or overlaid on highly textured skin structure. To solve this problem, we present an automatic hair segmentation method that uses edge density (ED) and mean branch length (MBL) to measure hair. First, hair is detected by the integration of a top-hat transform and a modified second-order Gaussian filter. Second, we employ a robust adaptive threshold of ED and MBL to generate a hair mask. Third, the hair mask is refined by k-NN classification of hair and skin pixels. The proposed algorithm was tested using two datasets, of healthy skin images and lesion images respectively, taken from different imaging platforms under various illumination levels and varying skin colors. We compared the hair detection and segmentation results of our algorithm against six state-of-the-art hair segmentation methods. Our method exhibits high sensitivity (75%) and specificity (95%), indicating significantly higher accuracy and a better balance between true positive and false positive detection than the other methods. Published by Elsevier Ltd.
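A minimal sketch of the detection step only (the first step above), assuming dark hairs on brighter skin: a black top-hat transform and scikit-image's Sato ridge filter stand in for the paper's modified second-order Gaussian filter; the adaptive ED/MBL threshold and k-NN refinement are omitted.

```python
# Minimal hair-detection sketch: black top-hat x ridge filter, Otsu threshold.
from skimage.color import rgb2gray
from skimage.filters import sato, threshold_otsu
from skimage.morphology import black_tophat, disk

def detect_hair(rgb):
    gray = rgb2gray(rgb)
    tophat = black_tophat(gray, disk(5))            # dark hairs on brighter skin
    ridges = sato(gray, sigmas=range(1, 4), black_ridges=True)  # thin curvilinear
    response = tophat * ridges                      # integrate the two detectors
    return response > threshold_otsu(response)      # provisional hair mask
```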
SegAuth: A Segment-based Approach to Behavioral Biometric Authentication.
Li, Yanyan; Xie, Mengjun; Bian, Jiang
2016-10-01
Many studies have been conducted to apply behavioral biometric authentication on/with mobile devices, and they have shown promising results. However, concern about the verification accuracy of behavioral biometrics is still common given their dynamic nature. In this paper, we address the accuracy concern from a new perspective: behavior segments, that is, segments of a gesture instead of the whole gesture as the basic building block for behavioral biometric authentication. With this unique perspective, we propose a new behavioral biometric authentication method called SegAuth, which can be applied to various gesture or motion based authentication scenarios. SegAuth can achieve high accuracy by focusing on each user's distinctive gesture segments that frequently appear across his or her gestures. In SegAuth, a time series derived from a gesture/motion is first partitioned into segments and then transformed into a set of string tokens in which the tokens representing distinctive, repetitive segments are associated with higher genuine probabilities than those tokens that are common across users. An overall genuine score calculated from all the tokens derived from a gesture is used to determine the user's authenticity. We have assessed the effectiveness of SegAuth using 4 different datasets. Our experimental results demonstrate that SegAuth consistently achieves higher accuracy than existing popular methods on the evaluation datasets.
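The token idea can be sketched compactly. The snippet below is a simplified illustration, not SegAuth itself: fixed-length windows and SAX-style quantisation stand in for the paper's segmentation, and the genuine score up-weights tokens frequent for the claimed user but rare in the population.

```python
# Simplified segment-token sketch (illustrative stand-in for SegAuth).
import numpy as np
from collections import Counter

def tokenize(series, win=20, levels="abcd"):
    z = (series - series.mean()) / (series.std() + 1e-9)
    cuts = np.quantile(z, np.linspace(0, 1, len(levels) + 1)[1:-1])
    segs = [z[i:i + win] for i in range(0, len(z) - win + 1, win)]
    # One short string token per segment (every 5th sample quantised to a symbol).
    return ["".join(levels[d] for d in np.digitize(s[::5], cuts)) for s in segs]

def genuine_score(gesture, user_tokens: Counter, population_tokens: Counter):
    toks = tokenize(gesture)
    score = 0.0
    for t in toks:
        p_user = user_tokens[t] / max(sum(user_tokens.values()), 1)
        p_pop = population_tokens[t] / max(sum(population_tokens.values()), 1)
        score += p_user / (p_pop + 1e-6)   # distinctive, repetitive tokens weigh more
    return score / max(len(toks), 1)
```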
Simultaneous segmentation of the bone and cartilage surfaces of a knee joint in 3D
NASA Astrophysics Data System (ADS)
Yin, Y.; Zhang, X.; Anderson, D. D.; Brown, T. D.; Hofwegen, C. Van; Sonka, M.
2009-02-01
We present a novel framework for the simultaneous segmentation of multiple interacting surfaces belonging to multiple mutually interacting objects. The method is a non-trivial extension of our previously reported optimal multi-surface segmentation. Considering an example application of knee-cartilage segmentation, the framework consists of the following main steps: 1) Shape model construction: building a mean shape for each bone of the joint (femur, tibia, patella) from interactively segmented volumetric datasets, then using the resulting mean-shape model to identify cartilage, non-cartilage, and transition areas on the mean-shape bone model surfaces. 2) Presegmentation: employing an iterative optimal surface detection method to achieve approximate segmentation of individual bone surfaces. 3) Cross-object surface mapping: detecting inter-bone equidistant separating sheets to help identify corresponding vertex pairs for all interacting surfaces. 4) Multi-object, multi-surface graph construction and final segmentation: constructing a single multi-bone, multi-surface graph so that two surfaces (bone and cartilage) with zero or non-zero intervening distances can be detected for each bone of the joint, according to whether cartilage is locally absent or present on the bone. To define inter-object relationships, corresponding vertex pairs identified using the separating sheets were interlinked in the graph. The graph optimization algorithm acted on the entire multi-object, multi-surface graph to yield a globally optimal solution. The segmentation framework was tested on 16 MR-DESS knee-joint datasets from the Osteoarthritis Initiative database. The average signed surface positioning error for the 6 detected surfaces ranged from 0.00 to 0.12 mm. When independently initialized, the signed reproducibility error of bone and cartilage segmentation ranged from 0.00 to 0.26 mm. The results showed that this framework provides robust, accurate, and reproducible segmentation of the knee joint bone and cartilage surfaces of the femur, tibia, and patella. As a general segmentation tool, the developed framework can be applied to a broad range of multi-object segmentation problems.
A method of plane geometry primitive presentation
NASA Astrophysics Data System (ADS)
Jiao, Anbo; Luo, Haibo; Chang, Zheng; Hui, Bin
2014-11-01
Point features and line features are basic elements of object feature sets, and they play an important role in object matching and recognition. On one hand, point features are sensitive to noise; on the other hand, an image usually contains a huge number of point features, which makes matching complex. Line features include straight line segments and curves. One difficulty in straight line segment matching is the uncertainty of endpoint location; another is the fracture problem, where short straight line segments must be joined to form a long straight line segment. For curves, in addition to the above problems, there is the further difficulty of quantitatively describing the shape difference between curves. Because these problems of point and line features affect the robustness and accuracy of target description, a method of plane geometry primitive presentation is proposed to describe the significant structure of an object. Firstly, two types of primitives are constructed: the intersecting line primitive and the blob primitive. Secondly, a line segment detector (LSD) is applied to detect line segments, from which intersecting line primitives are extracted. Finally, the robustness and accuracy of the plane geometry primitive presentation method are studied. The method has a good ability to capture the structural information of an object, even under rotation or scale change in the image. Experimental results verify the robustness and accuracy of the method.
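A minimal sketch of extracting intersecting line primitives: here probabilistic Hough (OpenCV) stands in for the LSD detector named above, and two segments form a primitive when their supporting lines intersect near an endpoint of each.

```python
# Minimal intersecting-line-primitive sketch (Hough stands in for LSD).
import cv2
import numpy as np

def line_intersection(s1, s2):
    x1, y1, x2, y2 = s1; x3, y3, x4, y4 = s2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None                      # parallel supporting lines
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def near_endpoint(p, seg, tol):
    x1, y1, x2, y2 = seg
    return min(np.hypot(p[0] - x1, p[1] - y1),
               np.hypot(p[0] - x2, p[1] - y2)) <= tol

def intersecting_line_primitives(gray, max_gap=10.0):
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=5)
    segs = [] if segs is None else segs[:, 0, :]
    prims = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            p = line_intersection(segs[i], segs[j])
            if p is not None and near_endpoint(p, segs[i], max_gap) \
                    and near_endpoint(p, segs[j], max_gap):
                prims.append((segs[i], segs[j], p))   # one intersecting primitive
    return prims
```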
DeepNAT: Deep convolutional neural network for segmenting neuroanatomy.
Wachinger, Christian; Reuter, Martin; Klein, Tassilo
2018-04-15
We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address the class imbalance problem, we arrange two networks hierarchically, where the first separates foreground from background, and the second identifies 25 brain structures within the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have high potential for adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
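A minimal sketch of the patch classifier described above (three convolutional layers with pooling and batch normalization, then fully connected layers with dropout), written in PyTorch with illustrative sizes; the hierarchical two-network arrangement, coordinate augmentation, multi-task neighbour prediction, and CRF postprocessing are omitted.

```python
# Minimal 3D patch classifier in the spirit of the architecture described above.
import torch
import torch.nn as nn

class PatchNet3D(nn.Module):
    def __init__(self, n_classes=26, in_ch=1):   # 25 structures + background
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3), nn.BatchNorm3d(32), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3), nn.BatchNorm3d(64), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(64, 128, 3), nn.BatchNorm3d(128), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):          # x: (batch, 1, 23, 23, 23) patches, for example
        return self.classifier(self.features(x))
```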
Radiomics-based features for pattern recognition of lung cancer histopathology and metastases.
Ferreira Junior, José Raniery; Koenigkam-Santos, Marcel; Cipriano, Federico Enrique Garcia; Fabro, Alexandre Todorovic; Azevedo-Marques, Paulo Mazzoncini de
2018-06-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its poor prognosis varies markedly according to tumor staging. Computed tomography (CT) is the imaging modality of choice for lung cancer evaluation, being used for diagnosis and clinical staging. Besides tumor stage, other features, like histopathological subtype, can also add prognostic information. In this work, radiomics-based CT features were used to predict lung cancer histopathology and metastases using machine learning models. Local image datasets of confirmed primary malignant pulmonary tumors were retrospectively evaluated for testing and validation. CT images acquired with the same protocol were semiautomatically segmented. Tumors were characterized by clinical features and computed attributes of intensity, histogram, texture, shape, and volume. Three machine learning classifiers used up to 100 selected features to perform the analysis. Radiomics-based features yielded areas under the receiver operating characteristic curve of 0.89, 0.97, and 0.92 at testing and 0.75, 0.71, and 0.81 at validation for lymph nodal metastasis, distant metastasis, and histopathology pattern recognition, respectively. The radiomics characterization approach presents great potential for use in a computational model to aid lung cancer histopathological subtype diagnosis as a "virtual biopsy" and to predict metastasis for therapy decision support, without the need for whole-body imaging. Copyright © 2018 Elsevier B.V. All rights reserved.
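The modelling stage maps onto a short scikit-learn pipeline. This is a hedged sketch, not the authors' code: univariate selection of up to 100 features feeds a random forest (one of several possible classifiers), scored by cross-validated ROC AUC; the radiomic feature matrix `X` and labels `y` are assumed precomputed.

```python
# Minimal radiomics modelling sketch: select <=100 features, score by CV ROC AUC.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def radiomics_auc(X, y):
    model = make_pipeline(
        SelectKBest(f_classif, k=min(100, X.shape[1])),      # up to 100 features
        RandomForestClassifier(n_estimators=500, random_state=0),
    )
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```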
Automated estimation of leaf distribution for individual trees based on TLS point clouds
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Rutzinger, Martin; Bremer, Magnus
2017-04-01
Light Detection and Ranging (LiDAR), especially ground-based LiDAR (Terrestrial Laser Scanning, TLS), is an operationally used and widely available measurement tool supporting forest inventory updating and research in forest ecology. High-resolution point clouds from TLS already resolve single leaves, which can be used for a more precise estimation of Leaf Area Index (LAI) and for more accurate biomass estimation. However, a methodology for extracting single leaves from unclassified point clouds of individual trees is still missing. The aim of this study is to present a novel segmentation approach to extract single leaves and to derive features related to leaf morphology (such as area, slope, length and width) for each single leaf from TLS point cloud data. For the study, two exemplary single trees were scanned in leaf-on condition on the university campus of Innsbruck during calm wind conditions. A northern red oak (Quercus rubra) was scanned by a discrete-return Optech ILRIS-3D TLS scanner and a tulip tree (Liriodendron tulipifera) with a Riegl VZ-6000 scanner. During the scanning campaign, a reference dataset was measured in parallel with the scanning: 230 leaves were randomly collected around the lower branches of the tree and photographed. The developed workflow comprises the following steps: first, normal vectors and eigenvalues are calculated over a user-specified neighborhood. Then, using the direction of the largest eigenvalue, outliers (i.e., ghost points) are removed. After that, region-growing segmentation based on curvature and the angles between normal vectors is applied to the filtered point cloud. A RANSAC plane-fitting algorithm is applied to each segment to extract segment-based normal vectors. Using the resulting segment features, the stem and branches are labeled as non-leaf, and the other segments are classified as leaf. The segmentation parameters were validated as follows: i) the summed area of the collected leaves was compared with that of the point cloud, ii) the segmented leaf length-width ratios were examined, and iii) the distributions of leaf area for the segmented and reference leaves were compared, from which the ideal parameter set was found. The results show that leaves can be captured with the developed workflow and that slope can be determined robustly for the segmented leaves. However, area, length and width values depend systematically on the angle and distance from the scanner. To correct for this systematic underestimation, more systematic measurements or LiDAR simulation are required for further detailed analysis. The results of the leaf segmentation algorithm show high potential for generating more precise tree models with correctly located leaves, providing more precise input models for biological modeling of LAI or atmospheric correction studies. The presented workflow can also be used to monitor changes in leaf angle due to sun irradiation, water balance, and day-night rhythm.
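The first workflow step (per-point normals and eigenvalues over a specified neighbourhood) can be sketched directly; a k-nearest-neighbour PCA is assumed here, with the eigenvalues then available for the ghost-point filter and region growing.

```python
# Minimal per-point normal/eigenvalue estimation via local PCA over kNN neighbourhoods.
import numpy as np
from scipy.spatial import cKDTree

def point_normals(xyz, k=20):
    # xyz: (n_points, 3) TLS point cloud.
    tree = cKDTree(xyz)
    _, idx = tree.query(xyz, k=k)
    normals = np.empty_like(xyz)
    eigvals = np.empty((len(xyz), 3))
    for i, nb in enumerate(idx):
        cov = np.cov(xyz[nb].T)            # 3x3 covariance of the neighbourhood
        w, v = np.linalg.eigh(cov)         # ascending eigenvalues
        eigvals[i] = w
        normals[i] = v[:, 0]               # smallest-eigenvalue direction = normal
    # eigvals feed the ghost-point filter; normals feed region growing.
    return normals, eigvals
```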
Zhou, Zhen; Wang, Jian-Bao; Zang, Yu-Feng; Pan, Gang
2018-01-01
Classification approaches have been increasingly applied to differentiate patients and normal controls using resting-state functional magnetic resonance imaging (RS-fMRI) data. Although most previous classification studies have reported promising accuracy within individual datasets, achieving high accuracy across multiple datasets remains challenging for two main reasons: high dimensionality and high variability across subjects. We used two independent RS-fMRI datasets (n = 31 and 46, respectively), both with eyes closed (EC) and eyes open (EO) conditions. For each dataset, we first reduced the number of features to a small number of brain regions with paired t-tests, using the amplitude of low frequency fluctuation (ALFF) as a metric. Second, we employed a new method for feature extraction, named the PAIR method, examining EC and EO as paired conditions rather than independent conditions. Specifically, for each dataset, we obtained EC minus EO (EC-EO) maps of ALFF from half of the subjects (n = 15 for dataset-1, n = 23 for dataset-2) and EO-EC maps from the other half (n = 16 for dataset-1, n = 23 for dataset-2). A support vector machine (SVM) method was used for classification of the EC and EO RS-fMRI mappings. The mean classification accuracy of the PAIR method was 91.40% for dataset-1 and 92.75% for dataset-2 in the conventional frequency band of 0.01–0.08 Hz. For cross-dataset validation, we applied the classifier from dataset-1 directly to dataset-2, and vice versa. The mean accuracy of cross-dataset validation was 94.93% for dataset-1 to dataset-2 and 90.32% for dataset-2 to dataset-1 in the 0.01–0.08 Hz range. For the UNPAIR method, classification accuracy was substantially lower (mean 69.89% for dataset-1 and 82.97% for dataset-2), and was much lower for cross-dataset validation (64.69% for dataset-1 to dataset-2 and 64.98% for dataset-2 to dataset-1) in the 0.01–0.08 Hz range. In conclusion, for within-group design studies (e.g., paired conditions or follow-up studies), we recommend the PAIR method for feature extraction. In addition, dimensionality reduction with strong prior knowledge of specific brain regions should also be considered for feature selection in neuroimaging studies.
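The following is a simplified sketch of the general recipe (paired t-test feature reduction on ALFF followed by a linear SVM over the retained regions); it collapses the paper's half-split construction of EC-EO and EO-EC maps into one step, so it illustrates the idea rather than reproduces the exact PAIR protocol.

```python
# Simplified sketch: paired t-test region selection + linear SVM on ALFF values.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.svm import SVC

def pair_style_classifier(alff_ec, alff_eo, alpha=0.001):
    # alff_ec, alff_eo: (subjects, regions) ALFF values under each condition.
    _, p = ttest_rel(alff_ec, alff_eo, axis=0)
    keep = p < alpha                               # small set of brain regions
    X = np.vstack([alff_ec[:, keep], alff_eo[:, keep]])
    y = np.r_[np.zeros(len(alff_ec)), np.ones(len(alff_eo))]   # 0 = EC, 1 = EO
    return SVC(kernel="linear").fit(X, y), keep
```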
Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.
Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A
2016-04-01
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement and at low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.
HCP: A Flexible CNN Framework for Multi-label Image Classification.
Wei, Yunchao; Xia, Wei; Lin, Min; Huang, Junshi; Ni, Bingbing; Dong, Jian; Zhao, Yao; Yan, Shuicheng
2015-10-26
Convolutional Neural Networks (CNNs) have demonstrated promising performance in single-label image classification tasks. However, how a CNN best copes with multi-label images remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, a shared CNN is connected with each hypothesis, and finally the CNN output results from the different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and/or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it may naturally output multi-label prediction results. Experimental results on the Pascal VOC 2007 and VOC 2012 multi-label image datasets demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-art methods. In particular, the mAP reaches 90.5% with HCP alone and 93.2% after fusion with our complementary result in [44] based on hand-crafted features on the VOC 2012 dataset.
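The pooling step itself is one line of tensor algebra. A minimal sketch, assuming a shared CNN that maps resized hypothesis crops to per-label logits:

```python
# Minimal hypothesis max-pooling sketch: shared CNN scores each proposal,
# per-label max pooling aggregates them into image-level predictions.
import torch

def hcp_predict(shared_cnn, hypotheses):
    # hypotheses: (n_hyp, 3, H, W) crops resized from object segment proposals.
    scores = shared_cnn(hypotheses)          # (n_hyp, n_labels) logits
    return scores.max(dim=0).values          # max over hypotheses, per label
```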
Detection and Monitoring of Oil Spills Using Moderate/High-Resolution Remote Sensing Images.
Li, Ying; Cui, Can; Liu, Zexi; Liu, Bingxin; Xu, Jin; Zhu, Xueyuan; Hou, Yongchao
2017-07-01
Current marine oil spill detection and monitoring methods using high-resolution remote sensing imagery are quite limited. This study presents a new bottom-up and top-down visual saliency model. We used Landsat 8, GF-1, MAMS, and HJ-1 oil spill imagery as the dataset. A simplified graph-based visual saliency model was used to extract bottom-up saliency, identifying ocean regions containing highly salient objects. A spectral similarity match model was used to obtain top-down saliency, distinguishing oil regions and excluding other salient interference by their spectra. The regions of interest containing oil spills were integrated using these complementary saliency detection steps. Then, a genetic neural network was used to complete the image classification. These steps increased the speed of analysis: for the test dataset, the average running time of the entire process to detect regions of interest was 204.56 s. During image segmentation, the oil spill was extracted using the genetic neural network. The classification results showed that the method had a low false-alarm rate (high accuracy of 91.42%) and was able to increase the speed of the detection process (fast runtime of 19.88 s). The test image dataset was composed of different types of features over large areas under complicated imaging conditions, and the proposed model proved robust in complex sea conditions.
Geiger, Daniel; Bae, Won C.; Statum, Sheronda; Du, Jiang; Chung, Christine B.
2014-01-01
Objective: Temporomandibular dysfunction involves osteoarthritis of the TMJ, including degeneration and morphologic changes of the mandibular condyle. The purpose of this study was to determine the accuracy of novel 3D-UTE MRI versus micro-CT (μCT) for quantitative evaluation of mandibular condyle morphology. Material & Methods: Nine TMJ condyle specimens were harvested from cadavers (2M, 3F; age 85 ± 10 yrs., mean ± SD). 3D-UTE MRI (TR = 50 ms, TE = 0.05 ms, 104 μm isotropic voxel) was performed using a 3-T MR scanner, and μCT (18 μm isotropic voxel) was performed. MR datasets were spatially registered with the μCT dataset. Two observers segmented bony contours of the condyles. Fibrocartilage was segmented on the MR dataset. Using a custom program, bone and fibrocartilage surface coordinates, Gaussian curvature, volume of segmented regions, and fibrocartilage thickness were determined for quantitative evaluation of joint morphology. Agreement between techniques (MRI vs. μCT) and between observers (MRI vs. MRI) for Gaussian curvature, mean curvature, and segmented bone volume was determined using intraclass correlation coefficient (ICC) analyses. Results: Between MRI and μCT, the average deviation of surface coordinates was 0.19 ± 0.15 mm, slightly higher than the spatial resolution of MRI. The average deviations of the Gaussian curvature and volume of segmented regions, from MRI to μCT, were 5.7 ± 6.5% and 6.6 ± 6.2%, respectively. ICC coefficients (MRI vs. μCT) for Gaussian curvature, mean curvature, and segmented volumes were 0.892, 0.893, and 0.972, respectively. Between observers (MRI vs. MRI), the ICC coefficients were 0.998, 0.999, and 0.997, respectively. Fibrocartilage thickness was 0.55 ± 0.11 mm, as previously described in the literature for grossly normal TMJ samples. Conclusion: 3D-UTE MR quantitative evaluation of TMJ condyle morphology ex vivo, including surface, curvature, and segmented volume, shows high correlation with μCT and between observers. In addition, UTE MRI allows quantitative evaluation of the fibrocartilaginous condylar component.
Late-summer sea ice segmentation with multi-polarisation SAR features in C and X band
NASA Astrophysics Data System (ADS)
Fors, Ane S.; Brekke, Camilla; Doulgeris, Anthony P.; Eltoft, Torbjørn; Renner, Angelika H. H.; Gerland, Sebastian
2016-02-01
In this study, we investigate the potential of sea ice segmentation by C- and X-band multi-polarisation synthetic aperture radar (SAR) features during late summer. Five high-resolution satellite SAR scenes were recorded in the Fram Strait covering iceberg-fast first-year and old sea ice during a week with air temperatures varying around 0 °C. Sea ice thickness, surface roughness and aerial photographs were collected during a helicopter flight at the site. Six polarimetric SAR features were extracted for each of the scenes. The ability of the individual SAR features to discriminate between sea ice types and their temporal consistency were examined. All SAR features were found to add value to sea ice type discrimination. Relative kurtosis, geometric brightness, cross-polarisation ratio and co-polarisation correlation angle were found to be temporally consistent in the investigated period, while co-polarisation ratio and co-polarisation correlation magnitude were found to be temporally inconsistent. An automatic feature-based segmentation algorithm was tested both for a full SAR feature set and for a reduced SAR feature set limited to temporally consistent features. In C band, the algorithm produced a good late-summer sea ice segmentation, separating the scenes into segments that could be associated with different sea ice types in the next step. The X-band performance was slightly poorer. Excluding temporally inconsistent SAR features improved the segmentation in one of the X-band scenes.
NASA Astrophysics Data System (ADS)
Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas
1996-04-01
The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by the image-point correlation of feature images, yielding per-point feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied to assess the feature space.
Automated Creation of Labeled Pointcloud Datasets in Support of Machine-Learning Based Perception
2017-12-01
…computationally intensive 3D vector math and took more than ten seconds to segment a single LIDAR frame from the HDL-32e with the Dell XPS15 9650's Intel Core i7 CPU. Depth Clustering avoids the computationally intensive 3D vector math of Euclidean Clustering-based DON segmentation and, instead…
Brain tumor classification and segmentation using sparse coding and dictionary learning.
Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo
2016-08-01
This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using sparse coding and dictionary learning. The proposed framework consists of two steps: classification and segmentation. The classification of brain tumors is based on brain topology and texture; the segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and its associated ground truth segmentation: a feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries pair grayscale voxel values of the training image data with the corresponding label voxel values of its ground truth segmentation. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results on the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, computed with the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves accurate brain tumor classification and segmentation and outperforms state-of-the-art methods.
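As a minimal sketch of the dictionary-learning ingredient, the snippet below uses scikit-learn's mini-batch dictionary learner with OMP-based sparse coding in place of K-SVD; the coupled label dictionary described above is omitted.

```python
# Minimal dictionary-learning sketch: learn atoms from vectorised training patches,
# then sparse-code new patches over those atoms (OMP stands in for K-SVD coding).
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_dictionary(patches, n_atoms=256):
    # patches: (n_samples, patch_size) rows of vectorised training patches.
    dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                     transform_algorithm="omp",
                                     transform_n_nonzero_coefs=5,
                                     random_state=0).fit(patches)
    return dl    # dl.transform(new_patches) yields sparse codes over the atoms
```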
User-assisted video segmentation system for visual communication
NASA Astrophysics Data System (ADS)
Wu, Zhengping; Chen, Chun
2002-01-01
Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, followed by a point insertion process that provides the feature points for the next frame's tracking.
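The first two phases can be sketched with OpenCV: sub-pixel corner refinement (an eigenvalue-based adjustment) followed by pyramidal Lucas-Kanade tracking. Grayscale 8-bit frames and user-provided seed points are assumed; contour formation is omitted.

```python
# Minimal point-refinement and tracking sketch with OpenCV.
import cv2
import numpy as np

def track_points(prev_gray, next_gray, points):
    # Refine the user-selected points to strong corners (sub-pixel accuracy).
    pts = cv2.cornerSubPix(
        prev_gray, points.astype(np.float32).reshape(-1, 1, 2), (5, 5), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    # Track the refined points into the next frame with pyramidal Lucas-Kanade.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    # Keep only successfully tracked points; these seed the contour formation.
    return nxt[status.ravel() == 1].reshape(-1, 2)
```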
NASA Astrophysics Data System (ADS)
Paino, A.; Keller, J.; Popescu, M.; Stone, K.
2014-06-01
In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method for building new image feature extraction algorithms that are competitive with commonly used human-engineered features, such as the Local Binary Pattern (LBP) and the Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-validation accuracy of the features generated by that candidate when used in conjunction with a Support Vector Machine (SVM) classifier. To validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.
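The fitness function reduces to a few lines. A minimal sketch, assuming `candidate` is a callable evolved by GP that maps an image chip to a feature vector:

```python
# Minimal GP fitness sketch: cross-validated SVM accuracy on evolved features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(candidate, chips, labels, folds=5):
    # candidate: evolved feature extractor, image chip -> real-valued vector.
    X = np.array([candidate(chip) for chip in chips])
    return cross_val_score(SVC(kernel="rbf"), X, labels, cv=folds).mean()
```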
Volonghi, Paola; Tresoldi, Daniele; Cadioli, Marcello; Usuelli, Antonio M; Ponzini, Raffaele; Morbiducci, Umberto; Esposito, Antonio; Rizzo, Giovanna
2016-02-01
To propose and assess a new method that automatically extracts a three-dimensional (3D) geometric model of the thoracic aorta (TA) from 3D cine phase contrast MRI (PCMRI) acquisitions. The proposed method comprises two steps: segmentation of the TA and creation of the 3D geometric model. The segmentation algorithm, based on the Level Set method, was configured and applied to healthy subjects acquired in three different modalities (with and without SENSE reduction factors). Accuracy was evaluated using standard quality indices. The 3D model is characterized by the vessel surface mesh and its centerline; the models obtained from the three different datasets were also compared in terms of radius of curvature (RC) and average tortuosity (AT). In all datasets, the segmentation quality indices confirmed very good agreement between manual and automatic contours (average symmetric distance < 1.44 mm, Dice similarity coefficient > 0.88). The 3D models extracted from the three datasets were found to be comparable, with differences of less than 10% for RC and 11% for AT. Our method was found effective on PCMRI data for providing a 3D geometric model of the TA, supporting morphometric and hemodynamic characterization of the aorta. © 2015 Wiley Periodicals, Inc.
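A minimal sketch of a level-set-style segmentation of a single PCMRI slice, with scikit-image's morphological geodesic active contour standing in for the authors' Level Set formulation; the seed point, radius, and parameter names (recent scikit-image versions) are illustrative assumptions.

```python
# Minimal level-set-style slice segmentation with scikit-image (illustrative).
from skimage.segmentation import (disk_level_set, inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def segment_aorta_slice(slice2d, seed_rc, radius=8):
    g = inverse_gaussian_gradient(slice2d.astype(float))   # edge-stopping map
    init = disk_level_set(slice2d.shape, center=seed_rc, radius=radius)
    # Balloon force inflates the contour from the seed until edges stop it.
    return morphological_geodesic_active_contour(g, num_iter=200,
                                                 init_level_set=init,
                                                 smoothing=2, balloon=1)
```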
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”
2016-01-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel in the bottom region of interest is labeled as belonging either to an obstacle or to the floor. While conventional methods depend on point tracking for geometric cues to detect obstacles, the proposed algorithm uses the inverse perspective mapping (IPM) method, which is much more advantageous when the camera is not high off the floor and point tracking near the floor is therefore difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested on 70 datasets, 20 of which include non-obstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those of a conventional method are 57.5% and 9.9 cm. For non-obstacle datasets, the proposed method gives a 0.0% false positive rate, while the conventional method gives 17.6%.
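A minimal sketch of the IPM cue: with calibrated homographies (assumed given), the previous frame is warped through the ground plane into the current view, so floor pixels align while obstacle pixels, which violate the ground-plane assumption, produce large residuals for the MRF stage.

```python
# Minimal inverse-perspective-mapping cue: floor aligns, obstacles misalign.
import cv2
import numpy as np

def ipm_obstacle_cue(prev_img, cur_img, H_prev_to_ground, H_ground_to_cur):
    # Compose: previous image coords -> ground plane -> current image coords.
    H = H_ground_to_cur @ H_prev_to_ground
    predicted = cv2.warpPerspective(prev_img, H.astype(np.float32),
                                    (cur_img.shape[1], cur_img.shape[0]))
    # Floor pixels satisfy the ground-plane homography and cancel out;
    # large residuals are evidence of obstacles for the MRF segmentation.
    return cv2.absdiff(cur_img, predicted)
```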