Sample records for skin pixel classification

  1. Rethinking Skin Lesion Segmentation in a Convolutional Classifier.

    PubMed

    Burdick, Jack; Marques, Oge; Weinthal, Janet; Furht, Borko

    2017-10-18

    Melanoma is a fatal form of skin cancer when left undiagnosed. Computer-aided diagnosis systems powered by convolutional neural networks (CNNs) can improve diagnostic accuracy and save lives. CNNs have been successfully used in both skin lesion segmentation and classification. For reasons heretofore unclear, previous works have found image segmentation to be, conflictingly, both detrimental and beneficial to skin lesion classification. We investigate the effect of expanding the segmentation border to include pixels surrounding the target lesion. Ostensibly, segmenting a target skin lesion will remove inessential information, non-lesion skin, and artifacts to aid in classification. Our results indicate that segmentation border enlargement produces, to a certain degree, better results across all metrics of interest when using a convolution-based classifier built using the transfer learning paradigm. Consequently, preprocessing methods which produce borders larger than the actual lesion can potentially improve classifier performance more than either perfect segmentation, using dermatologist-created ground truth masks, or no segmentation at all.

  2. A Hybrid Color Space for Skin Detection Using Genetic Algorithm Heuristic Search and Principal Component Analysis Technique

    PubMed Central

    2015-01-01

    Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. The Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy, while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel-wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers, including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron, in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using the Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and a False Positive Rate of 0.0482, which outperformed the existing color spaces in terms of pixel-wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel-wise skin detection applications. PMID:26267377
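
    A minimal sketch (not the authors' SKN pipeline) of pixel-wise skin detection with a Random Forest, assuming a labelled set of per-pixel colour features; the placeholder data below stands in for real colour components and skin/non-skin labels.

        # Pixel-wise skin detection with a Random Forest (illustrative only).
        # X holds one colour-feature vector per pixel; y holds 1 (skin) / 0 (non-skin).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import f1_score, confusion_matrix

        rng = np.random.default_rng(0)
        X = rng.random((5000, 3))                 # placeholder colour components per pixel
        y = (X[:, 0] > 0.5).astype(int)           # placeholder skin / non-skin labels

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

        pred = clf.predict(X_test)
        tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
        print("F-score:", f1_score(y_test, pred))
        print("TPR:", tp / (tp + fn), "FPR:", fp / (fp + tn))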

  3. A Study of Hand Back Skin Texture Patterns for Personal Identification and Gender Classification

    PubMed Central

    Xie, Jin; Zhang, Lei; You, Jane; Zhang, David; Qu, Xiaofeng

    2012-01-01

    Human hand back skin texture (HBST) is often consistent for a person and distinctive from person to person. In this paper, we study the HBST pattern recognition problem with applications to personal identification and gender classification. A specially designed system is developed to capture HBST images, and an HBST image database was established, which consists of 1,920 images from 80 persons (160 hands). An efficient texton learning based method is then presented to classify the HBST patterns. First, textons are learned in the space of filter bank responses from a set of training images using the l1-minimization based sparse representation (SR) technique. Then, under the SR framework, we represent the feature vector at each pixel over the learned dictionary to construct a representation coefficient histogram. Finally, the coefficient histogram is used as skin texture feature for classification. Experiments on personal identification and gender classification are performed by using the established HBST database. The results show that HBST can be used to assist human identification and gender classification. PMID:23012512

  4. Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.

    PubMed

    Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan

    2018-06-01

    Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. Development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for developing such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted feature based approaches built on popular classifiers such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image dataset of ninety (90) images is developed and used for this research. The segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted feature based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease. Copyright © 2018 Elsevier B.V. All rights reserved.
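
    A minimal sketch of the SLIC super-pixel step described above, using scikit-image; the image, segment count and mean-colour feature are illustrative stand-ins (the paper classifies super-pixels with deep or hand-crafted features rather than mean colour).

        # Generate SLIC super-pixels and extract a simple per-super-pixel feature
        # (mean colour); a CNN or hand-crafted descriptor would replace this in practice.
        import numpy as np
        from skimage import data
        from skimage.segmentation import slic

        image = data.astronaut()                      # placeholder RGB image
        segments = slic(image, n_segments=300, compactness=10, start_label=0)

        features = np.array([
            image[segments == label].mean(axis=0)     # mean RGB of each super-pixel
            for label in np.unique(segments)
        ])
        print(features.shape)                         # (number of super-pixels, 3)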

  5. Sub-pixel image classification for forest types in East Texas

    NASA Astrophysics Data System (ADS)

    Westbrook, Joey

    Sub-pixel classification is the extraction of information about the proportion of individual materials of interest within a pixel. Landcover classification at the sub-pixel scale provides more discrimination than traditional per-pixel multispectral classifiers for pixels where the material of interest is mixed with other materials. It allows for the un-mixing of pixels to show the proportion of each material of interest. The materials of interest for this study are pine, hardwood, mixed forest and non-forest. The goal of this project was to perform a sub-pixel classification, which allows a pixel to have multiple labels, and compare the result to a traditional supervised classification, which allows a pixel to have only one label. The satellite image used was a Landsat 5 Thematic Mapper (TM) scene of the Stephen F. Austin Experimental Forest in Nacogdoches County, Texas, and the four cover type classes are pine, hardwood, mixed forest and non-forest. Once classified, a multi-layer raster dataset was created comprising four raster layers, where each layer showed the percentage of that cover type within the pixel area. Percentage cover type maps were then produced, and the accuracy of each was assessed using a fuzzy error matrix for the sub-pixel classifications; the results were compared to the supervised classification, for which a traditional error matrix was used. The sub-pixel classification using the aerial photo for both training and reference data had the highest overall accuracy (65%) of the three sub-pixel classifications. This was understandable because the analyst can visually observe the cover types actually on the ground for training data and reference data, whereas using the FIA (Forest Inventory and Analysis) plot data, the analyst must assume that an entire pixel contains the exact percentage of a cover type found in a plot. An increase in accuracy was found after reclassifying each sub-pixel classification from nine classes with 10 percent intervals to five classes with 20 percent intervals. When compared to the supervised classification, which has a satisfactory overall accuracy of 90%, none of the sub-pixel classifications achieved the same level. However, since traditional per-pixel classifiers assign only one label to pixels throughout the landscape while sub-pixel classifications assign multiple labels to each pixel, the traditional 85% accuracy of acceptance for pixel-based classifications should not apply to sub-pixel classifications. More research is needed in order to define the level of accuracy that is deemed acceptable for sub-pixel classifications.
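
    The study used a dedicated sub-pixel classifier; one common way to estimate per-pixel cover fractions, shown below purely as an illustration, is linear spectral unmixing with non-negative least squares against known class endmember spectra (the endmember values here are made up).

        # Illustrative linear spectral unmixing: estimate per-pixel cover fractions
        # from hypothetical endmember spectra, then normalise them to sum to one.
        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical endmember spectra (rows: bands; columns: pine, hardwood, non-forest).
        endmembers = np.array([[0.08, 0.12, 0.30],
                               [0.10, 0.18, 0.28],
                               [0.05, 0.25, 0.35],
                               [0.04, 0.30, 0.40]])

        pixel = 0.6 * endmembers[:, 0] + 0.4 * endmembers[:, 2]   # a mixed pixel
        fractions, _ = nnls(endmembers, pixel)                     # non-negative least squares
        fractions /= fractions.sum()                               # enforce sum-to-one
        print(fractions)                                           # approximately [0.6, 0.0, 0.4]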

  6. Quantitative evaluation methods of skin condition based on texture feature parameters.

    PubMed

    Pang, Hui; Chen, Tianhua; Wang, Xiaoyi; Chang, Zhineng; Shao, Siqi; Zhao, Jing

    2017-03-01

    In order to quantitatively evaluate the improvement of skin condition after using skin care products and beauty treatments, a quantitative evaluation method for skin surface state and texture is presented, which is convenient, fast and non-destructive. Human skin images were collected by image sensors. Firstly, a median filter with a 3 × 3 window is applied, and then the location of hairy pixels on the skin is accurately detected according to the gray mean value and color information. Bilinear interpolation is used to modify the gray value of the hairy pixels in order to eliminate the negative effect of noise and tiny hairs on the texture. After this pretreatment, the gray level co-occurrence matrix (GLCM) is calculated. On this basis, four characteristic parameters, including the second moment, contrast, entropy and correlation, and their mean values are calculated at 45° intervals. A quantitative evaluation model of skin texture based on the GLCM is established, which can calculate comprehensive parameters of skin condition. Experiments show that the skin condition evaluated by this method agrees both with evaluation methods based on biochemical indicators and with human visual assessment. The method overcomes the skin damage and long waiting time of the biochemical evaluation method, as well as the subjectivity and fuzziness of visual evaluation, and achieves non-destructive, rapid and quantitative evaluation of skin condition. It can be used for health assessment or classification of the skin condition, and can quantitatively evaluate subtle improvements of skin condition after using skin care products or beauty treatments.
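
    A minimal sketch of the GLCM feature computation described above, using scikit-image; the image is a random stand-in, the pre-processing and the evaluation model are omitted, and recent scikit-image versions spell the functions graycomatrix/graycoprops (older releases use greycomatrix/greycoprops).

        # GLCM texture features at 0°, 45°, 90° and 135°: angular second moment (ASM),
        # contrast, correlation, plus an entropy computed directly from the matrices.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        gray = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)  # stand-in image

        angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
        glcm = graycomatrix(gray, distances=[1], angles=angles, levels=256,
                            symmetric=True, normed=True)

        asm = graycoprops(glcm, 'ASM').mean()
        contrast = graycoprops(glcm, 'contrast').mean()
        correlation = graycoprops(glcm, 'correlation').mean()
        entropy = float(np.mean([-np.sum(p[p > 0] * np.log2(p[p > 0]))
                                 for p in np.moveaxis(glcm[:, :, 0, :], -1, 0)]))
        print(asm, contrast, correlation, entropy)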

  7. Hair segmentation using adaptive threshold from edge and branch length measures.

    PubMed

    Lee, Ian; Du, Xian; Anthony, Brian

    2017-10-01

    Non-invasive imaging techniques allow the monitoring of skin structure and diagnosis of skin diseases in clinical applications. However, hair in skin images hampers the imaging and classification of the skin structure of interest. Although many hair segmentation methods have been proposed for digital hair removal, a major challenge in hair segmentation remains in detecting hairs that are thin, overlapping, of similar contrast or color to underlying skin, or overlaid on highly-textured skin structure. To solve the problem, we present an automatic hair segmentation method that uses edge density (ED) and mean branch length (MBL) to measure hair. First, hair is detected by the integration of a top-hat transform and a modified second-order Gaussian filter. Second, we employ a robust adaptive threshold of ED and MBL to generate a hair mask. Third, the hair mask is refined by k-NN classification of hair and skin pixels. The proposed algorithm was tested using two datasets of healthy skin images and lesion images, respectively. These datasets were taken from different imaging platforms at various illumination levels and with varying skin colors. We compared the hair detection and segmentation results from our algorithm and six other state-of-the-art hair segmentation methods. Our method exhibits a sensitivity of 75% and a specificity of 95%, indicating significantly higher accuracy and a better balance between true positive and false positive detection than the other methods. Published by Elsevier Ltd.
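
    Not the ED/MBL pipeline of the paper, but a minimal morphological sketch of the hair-enhancement idea using OpenCV; the file name, kernel size and threshold are assumptions.

        # Simplified hair-mask sketch: enhance dark hairs with a black-hat transform,
        # then threshold to obtain a binary mask (a stand-in for the paper's ED/MBL steps).
        import cv2
        import numpy as np

        gray = cv2.imread("skin_image.png", cv2.IMREAD_GRAYSCALE)       # hypothetical file name
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
        blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)   # dark hairs become bright
        _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
        print(np.count_nonzero(hair_mask), "hair pixels detected")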

  8. CCD-Based Skinning Injury Recognition on Potato Tubers (Solanum tuberosum L.): A Comparison between Visible and Biospeckle Imaging

    PubMed Central

    Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin

    2016-01-01

    Skinning injury on potato tubers is a kind of superficial wound that is generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. The identification of skinning injury was realized by exploiting features extracted from varied ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, calculations of BA using varied numbers of speckle patterns were compared. Finally, the extracted features were fed into LS-SVM (Least Square Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capture and processing in biospeckle imaging can be sped up by reducing the number of captured frames from 512 to 125. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging during different stages. Visible imaging is suited to recognizing older skinning injury, while fresh injury can be discriminated by biospeckle imaging. PMID:27763555

  9. CCD-Based Skinning Injury Recognition on Potato Tubers (Solanum tuberosum L.): A Comparison between Visible and Biospeckle Imaging.

    PubMed

    Gao, Yingwang; Geng, Jinfeng; Rao, Xiuqin; Ying, Yibin

    2016-10-18

    Skinning injury on potato tubers is a kind of superficial wound that is generally inflicted by mechanical forces during harvest and postharvest handling operations. Though skinning injury is pervasive and obstructive, its detection is very limited. This study attempted to identify injured skin using two CCD (Charge Coupled Device) sensor-based machine vision technologies, i.e., visible imaging and biospeckle imaging. The identification of skinning injury was realized by exploiting features extracted from varied ROIs (Regions of Interest). The features extracted from visible images were pixel-wise color and texture features, while region-wise BA (Biospeckle Activity) was calculated from biospeckle imaging. In addition, calculations of BA using varied numbers of speckle patterns were compared. Finally, the extracted features were fed into LS-SVM (Least Square Support Vector Machine) and BLR (Binary Logistic Regression) classifiers, respectively. Results showed that color features performed better than texture features in classifying sound skin and injured skin, especially for injured skin stored no less than 1 day, with an average classification accuracy of 90%. Image capture and processing in biospeckle imaging can be sped up by reducing the number of captured frames from 512 to 125. Classification results based on the BA feature were acceptable for early skinning injury stored within 1 day, with an accuracy of 88.10%. It is concluded that skinning injury can be recognized by visible and biospeckle imaging during different stages. Visible imaging is suited to recognizing older skinning injury, while fresh injury can be discriminated by biospeckle imaging.

  10. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods (Chapter 10)

    NASA Technical Reports Server (NTRS)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They range from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification steps. Within this chapter, each of the four approaches is described in terms of scale and accuracy in classifying urban land use and urban land cover, and for its range of urban applications. We present an overview of the four main classification groups in Figure 1, while Table 1 details the approaches with respect to classification requirements and procedures (e.g., reflectance conversion, steps before training sample selection, training samples, spatial approaches commonly used, classifiers, primary inputs for classification, output structures, number of output layers, and accuracy assessment). The chapter concludes with a brief summary of the methods reviewed and the challenges that remain in developing new classification methods for improving the efficiency and accuracy of mapping urban areas.

  11. Implementation of Nearest Neighbor using HSV to Identify Skin Disease

    NASA Astrophysics Data System (ADS)

    Gerhana, Y. A.; Zulfikar, W. B.; Ramdani, A. H.; Ramdhani, M. A.

    2018-01-01

    Today, Android is one of the most widely used operating systems in the world. Most Android devices have a camera that can capture images, and this feature can be used to identify skin disease. Skin disease is a health problem caused by bacteria, fungi, and viruses, and its symptoms are usually visible. In this work, the symptoms captured as an image are represented by the HSV values of every pixel. The HSV values are extracted and used to compute Euclidean distances. The distances are compared using the nearest neighbor algorithm to find the closest match between the test image and the training images, which determines the class label, i.e., the type of skin disease. Testing results show that 166 of 200 cases, or about 80%, are classified correctly. Factors that influence the results of the classification model include the number of training images and the quality of the Android device's camera.
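
    A minimal sketch of the approach described above: represent each image by its mean HSV values and assign the label of the nearest training image by Euclidean distance; the file names and disease labels are hypothetical.

        # 1-nearest-neighbour skin-disease labelling from mean HSV features (illustrative).
        import cv2
        import numpy as np

        def mean_hsv(path):
            """Mean H, S and V of an image file (paths are hypothetical)."""
            bgr = cv2.imread(path)
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            return hsv.reshape(-1, 3).mean(axis=0)

        training = {"tinea.jpg": "tinea", "acne.jpg": "acne", "eczema.jpg": "eczema"}
        train_feats = {label: mean_hsv(path) for path, label in training.items()}

        test_feat = mean_hsv("unknown_case.jpg")
        label = min(train_feats,
                    key=lambda lbl: np.linalg.norm(train_feats[lbl] - test_feat))
        print("Predicted skin disease:", label)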

  12. Crown-level tree species classification from AISA hyperspectral imagery using an innovative pixel-weighting approach

    NASA Astrophysics Data System (ADS)

    Liu, Haijian; Wu, Changshan

    2018-06-01

    Crown-level tree species classification is a challenging task due to the spectral similarity among different tree species. Shadow, underlying objects, and other materials within a crown may decrease the purity of extracted crown spectra and further reduce classification accuracy. To address this problem, an innovative pixel-weighting approach was developed for tree species classification at the crown level. The method utilized high density discrete LiDAR data for individual tree delineation and Airborne Imaging Spectrometer for Applications (AISA) hyperspectral imagery for pure crown-scale spectra extraction. Specifically, three steps were included: 1) individual tree identification using LiDAR data, 2) pixel-weighted representative crown spectra calculation using hyperspectral imagery, with pixel-based illuminated-leaf fractions estimated using a linear spectral mixture analysis (LSMA) employed as weighting factors, and 3) representative-spectra-based tree species classification using a support vector machine (SVM) approach. Analysis of results suggests that the developed pixel-weighting approach (OA = 82.12%, Kc = 0.74) performed better than treetop-based (OA = 70.86%, Kc = 0.58) and pixel-majority methods (OA = 72.26%, Kc = 0.62) in terms of classification accuracy. McNemar tests indicated that the differences in accuracy between the pixel-weighting and treetop-based approaches, as well as between the pixel-weighting and pixel-majority approaches, were statistically significant.
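
    A minimal numpy sketch of the pixel-weighting step: the representative crown spectrum is a weighted average of the crown's pixel spectra, with illuminated-leaf fractions (assumed here to be already estimated by LSMA) as the weights; the array sizes are placeholders.

        # Weighted representative crown spectrum from pixel spectra and
        # illuminated-leaf fractions (weights assumed to come from LSMA).
        import numpy as np

        rng = np.random.default_rng(0)
        pixel_spectra = rng.random((50, 224))     # 50 crown pixels x 224 bands (placeholder)
        leaf_fraction = rng.random(50)            # per-pixel illuminated-leaf fraction (placeholder)

        crown_spectrum = np.average(pixel_spectra, axis=0, weights=leaf_fraction)
        print(crown_spectrum.shape)               # (224,): one representative spectrum per crown, fed to the SVM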

  13. Multiple Spectral-Spatial Classification Approach for Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2010-01-01

    A new multiple classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies, when compared to previously proposed classification techniques.
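
    A minimal sketch of the marker-selection rule described above: a pixel is kept as a marker only where all independent classification maps agree; the label maps below are random placeholders.

        # Keep a pixel as a marker only if all classifiers assign it the same class.
        import numpy as np

        rng = np.random.default_rng(0)
        maps = rng.integers(0, 4, size=(3, 64, 64))   # three placeholder classification maps, 4 classes

        agree = np.all(maps == maps[0], axis=0)       # True where every map matches the first one
        markers = np.where(agree, maps[0], -1)        # -1 marks pixels that are not markers
        print("marker pixels:", int(np.count_nonzero(agree)))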

  14. a Data Field Method for Urban Remotely Sensed Imagery Classification Considering Spatial Correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Qin, K.; Zeng, C.; Zhang, E. B.; Yue, M. X.; Tong, X.

    2016-06-01

    Spatial correlation between pixels is important information for remotely sensed imagery classification. Data field method and spatial autocorrelation statistics have been utilized to describe and model spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey level change between the central pixel and the neighbourhood pixels results in exaggerating the contribution of the central pixel to the whole local window. Besides, Geary's C has also been proven to well characterise and qualify the spatial correlation between each pixel and its neighbourhood pixels. But the extracted object is badly delineated with the distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel to compute statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multi-features (e.g. the spectral feature and spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes inside. The results show that the developed method outperforms the traditional method in terms of classification accuracies.

  15. On-chip skin color detection using a triple-well CMOS process

    NASA Astrophysics Data System (ADS)

    Boussaid, Farid; Chai, Douglas; Bouzerdoum, Abdesselam

    2004-03-01

    In this paper, a current-mode VLSI architecture enabling skin detection on read-out, without the need for any on-chip memory elements, is proposed. An important feature of the proposed architecture is that it removes the need for demosaicing. Color separation is achieved using the strong wavelength dependence of the absorption coefficient in silicon. This wavelength dependence causes a very shallow absorption of blue light and enables red light to penetrate deeply in silicon. A triple-well process, allowing a P-well to be placed inside an N-well, is chosen to fabricate three vertically integrated photodiodes acting as the RGB color detector for each pixel. Pixels of an input RGB image are classified as skin or non-skin pixels using a statistical skin color model, chosen to offer an acceptable trade-off between skin detection performance and implementation complexity. A single processing unit is used to classify all pixels of the input RGB image. This results in reduced mismatch and also in an increased pixel fill-factor. Furthermore, the proposed current-mode architecture is programmable, allowing external control of all classifier parameters to compensate for mismatch and changing lighting conditions.
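
    The chip hard-wires its statistical skin-colour model; a generic software analogue (not the authors' model) that classifies each pixel by its squared Mahalanobis distance to a skin-colour mean could look like the sketch below, where the mean, covariance and threshold are assumed placeholders.

        # Per-pixel skin classification with a single-Gaussian colour model (illustrative).
        import numpy as np

        skin_mean = np.array([0.45, 0.30])            # placeholder mean in a 2-D chromaticity space
        skin_cov = np.array([[0.004, 0.001],
                             [0.001, 0.003]])         # placeholder covariance
        inv_cov = np.linalg.inv(skin_cov)
        threshold = 9.0                               # placeholder squared-distance threshold

        pixels = np.random.default_rng(0).random((10000, 2))   # chromaticity values of an image
        diff = pixels - skin_mean
        d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)     # squared Mahalanobis distance per pixel
        skin_mask = d2 < threshold
        print("skin pixels:", int(skin_mask.sum()))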

  16. Classification of multispectral image data by the Binary Diamond neural network and by nonparametric, pixel-by-pixel methods

    NASA Technical Reports Server (NTRS)

    Salu, Yehuda; Tilton, James

    1993-01-01

    The classification of multispectral image data obtained from satellites has become an important tool for generating ground cover maps. This study deals with the application of nonparametric pixel-by-pixel classification methods in the classification of pixels, based on their multispectral data. A new neural network, the Binary Diamond, is introduced, and its performance is compared with a nearest neighbor algorithm and a back-propagation network. The Binary Diamond is a multilayer, feed-forward neural network, which learns from examples in unsupervised, 'one-shot' mode. It recruits its neurons according to the actual training set, as it learns. The comparisons of the algorithms were done by using a realistic database consisting of approximately 90,000 Landsat 4 Thematic Mapper pixels. The Binary Diamond and the nearest neighbor performances were close, with some advantages to the Binary Diamond. The performance of the back-propagation network lagged behind. An efficient nearest neighbor algorithm, the binned nearest neighbor, is described. Ways of improving performance, such as merging categories and analyzing nonboundary pixels, are addressed and evaluated.

  17. Statistical analysis of spectral data: a methodology for designing an intelligent monitoring system for the diabetic foot

    NASA Astrophysics Data System (ADS)

    Liu, Chanjuan; van Netten, Jaap J.; Klein, Marvin E.; van Baal, Jeff G.; Bus, Sicco A.; van der Heijden, Ferdi

    2013-12-01

    Early detection of (pre-)signs of ulceration on a diabetic foot is valuable for clinical practice. Hyperspectral imaging is a promising technique for detection and classification of such (pre-)signs. However, the number of spectral bands should be limited to avoid overfitting, which is critical for pixel classification with hyperspectral image data. The goal was to design a detector/classifier based on spectral imaging (SI) with a small number of optical bandpass filters. The performance and stability of the design were also investigated. The selection of the bandpass filters boils down to a feature selection problem. A dataset was built, containing reflectance spectra of 227 skin spots from 64 patients, measured with a spectrometer. Each skin spot was annotated manually by clinicians as "healthy" or a specific (pre-)sign of ulceration. Statistical analysis on the dataset showed that the number of required filters is between 3 and 7, depending on additional constraints on the filter set. The stability analysis revealed that shot noise was the most critical factor affecting the classification performance. It indicated that this impact could be avoided in future SI systems with a camera sensor whose saturation level is higher than 10⁶, or by post-image processing.

  18. Object based image analysis for the classification of the growth stages of Avocado crop, in Michoacán State, Mexico

    NASA Astrophysics Data System (ADS)

    Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.

    2014-11-01

    This paper assesses the suitability of 8-band Worldview-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification with minimum distance (MD) and maximum likelihood (MLC) and object-based classification with the Random Forest (RF) algorithm for this task. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle stage, and early stage of avocado crops, bare land, two types of natural forests, and water body. To examine the contribution of the four new spectral bands of the WV2 sensor, all the tested classifications were carried out with and without the four new spectral bands. Classification accuracy assessment results show that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs 83.59%; pixel-based MD: 69.37% vs 67.2%; pixel-based MLC: 64.03% vs 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.

  19. Analysis of Landsat-4 Thematic Mapper data for classification of forest stands in Baldwin County, Alabama

    NASA Technical Reports Server (NTRS)

    Hill, C. L.

    1984-01-01

    A computer-implemented classification has been derived from Landsat-4 Thematic Mapper data acquired over Baldwin County, Alabama on January 15, 1983. One set of spectral signatures was developed from the data by utilizing a 3x3 pixel sliding window approach. An analysis of the classification produced from this technique identified forested areas. Additional information regarding only the forested areas was extracted by employing a pixel-by-pixel signature development program which derived spectral statistics only for pixels within the forested land covers. The spectral statistics from both approaches were integrated and the data classified. This classification was evaluated by comparing the spectral classes produced from the data against corresponding ground verification polygons. This iterative data analysis technique resulted in an overall classification accuracy of 88.4 percent correct for slash pine, young pine, loblolly pine, natural pine, and mixed hardwood-pine. An accuracy assessment matrix has been produced for the classification.

  20. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision.

    PubMed

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-22

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from a pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. The electronic-skin under applied deformation can actively output a piezoelectric voltage, and the output signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and the electricity power. The reliability is demonstrated over 200 light on-off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin that simply mimics the human retina may have potential application in vision substitution.

  1. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision

    NASA Astrophysics Data System (ADS)

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-01

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from a pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. The electronic-skin under applied deformation can actively output a piezoelectric voltage, and the output signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and the electricity power. The reliability is demonstrated over 200 light on-off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin that simply mimics the human retina may have potential application in vision substitution.

  2. Efficient graph-cut tattoo segmentation

    NASA Astrophysics Data System (ADS)

    Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.

    2015-03-01

    Law enforcement is interested in exploiting tattoos as an information source to identify, track and prevent gang-related crimes. Many tattoo image retrieval systems have been described. In a retrieval system, tattoo segmentation is an important step for retrieval accuracy since segmentation removes background information in a tattoo image. Existing segmentation methods do not extract the tattoo very well when the background includes textures and colors similar to skin tones. In this paper we describe a tattoo segmentation approach that determines skin pixels in regions near the tattoo. In these regions, graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation, we determine which sets of skin pixels are connected to each other and form a closed contour enclosing a tattoo. The regions surrounded by the closed contours are considered tattoo regions. Our method segments tattoos well when the background includes textures and colors similar to skin.

  3. Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor

    NASA Astrophysics Data System (ADS)

    Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi

    2017-12-01

    The advantage of image classification is that it provides information about the earth's surface, such as land cover and its time-series changes. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, land cover classification can also be obtained using an object-based image classification technique. In addition, object-based classification uses image segmentation based on parameters such as scale, shape, colour, smoothness and compactness. This research aims to compare the results of land cover classification and its change detection between the parallelepiped pixel-based and the object-based classification method. The study area is Bogor, with a 20-year observation range from 1996 to 2016. This region is known as an urban area that changes continuously due to its rapid development, so time-series land cover information for this region is of particular interest.

  4. Scattering property based contextual PolSAR speckle filter

    NASA Astrophysics Data System (ADS)

    Mullissa, Adugna G.; Tolpekin, Valentyn; Stein, Alfred

    2017-12-01

    Reliability of the scattering model based polarimetric SAR (PolSAR) speckle filter depends upon the accurate decomposition and classification of the scattering mechanisms. This paper presents an improved scattering property based contextual speckle filter based upon an iterative classification of the scattering mechanisms. It applies a Cloude-Pottier eigenvalue-eigenvector decomposition and a fuzzy H/α classification to determine the scattering mechanisms on a pre-estimate of the coherency matrix. The H/α classification identifies pixels with homogeneous scattering properties. A coarse pixel selection rule groups pixels that are either single bounce, double bounce or volume scatterers. A fine pixel selection rule is applied to pixels within each canonical scattering mechanism. We filter the PolSAR data and depending on the type of image scene (urban or rural) use either the coarse or fine pixel selection rule. Iterative refinement of the Wishart H/α classification reduces the speckle in the PolSAR data. Effectiveness of this new filter is demonstrated by using both simulated and real PolSAR data. It is compared with the refined Lee filter, the scattering model based filter and the non-local means filter. The study concludes that the proposed filter compares favorably with other polarimetric speckle filters in preserving polarimetric information, point scatterers and subtle features in PolSAR data.

  5. GENIE: a hybrid genetic algorithm for feature classification in multispectral images

    NASA Astrophysics Data System (ADS)

    Perkins, Simon J.; Theiler, James P.; Brumby, Steven P.; Harvey, Neal R.; Porter, Reid B.; Szymanski, John J.; Bloch, Jeffrey J.

    2000-10-01

    We consider the problem of pixel-by-pixel classification of a multi-spectral image using supervised learning. Conventional supervised classification techniques such as maximum likelihood classification, and less conventional ones such as neural networks, typically base such classifications solely on the spectral components of each pixel. It is easy to see why: the color of a pixel provides a nice, bounded, fixed dimensional space in which these classifiers work well. It is often the case however, that spectral information alone is not sufficient to correctly classify a pixel. Maybe spatial neighborhood information is required as well. Or maybe the raw spectral components do not themselves make for easy classification, but some arithmetic combination of them would. In either of these cases we have the problem of selecting suitable spatial, spectral or spatio-spectral features that allow the classifier to do its job well. The number of all possible such features is extremely large. How can we select a suitable subset? We have developed GENIE, a hybrid learning system that combines a genetic algorithm that searches a space of image processing operations for a set that can produce suitable feature planes, and a more conventional classifier which uses those feature planes to output a final classification. In this paper we show that the use of a hybrid GA provides significant advantages over using either a GA alone or more conventional classification methods alone. We present results using high-resolution IKONOS data, looking for regions of burned forest and for roads.

  6. A comparative analysis of pixel- and object-based detection of landslides from very high-resolution images

    NASA Astrophysics Data System (ADS)

    Keyport, Ren N.; Oommen, Thomas; Martha, Tapas R.; Sajinkumar, K. S.; Gierke, John S.

    2018-02-01

    A comparative analysis of landslides detected by pixel-based and object-oriented analysis (OOA) methods was performed using very high-resolution (VHR) remotely sensed aerial images for San Juan La Laguna, Guatemala, which witnessed widespread devastation during the 2005 Hurricane Stan. A 3-band orthophoto of 0.5 m spatial resolution together with a field-based inventory of 115 landslides were used for the analysis. A binary reference was assigned with a zero value for landslide and unity for non-landslide pixels. The pixel-based analysis was performed using unsupervised classification, which resulted in 11 different trial classes. Detection of landslides using OOA includes 2-step K-means clustering to eliminate regions based on brightness, and elimination of false positives using object properties such as rectangular fit, compactness, length/width ratio, mean difference of objects, and slope angle. Both overall accuracy and F-score for OOA methods outperformed pixel-based unsupervised classification methods in both landslide and non-landslide classes. The overall accuracy for OOA and pixel-based unsupervised classification was 96.5% and 94.3%, respectively, whereas the best F-scores for landslide identification for the OOA and pixel-based unsupervised methods were 84.3% and 77.9%, respectively. Results indicate that OOA is able to identify the majority of landslides with few false positives when compared to pixel-based unsupervised classification.

  7. Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks

    PubMed Central

    Xu, Xin; Gui, Rong; Pu, Fangling

    2018-01-01

    Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of the excellent performance of CNN, more and more methods based on CNN are applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can only classify one pixel each time. Because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously. When classifying a whole PolSAR image, it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use the interrelation of land covers to improve the classification results. The experiments of FFS-CNN are evaluated on a Chinese Gaofen-3 PolSAR image and other two real PolSAR images. Experiment results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods. PMID:29510499

  8. Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks.

    PubMed

    Wang, Lei; Xu, Xin; Dong, Hao; Gui, Rong; Pu, Fangling

    2018-03-03

    Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of the excellent performance of CNN, more and more methods based on CNN are applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can only classify one pixel each time. Because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously. When classifying a whole PolSAR image, it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use the interrelation of land covers to improve the classification results. The experiments of FFS-CNN are evaluated on a Chinese Gaofen-3 PolSAR image and other two real PolSAR images. Experiment results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods.

  9. Defect detection and classification of galvanized stamping parts based on fully convolution neural network

    NASA Astrophysics Data System (ADS)

    Xiao, Zhitao; Leng, Yanyi; Geng, Lei; Xi, Jiangtao

    2018-04-01

    In this paper, a new convolutional neural network method is proposed for the inspection and classification of galvanized stamping parts. Firstly, all workpieces are divided into normal and defective by image processing, and the regions of interest (ROI) extracted from the defective workpieces are input to trained fully convolutional networks (FCN). The network uses end-to-end, pixel-to-pixel training, currently the most advanced technology in semantic segmentation, and predicts a result for each pixel. Secondly, we mark the workpiece, defect and background with different pixel values in the training images, and use the pixel values and the number of pixels to recognize the defects in the output image. Finally, a threshold on the defect area, set according to the needs of the project, achieves the specific classification of the workpiece. The experimental results show that the proposed method can successfully achieve defect detection and classification of galvanized stamping parts under ordinary camera and illumination conditions, and its accuracy can reach 99.6%. Moreover, it avoids complex image preprocessing and difficult feature extraction, and shows better adaptability.
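
    A minimal sketch of the final thresholding step described above: count the pixels predicted as defect in the segmentation map and compare the defect area against a project-specific threshold; the label code and threshold are assumptions.

        # Classify a workpiece from an FCN segmentation map by defect-pixel count (illustrative).
        import numpy as np

        DEFECT_LABEL = 2                     # assumed pixel value for "defect" in the output map
        AREA_THRESHOLD = 150                 # assumed project-specific defect-area threshold (pixels)

        seg_map = np.random.default_rng(0).integers(0, 3, size=(256, 256))  # placeholder FCN output
        defect_area = int(np.count_nonzero(seg_map == DEFECT_LABEL))

        verdict = "reject" if defect_area > AREA_THRESHOLD else "accept"
        print(defect_area, verdict)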

  10. Supervised pixel classification using a feature space derived from an artificial visual system

    NASA Technical Reports Server (NTRS)

    Baxter, Lisa C.; Coggins, James M.

    1991-01-01

    Image segmentation involves labelling pixels according to their membership in image regions. This requires an understanding of what a region is. Using supervised pixel classification, the paper investigates how groups of pixels labelled manually according to perceived image semantics map onto the feature space created by an Artificial Visual System. The multiscale structure of regions is investigated, and it is shown that pixels form clusters based on their geometric roles in the image intensity function, not on image semantics. A tentative abstract definition of a 'region' is proposed based on this behavior.

  11. Medical diagnosis system and method with multispectral imaging [depth of burns and optical density of the skin]

    NASA Technical Reports Server (NTRS)

    Anselmo, V. J.; Reilly, T. H. (Inventor)

    1979-01-01

    A skin diagnosis system includes a scanning and optical arrangement whereby light reflected from each incremental area (pixel) of the skin is directed simultaneously to three separate light filters, e.g., IR, red, and green. As a result, the three devices simultaneously produce three signals which are directly related to the reflectance of light of different wavelengths from the corresponding pixel. These three signals for each pixel, after processing, are used as inputs to one or more output devices to produce a visual color display and/or a hard copy color print, for use as a diagnostic aid by a physician.

  12. Digital mammography: observer performance study of the effects of pixel size on radiologists' characterization of malignant and benign microcalcifications

    NASA Astrophysics Data System (ADS)

    Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Adler, Dorit D.; Blane, Caroline E.; Joynt, Lynn K.; Paramagul, Chintana; Roubidoux, Marilyn A.; Wilson, Todd E.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

    1999-05-01

    A receiver operating characteristic (ROC) experiment was conducted to evaluate the effects of pixel size on the characterization of mammographic microcalcifications. Digital mammograms were obtained by digitizing screen-film mammograms with a laser film scanner. One hundred twelve two-view mammograms with biopsy-proven microcalcifications were digitized at a pixel size of 35 micrometer X 35 micrometer. A region of interest (ROI) containing the microcalcifications was extracted from each image. ROI images with pixel sizes of 70 micrometers, 105 micrometers, and 140 micrometers were derived from the ROI of 35 micrometer pixel size by averaging 2 X 2, 3 X 3, and 4 X 4 neighboring pixels, respectively. The ROI images were printed on film with a laser imager. Seven MQSA-approved radiologists participated as observers. The likelihood of malignancy of the microcalcifications was rated on a 10-point confidence rating scale and analyzed with ROC methodology. The classification accuracy was quantified by the area, Az, under the ROC curve. The statistical significance of the differences in the Az values for different pixel sizes was estimated with the Dorfman-Berbaum-Metz (DBM) method for multi-reader, multi-case ROC data. It was found that five of the seven radiologists demonstrated a higher classification accuracy with the 70 micrometer or 105 micrometer images. The average Az also showed a higher classification accuracy in the range of 70 to 105 micrometer pixel size. However, the differences in Az between different pixel sizes did not achieve statistical significance. The low specificity of image features of microcalcifications and the large interobserver and intraobserver variabilities may have contributed to the relatively weak dependence of classification accuracy on pixel size.

  13. Hyperspectral image classification by a variable interval spectral average and spectral curve matching combined algorithm

    NASA Astrophysics Data System (ADS)

    Senthil Kumar, A.; Keerthi, V.; Manjunath, A. S.; Werff, Harald van der; Meer, Freek van der

    2010-08-01

    Classification of hyperspectral images has been receiving considerable attention, with many new applications reported from commercial and military sectors. Hyperspectral images are composed of a large number of spectral channels, and have the potential to deliver a great deal of information about a remotely sensed scene. However, in addition to high dimensionality, hyperspectral image classification is compounded by the coarse ground pixel size of the sensor, required to achieve an adequate sensor signal-to-noise ratio within a fine spectral passband. As a result, multiple ground features jointly occupy a single pixel. Spectral mixture analysis typically begins with pixel classification using spectral matching techniques, followed by the use of spectral unmixing algorithms for estimating endmember abundance values in the pixel. The spectral matching techniques are analogous to supervised pattern recognition approaches, and try to estimate some similarity between the spectral signatures of the pixel and a reference target. In this paper, we propose a spectral matching approach combining two schemes: the variable interval spectral average (VISA) method and the spectral curve matching (SCM) method. The VISA method helps to detect transient spectral features at different scales of spectral windows, while the SCM method finds a match between these features of the pixel and one of the library spectra by least-squares fitting. Here we also compare the performance of the combined algorithm with other spectral matching techniques using simulated and AVIRIS hyperspectral data sets. Our results indicate that the proposed combination technique exhibits stronger performance than the other methods in classifying both pure and mixed-class pixels simultaneously.

  14. Feature selection and definition for contours classification of thermograms in breast cancer detection

    NASA Astrophysics Data System (ADS)

    Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold; Cichosz, Paweł

    2016-09-01

    This contribution introduces a method for detecting cancer pathologies in breast skin temperature distribution images. Thermosensitive foils applied to the breast skin allow the creation of thermograms, which display the amount of infrared energy emitted by all breast cells. Significant foci of hyperthermia or inflammation are typical of cancer cells. These foci can be recognized on thermograms as contours, which are areas of higher temperature. Every contour can be converted into a feature set that describes it, using raw, central, Hu, outline, Fourier and colour moments computed from the image pixels. This paper also defines a new way of describing a set of contours through their neighbourhood relations, and introduces a way of ranking and selecting the most relevant features. The authors used a neural network with Gevrey's concept and recursive feature elimination to estimate feature importance.

  15. Identification of coffee bean varieties using hyperspectral imaging: influence of preprocessing methods and pixel-wise spectra analysis.

    PubMed

    Zhang, Chu; Liu, Fei; He, Yong

    2018-02-01

    Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted with different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD). Meanwhile, spatial preprocessing of the gray-scale image at each wavelength was conducted with a median filter (MF). Support vector machine (SVM) models using full sample-average spectra, pixel-wise spectra, and the optimal wavelengths selected from second-derivative spectra all achieved classification accuracy over 80%. First, the SVM models built on pixel-wise spectra were used to predict the sample-average spectra, and these models obtained over 80% classification accuracy. Second, the SVM models built on sample-average spectra were used to predict pixel-wise spectra, but achieved lower than 50% classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set and resulted in good prediction results for both pixel-wise and sample-average spectra. The overall results indicated the effectiveness of spectral preprocessing and the adoption of pixel-wise spectra, and provide an alternative way of data processing for applications of hyperspectral imaging in the food industry.
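
    A minimal sketch of one of the pre-processing steps named above, moving-average smoothing of pixel-wise spectra, followed by an SVM fit; the window size, spectra and labels are placeholders, not the paper's data.

        # Moving-average smoothing of pixel-wise spectra followed by an SVM classifier (illustrative).
        import numpy as np
        from sklearn.svm import SVC

        def moving_average(spectra, window=5):
            """Smooth each spectrum (row) with a simple boxcar of the given window."""
            kernel = np.ones(window) / window
            return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode='same'), 1, spectra)

        rng = np.random.default_rng(0)
        spectra = rng.random((1000, 200))         # 1000 pixel-wise spectra x 200 bands (placeholder)
        labels = rng.integers(0, 4, size=1000)    # placeholder variety labels

        smoothed = moving_average(spectra, window=5)
        clf = SVC(kernel='rbf').fit(smoothed, labels)
        print(clf.score(smoothed, labels))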

  16. Investigation of correlation classification techniques

    NASA Technical Reports Server (NTRS)

    Haskell, R. E.

    1975-01-01

    A two-step classification algorithm for processing multispectral scanner data was developed and tested. The first step is a single pass clustering algorithm that assigns each pixel, based on its spectral signature, to a particular cluster. The output of that step is a cluster tape in which a single integer is associated with each pixel. The cluster tape is used as the input to the second step, where ground truth information is used to classify each cluster using an iterative method of potentials. Once the clusters have been assigned to classes the cluster tape is read pixel-by-pixel and an output tape is produced in which each pixel is assigned to its proper class. In addition to the digital classification programs, a method of using correlation clustering to process multispectral scanner data in real time by means of an interactive color video display is also described.

  17. Adaptive skin segmentation via feature-based face detection

    NASA Astrophysics Data System (ADS)

    Taylor, Michael J.; Morris, Tim

    2014-05-01

    Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
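
    A rough sketch of the adaptive skin-modelling stage described above, assuming the face rectangles have already been produced by a Viola-Jones style detector and the image is an H x W x 3 RGB array; the luma band and probability threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def skin_probability_map(image, face_boxes, luma_band=(0.2, 0.8), threshold=0.05):
    """Sample pixels from detected face regions, drop extreme-luma pixels
    (eyes, nostrils, etc.), fit a unimodal Gaussian in normalised rg space,
    and score every pixel in the image. `face_boxes` are (x, y, w, h)
    rectangles, e.g. from a Viola-Jones detector."""
    img = image.astype(np.float64) + 1e-6
    rgb_sum = img.sum(axis=2)
    rg = np.dstack([img[..., 0] / rgb_sum, img[..., 1] / rgb_sum])  # normalised r, g
    luma = img @ np.array([0.299, 0.587, 0.114]) / 255.0

    samples = []
    for x, y, w, h in face_boxes:
        patch_rg = rg[y:y + h, x:x + w].reshape(-1, 2)
        patch_luma = luma[y:y + h, x:x + w].ravel()
        keep = (patch_luma > luma_band[0]) & (patch_luma < luma_band[1])
        samples.append(patch_rg[keep])
    samples = np.vstack(samples)

    model = multivariate_normal(mean=samples.mean(axis=0), cov=np.cov(samples.T))
    prob = model.pdf(rg.reshape(-1, 2)).reshape(image.shape[:2])
    prob /= prob.max()                      # scale to [0, 1] for thresholding
    return prob, prob > threshold           # probability map and binary skin mask
```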

  18. Where can pixel counting area estimates meet user-defined accuracy requirements?

    NASA Astrophysics Data System (ADS)

    Waldner, François; Defourny, Pierre

    2017-08-01

    Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the estimation bias remains below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.
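
    A small sketch of the pixel-counting estimator and of how asymmetric errors bias it; the decomposition into commission and omission rates and all numbers are illustrative assumptions, not the paper's framework.

```python
import numpy as np

def pixel_counting_area(class_map, target_class, pixel_area_m2):
    """Naive pixel-counting estimator: count the pixels assigned to the class
    and multiply by the pixel area."""
    return np.count_nonzero(class_map == target_class) * pixel_area_m2

def bias_from_error_rates(true_area, total_area, commission, omission):
    """Bias of the pixel-counting estimator when errors are asymmetric:
    omitted target pixels shrink the estimate while non-target pixels
    committed to the class inflate it."""
    return (total_area - true_area) * commission - true_area * omission

# Illustrative numbers only: 30 m pixels, 40% cropland in the scene.
pixel_area = 30.0 * 30.0
total_area = 1e8
true_area = 0.4 * total_area
print(bias_from_error_rates(true_area, total_area, commission=0.03, omission=0.07))
```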

  19. Definition of Linear Color Models in the RGB Vector Color Space to Detect Red Peaches in Orchard Images Taken under Natural Illumination

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image is then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates. PMID:22969369
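
    A minimal sketch of classifying pixels by their distance to linear colour models, i.e. lines in the RGB cube; the model parameters and the distance threshold below are hypothetical values for illustration, not those fitted in the paper.

```python
import numpy as np

def distance_to_line(pixels, origin, direction):
    """Euclidean distance from each RGB pixel to a linear colour model,
    i.e. the 3D line passing through `origin` with direction `direction`."""
    d = direction / np.linalg.norm(direction)
    v = pixels - origin
    proj = np.outer(v @ d, d)            # component of v along the line
    return np.linalg.norm(v - proj, axis=1)

def classify_pixels(image, models, max_distance=40.0):
    """Assign each pixel to the nearest linear colour model; pixels farther
    than `max_distance` from every model stay unclassified (-1)."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    dists = np.stack([distance_to_line(pixels, o, d) for o, d in models], axis=1)
    labels = np.argmin(dists, axis=1)
    labels[dists.min(axis=1) > max_distance] = -1
    return labels.reshape(image.shape[:2])

# Hypothetical models, each defined by a point and a direction in RGB space.
models = [(np.array([120.0, 30.0, 40.0]), np.array([1.0, 0.3, 0.35])),   # red peach skin
          (np.array([40.0, 90.0, 30.0]), np.array([0.4, 1.0, 0.35]))]    # foliage
# labels = classify_pixels(image, models)
```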

  20. Definition of linear color models in the RGB vector color space to detect red peaches in orchard images taken under natural illumination.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image is then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates.

  1. Modeling misregistration and related effects on multispectral classification

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1981-01-01

    The effects of misregistration on the multispectral classification accuracy when the scene registration accuracy is relaxed from 0.3 to 0.5 pixel are investigated. Noise, class separability, spatial transient response, and field size are considered simultaneously with misregistration in their effects on accuracy. Any noise due to the scene, sensor, or to the analog/digital conversion, causes a finite fraction of the measurements to fall outside of the classification limits, even within nominally uniform fields. Misregistration causes field borders in a given band or set of bands to be closer than expected to a given pixel, causing additional pixels to be misclassified due to the mixture of materials in the pixel. Simplified first order models of the various effects are presented, and are used to estimate the performance to be expected.

  2. Point spread function based classification of regions for linear digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Israni, Kenny; Avinash, Gopal; Li, Baojun

    2007-03-01

    In digital tomosynthesis, one of the limitations is the presence of out-of-plane blur due to the limited angle acquisition. The point spread function (PSF) characterizes blur in the imaging volume, and is shift-variant in tomosynthesis. The purpose of this research is to classify the tomosynthesis imaging volume into four different categories based on PSF-driven focus criteria. We considered linear tomosynthesis geometry and simple back projection algorithm for reconstruction. The three-dimensional PSF at every pixel in the imaging volume was determined. Intensity profiles were computed for every pixel by integrating the PSF-weighted intensities contained within the line segment defined by the PSF, at each slice. Classification rules based on these intensity profiles were used to categorize image regions. At background and low-frequency pixels, the derived intensity profiles were flat curves with relatively low and high maximum intensities respectively. At in-focus pixels, the maximum intensity of the profiles coincided with the PSF-weighted intensity of the pixel. At out-of-focus pixels, the PSF-weighted intensity of the pixel was always less than the maximum intensity of the profile. We validated our method using human observer classified regions as gold standard. Based on the computed and manual classifications, the mean sensitivity and specificity of the algorithm were 77 ± 8.44% and 91 ± 4.13% respectively (t=-0.64, p=0.56, DF=4). Such a classification algorithm may assist in mitigating out-of-focus blur from tomosynthesis image slices.

  3. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  4. Combination of support vector machine, artificial neural network and random forest for improving the classification of convective and stratiform rain using spectral features of SEVIRI data

    NASA Astrophysics Data System (ADS)

    Lazri, Mourad; Ameur, Soltane

    2018-05-01

    A model combining three classifiers, namely Support vector machine, Artificial neural network and Random forest (SAR), is designed for improving the classification of convective and stratiform rain. This model (SAR model) has been trained and then tested on a dataset derived from MSG-SEVIRI (Meteosat Second Generation-Spinning Enhanced Visible and Infrared Imager). Well-classified, mid-classified and misclassified pixels are determined from the combination of the three classifiers. Mid-classified and misclassified pixels, which are considered unreliable, are reclassified by a novel retraining of the developed scheme in which only the input data corresponding to the pixels in question are used. This whole process is repeated a second time and applied to mid-classified and misclassified pixels separately. Learning and validation of the developed scheme are performed against co-located data observed by ground radar. The developed scheme outperformed the individual classifiers used separately and reached an overall classification accuracy of 97.40%.
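
    A rough sketch of the combination idea, under strong simplifying assumptions: the three classifiers and their hyperparameters are generic scikit-learn defaults, the reliability split is based on agreement with the radar reference, and the retraining pass is reduced to a single step.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

def sar_style_reclassification(X, y):
    """Train SVM, ANN and RF on the pixel features X with radar reference y,
    split pixels by how many classifiers get them right, and retrain on the
    unreliable (mid-/misclassified) pixels only."""
    clfs = [SVC(), MLPClassifier(max_iter=500), RandomForestClassifier()]
    preds = np.stack([c.fit(X, y).predict(X) for c in clfs], axis=1)

    agreement = (preds == y[:, None]).sum(axis=1)
    well = agreement == 3                       # all three classifiers correct
    unreliable = ~well                          # mid-classified or misclassified

    final = preds[:, 2].copy()                  # start from the RF labels, for instance
    if unreliable.any():
        # Second pass: a fresh classifier trained only on the unreliable pixels.
        retrained = RandomForestClassifier().fit(X[unreliable], y[unreliable])
        final[unreliable] = retrained.predict(X[unreliable])
    return final
```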

  5. Ground truth management system to support multispectral scanner /MSS/ digital analysis

    NASA Technical Reports Server (NTRS)

    Coiner, J. C.; Ungar, S. G.

    1977-01-01

    A computerized geographic information system for management of ground truth has been designed and implemented to relate MSS classification results to in situ observations. The ground truth system transforms, generalizes and rectifies ground observations to conform to the pixel size and shape of high resolution MSS aircraft data. These observations can then be aggregated for comparison to lower resolution sensor data. Construction of a digital ground truth array allows direct pixel by pixel comparison between classification results of MSS data and ground truth. By making comparisons, analysts can identify spatial distribution of error within the MSS data as well as usual figures of merit for the classifications. Use of the ground truth system permits investigators to compare a variety of environmental or anthropogenic data, such as soil color or tillage patterns, with classification results and allows direct inclusion of such data into classification operations. To illustrate the system, examples from classification of simulated Thematic Mapper data for agricultural test sites in North Dakota and Kansas are provided.

  6. Enhancing spatial resolution of (18)F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine.

    PubMed

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I; Shi, Kuangyu

    2015-07-07

    Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by (18)F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [(18)F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.

  7. Enhancing spatial resolution of 18F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I.; Shi, Kuangyu

    2015-07-01

    Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by 18F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [18F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.

  8. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    PubMed Central

    Wang, Guizhou; Liu, Jianbo; He, Guojin

    2013-01-01

    This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
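
    A compact sketch of the mapping/reclassification strategy described above, assuming the pixel labels, the watershed segment ids and a per-pixel spectral feature array are already available; the area proportion threshold is an illustrative value.

```python
import numpy as np

def map_and_reclassify(pixel_labels, segments, features, area_threshold=0.6):
    """Each segment takes the dominant pixel class if its area proportion
    exceeds the threshold (area dominant principle); otherwise the segment is
    reclassified by minimum distance to the class means of `features`."""
    out = np.full_like(pixel_labels, -1)
    n_classes = pixel_labels.max() + 1
    class_means = np.stack([features[pixel_labels == c].mean(axis=0)
                            for c in range(n_classes)])

    for seg_id in np.unique(segments):
        mask = segments == seg_id
        counts = np.bincount(pixel_labels[mask], minlength=n_classes)
        dominant = counts.argmax()
        if counts[dominant] / mask.sum() >= area_threshold:
            out[mask] = dominant                        # dominant class wins
        else:                                           # unclassified -> minimum distance
            seg_mean = features[mask].mean(axis=0)
            out[mask] = np.linalg.norm(class_means - seg_mean, axis=1).argmin()
    return out
```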

  9. A comparison of the accuracy of pixel based and object based classifications of integrated optical and LiDAR data

    NASA Astrophysics Data System (ADS)

    Gajda, Agnieszka; Wójtowicz-Nowakowska, Anna

    2013-04-01

    Land cover maps are generally produced on the basis of high resolution imagery. Recently, LiDAR (Light Detection and Ranging) data have been brought into use in diverse applications including land cover mapping. In this study we attempted to assess the accuracy of land cover classification using both high resolution aerial imagery and LiDAR data (airborne laser scanning, ALS), testing two classification approaches: a pixel-based classification and object-oriented image analysis (OBIA). The study was conducted on three test areas (3 km2 each) in the administrative area of Kraków, Poland, along the course of the Vistula River. They represent three different dominating land cover types of the Vistula River valley. Test site 1 had semi-natural vegetation, with riparian forests and shrubs, test site 2 represented a densely built-up area, and test site 3 was an industrial site. Point clouds from ALS and orthophotomaps were both captured in November 2007. Point cloud density was on average 16 pt/m2 and contained additional information about intensity and encoded RGB values. Orthophotomaps had a spatial resolution of 10 cm. From the point clouds two raster maps were generated: (1) intensity and (2) normalised Digital Surface Model (nDSM), both with a spatial resolution of 50 cm. To classify the aerial data, a supervised classification approach was selected. Pixel-based classification was carried out in ERDAS Imagine software, using the orthophotomaps together with the intensity and nDSM rasters; 15 homogeneous training areas representing each cover class were chosen, and classified pixels were clumped to avoid the salt-and-pepper effect. Object-oriented image classification was carried out in eCognition software, which handles both the optical and ALS data. Elevation layers (intensity, first/last reflection, etc.) were used at the segmentation stage with appropriate weights, giving more precise and unambiguous boundaries of segments (objects). As a result of the classification, 5 classes of land cover (buildings, water, high and low vegetation and others) were extracted. Both pixel-based image analysis and OBIA were conducted with a minimum mapping unit of 10 m2. Results were validated on the basis of manual classification and random points (80 per test area); the reference data set was manually interpreted using orthophotomaps and expert knowledge of the test site areas.

  10. Generalized procrustean image deformation for subtraction of mammograms

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Zheng, Bin; Chang, Yuan-Hsiang; Wang, Xiao Hui; Maitz, Glenn S.

    1999-05-01

    This project is a preliminary evaluation of two simple fully automatic nonlinear transformations which can map any mammographic image onto a reference image while guaranteeing registration of specific features. The first method automatically identifies skin lines, after which each pixel is given coordinates in the range [0,1] X [0,1], where the actual value of a coordinate is the fractional distance of the pixel between tissue boundaries in either the horizontal or vertical direction. This insures that skin lines are put in registration. The second method, which is the method of primary interest, automatically detects pectoral muscles, skin lines and nipple locations. For each image, a polar coordinate system is established with its origin at the intersection of the nipple axes line (NAL) and a line indicating the pectoral muscle. Points within a mammogram are identified by the angle of their position vector, relative to the NAL, and by their fractional distance between the origin and the skin line. This deforms mammograms in such a way that their pectoral lines, NALs and skin lines are all in registration. After images are deformed, their grayscales are adjusted by applying linear regression to pixel value pairs for corresponding tissue pixels. In a comparison of these methods to a previously reported 'translation/rotation' technique, evaluation of difference images clearly indicates that the polar coordinates method results in the most accurate registration of the transformations considered.

  11. The fragmented nature of tundra landscape

    NASA Astrophysics Data System (ADS)

    Virtanen, Tarmo; Ek, Malin

    2014-04-01

    The vegetation and land cover structure of tundra areas is fragmented when compared to other biomes. Thus, satellite images of high resolution are required for producing land cover classifications, in order to reveal the actual distribution of land cover types across these large and remote areas. We produced and compared different land cover classifications using three satellite images (QuickBird, Aster and Landsat TM5) with different pixel sizes (2.4 m, 15 m and 30 m pixel size, respectively). The study area, in north-eastern European Russia, was visited in July 2007 to obtain ground reference data. The QuickBird image was classified using supervised segmentation techniques, while the Aster and Landsat TM5 images were classified using a pixel-based supervised classification method. The QuickBird classification showed the highest accuracy when tested against field data, while the Aster image was generally more problematic to classify than the Landsat TM5 image. Use of smaller pixel sized images distinguished much greater levels of landscape fragmentation. The overall mean patch sizes in the QuickBird, Aster, and Landsat TM5-classifications were 871 m2, 2141 m2 and 7433 m2, respectively. In the QuickBird classification, the mean patch size of all the tundra and peatland vegetation classes was smaller than one pixel of the Landsat TM5 image. Water bodies and fens in particular occur in the landscape in small or elongated patches, and thus cannot be realistically classified from larger pixel sized images. Land cover patterns vary considerably at such a fine-scale, so that a lot of information is lost if only medium resolution satellite images are used. It is crucial to know the amount and spatial distribution of different vegetation types in arctic landscapes, as carbon dynamics and other climate related physical, geological and biological processes are known to vary greatly between vegetation types.

  12. Land Cover Classification in a Complex Urban-Rural Landscape with Quickbird Imagery

    PubMed Central

    Moran, Emilio Federico.

    2010-01-01

    High spatial resolution images have been increasingly used for urban land use/cover classification, but the high spectral variation within the same land cover, the spectral confusion among different land covers, and the shadow problem often lead to poor classification performance based on the traditional per-pixel spectral-based classification methods. This paper explores approaches to improve urban land cover classification with Quickbird imagery. Traditional per-pixel spectral-based supervised classification, incorporation of textural images and multispectral images, spectral-spatial classifier, and segmentation-based classification are examined in a relatively new developing urban landscape, Lucas do Rio Verde in Mato Grosso State, Brazil. This research shows that use of spatial information during the image classification procedure, either through the integrated use of textural and spectral images or through the use of segmentation-based classification method, can significantly improve land cover classification performance. PMID:21643433

  13. Impervious surface mapping with Quickbird imagery

    PubMed Central

    Lu, Dengsheng; Hetrick, Scott; Moran, Emilio

    2010-01-01

    This research selects two study areas with different urban developments, sizes, and spatial patterns to explore the suitable methods for mapping impervious surface distribution using Quickbird imagery. The selected methods include per-pixel based supervised classification, segmentation-based classification, and a hybrid method. A comparative analysis of the results indicates that per-pixel based supervised classification produces a large number of “salt-and-pepper” pixels, and segmentation based methods can significantly reduce this problem. However, neither method can effectively solve the spectral confusion of impervious surfaces with water/wetland and bare soils and the impacts of shadows. In order to accurately map impervious surface distribution from Quickbird images, manual editing is necessary and may be the only way to extract impervious surfaces from the confused land covers and the shadow problem. This research indicates that the hybrid method consisting of thresholding techniques, unsupervised classification and limited manual editing provides the best performance. PMID:21643434

  14. Spectral-spatial classification of hyperspectral imagery with cooperative game

    NASA Astrophysics Data System (ADS)

    Zhao, Ji; Zhong, Yanfei; Jia, Tianyi; Wang, Xinyu; Xu, Yao; Shu, Hong; Zhang, Liangpei

    2018-01-01

    Spectral-spatial classification is known to be an effective way to improve classification performance by integrating spectral information and spatial cues for hyperspectral imagery. In this paper, a game-theoretic spectral-spatial classification algorithm (GTA) using a conditional random field (CRF) model is presented, in which CRF is used to model the image considering the spatial contextual information, and a cooperative game is designed to obtain the labels. The algorithm establishes a one-to-one correspondence between image classification and game theory. The pixels of the image are considered as the players, and the labels are considered as the strategies in a game. Similar to the idea of soft classification, the uncertainty is considered to build the expected energy model in the first step. The local expected energy can be quickly calculated, based on a mixed strategy for the pixels, to establish the foundation for a cooperative game. Coalitions can then be formed by the designed merge rule based on the local expected energy, so that a majority game can be performed to make a coalition decision to obtain the label of each pixel. The experimental results on three hyperspectral data sets demonstrate the effectiveness of the proposed classification algorithm.

  15. Contribution of non-negative matrix factorization to the classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

    Remote sensing has become an unavoidable tool for better managing our environment, generally by producing land cover maps using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to recognize the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new images (which are at least partly separated images) from HRV SPOT images of the Oran area, Algeria. These images are then used as inputs to a supervised classifier integrating textural information. The classification results on these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of the initial (i.e. non-separated) images. These results show the value of NMF as an attractive pre-processing step for the classification of multispectral remote sensing imagery.
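
    A minimal sketch of NMF-based unmixing used as a pre-processing step before supervised classification; it omits the sparse-coding constraint of the paper, and the arrays, component count and classifier settings are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

# X: (n_pixels, n_bands) multispectral pixels regarded as mixtures of pure
# materials; y: class labels for a subset of pixels (hypothetical arrays).
X = np.random.rand(5000, 4)
y = np.random.randint(0, 5, size=5000)

# NMF unmixes X ~ W @ H: H holds the estimated endmember spectra and W the
# per-pixel abundances, which play the role of the "separated" images.
nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
abundances = nmf.fit_transform(X)           # (n_pixels, n_components)

# The abundance images are then fed to an ordinary supervised classifier.
clf = SVC().fit(abundances[:4000], y[:4000])
print("accuracy on held-out pixels:", clf.score(abundances[4000:], y[4000:]))
```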

  16. Rule-driven defect detection in CT images of hardwood logs

    Treesearch

    Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt

    2000-01-01

    This paper deals with automated detection and identification of internal defects in hardwood logs using computed tomography (CT) images. We have developed a system that employs artificial neural networks to perform tentative classification of logs on a pixel-by-pixel basis. This approach achieves a high level of classification accuracy for several hardwood species (...

  17. Automatic detection and segmentation of vascular structures in dermoscopy images using a novel vesselness measure based on pixel redness and tubularness

    NASA Astrophysics Data System (ADS)

    Kharazmi, Pegah; Lui, Harvey; Stoecker, William V.; Lee, Tim

    2015-03-01

    Vascular structures are one of the most important features in the diagnosis and assessment of skin disorders. The presence and clinical appearance of vascular structures in skin lesions is a discriminating factor among different skin diseases. In this paper, we address the problem of segmentation of vascular patterns in dermoscopy images. Our proposed method is composed of three parts. First, based on biological properties of human skin, we decompose the skin into melanin and hemoglobin components using independent component analysis of skin color images. The relative quantities and pure color densities of each component are then estimated. Subsequently, we obtain three reference vectors of the mean RGB values for normal skin, pigmented skin and blood vessels from the hemoglobin component by averaging over 100000 pixels of each group outlined by an expert. Based on Euclidean distance thresholding, we generate a mask image that extracts the red regions of the skin. The Frangi measure is then applied to the extracted red areas to enhance tubular structures, and finally Otsu's thresholding is applied to segment the vascular structures and obtain a binary vessel mask image. The algorithm was implemented on a set of 50 dermoscopy images. In order to evaluate the performance of our method, we artificially extended some of the existing vessels in our dermoscopy data set and evaluated the ability of the algorithm to segment the newly added vessel pixels. A sensitivity of 95% and specificity of 87% were achieved.
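
    A small sketch of the final two stages (redness masking, then vesselness enhancement and Otsu binarisation), assuming a single reference blood-vessel colour is already available; the reference colour and the distance threshold are illustrative assumptions rather than the paper's learned values.

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu

def vessel_mask(image, vessel_rgb, max_distance=60.0):
    """Keep pixels whose RGB value lies close to the reference blood-vessel
    colour, enhance tubular structures with the Frangi vesselness measure
    restricted to those red areas, then binarise with Otsu's threshold."""
    pixels = image.astype(np.float64)
    redness = np.linalg.norm(pixels - np.asarray(vessel_rgb), axis=2)
    red_mask = redness < max_distance

    gray = pixels.mean(axis=2) / 255.0
    vesselness = frangi(gray) * red_mask          # tubularness only in red regions
    return vesselness > threshold_otsu(vesselness)

# Example call with a hypothetical reference colour for vessels.
# mask = vessel_mask(dermoscopy_image, vessel_rgb=(150.0, 60.0, 70.0))
```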

  18. Mapping ecological states in a complex environment

    NASA Astrophysics Data System (ADS)

    Steele, C. M.; Bestelmeyer, B.; Burkett, L. M.; Ayers, E.; Romig, K.; Slaughter, A.

    2013-12-01

    The vegetation of northern Chihuahuan Desert rangelands is sparse, heterogeneous and, for most of the year, consists of a large proportion of non-photosynthetic material. The soils in this area are spectrally bright and variable in their reflectance properties. Both factors provide challenges to the application of remote sensing for estimating canopy variables (e.g., leaf area index, biomass, percentage canopy cover, primary production). Additionally, with reference to current paradigms of rangeland health assessment, remotely-sensed estimates of canopy variables have limited practical use to the rangeland manager if they are not placed in the context of ecological site and ecological state. To address these challenges, we created a multifactor classification system based on the USDA-NRCS ecological site schema and associated state-and-transition models to map ecological states on desert rangelands in southern New Mexico. Applying this system using per-pixel image processing techniques and multispectral, remotely sensed imagery raised other challenges. Per-pixel image classification relies upon the spectral information in each pixel alone; there is no reference to the spatial context of the pixel and its relationship with its neighbors. Ecological state classes may have direct relevance to managers, but the non-unique spectral properties of different ecological state classes in our study area mean that per-pixel classification of multispectral data performs poorly in discriminating between different ecological states. We found that image interpreters who are familiar with the landscape and its associated ecological site descriptions perform better than per-pixel classification techniques in assigning ecological states. However, two important issues affect manual classification methods: subjectivity of interpretation and reproducibility of results. An alternative to per-pixel classification and manual interpretation is object-based image analysis. Object-based image analysis provides a platform for classification that more closely resembles human recognition of objects within a remotely sensed image. The analysis presented here compares multiple thematic maps created for test locations on the USDA-ARS Jornada Experimental Range ranch. Three study sites in different pastures, each 300 ha in size, were selected for comparison on the basis of their ecological site type ('Clayey', 'Sandy' and a combination of both) and the degree of complexity of vegetation cover. Thematic maps were produced for each study site using (i) manual interpretation of digital aerial photography (by five independent interpreters); (ii) object-oriented, decision-tree classification of fine and moderate spatial resolution imagery (Quickbird; Landsat Thematic Mapper) and (iii) ground survey. To identify areas of uncertainty, we compared agreement in location, areal extent and class assignation among 5 independently produced, manually digitized ecological state maps, and between those maps and the map created from ground survey. Location, areal extent and class assignation of the map produced by object-oriented classification were also assessed with reference to the ground survey map.

  19. Object-based land cover classification based on fusion of multifrequency SAR data and THAICHOTE optical imagery

    NASA Astrophysics Data System (ADS)

    Sukawattanavijit, Chanika; Srestasathiern, Panu

    2017-10-01

    Land Use and Land Cover (LULC) information is important for observing and evaluating environmental change. LULC classification from remotely sensed data is widely employed at global and local scales, particularly in urban areas, which have diverse land cover types; these are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification using high-resolution images. COSMO-SkyMed SAR data were fused with THAICHOTE (namely, THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper presents a comparison between object-based and pixel-based approaches to image fusion. The per-pixel method, support vector machines (SVM), was applied to the image fused by Principal Component Analysis (PCA). The object-based classification was applied to the fused images to separate land cover classes using a nearest neighbor (NN) classifier. Finally, accuracy was assessed by comparing the land cover classifications generated from the fused image dataset and from the THAICHOTE image alone. The object-based classification of the fused COSMO-SkyMed and THAICHOTE images demonstrated the best classification accuracies, well over 85%. The results show that object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.

  20. Efficiency of the spectral-spatial classification of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Borzov, S. M.; Potaturkin, O. I.

    2017-01-01

    The efficiency of methods of the spectral-spatial classification of similarly looking types of vegetation on the basis of hyperspectral data of remote sensing of the Earth, which take into account local neighborhoods of analyzed image pixels, is experimentally studied. Algorithms that involve spatial pre-processing of the raw data and post-processing of pixel-based spectral classification maps are considered. Results obtained both for a large-size hyperspectral image and for its test fragment with different methods of training set construction are reported. The classification accuracy in all cases is estimated through comparisons of ground-truth data and classification maps formed by using the compared methods. The reasons for the differences in these estimates are discussed.

  1. Testing random forest classification for identifying lava flows and mapping age groups on a single Landsat 8 image

    NASA Astrophysics Data System (ADS)

    Li, Long; Solana, Carmen; Canters, Frank; Kervyn, Matthieu

    2017-10-01

    Mapping lava flows using satellite images is an important application of remote sensing in volcanology. Several volcanoes have been mapped through remote sensing using a wide range of data, from optical to thermal infrared and radar images, using techniques such as manual mapping, supervised/unsupervised classification, and elevation subtraction. So far, spectral-based mapping applications mainly focus on the use of traditional pixel-based classifiers, without much investigation into the added value of object-based approaches and into advantages of using machine learning algorithms. In this study, Nyamuragira, characterized by a series of > 20 overlapping lava flows that erupted over the last century, was used as a case study. The random forest classifier was tested to map lava flows based on pixels and objects. Image classification was conducted for the 20 individual flows and for 8 groups of flows of similar age using a Landsat 8 image and a DEM of the volcano, both at 30-meter spatial resolution. Results show that object-based classification produces maps with continuous and homogeneous lava surfaces, in agreement with the physical characteristics of lava flows, while lava flows mapped through the pixel-based classification are heterogeneous and fragmented, with much "salt and pepper" noise. In terms of accuracy, both pixel-based and object-based classification perform well, but the former results in higher accuracies than the latter, except for mapping lava flow age groups without using topographic features. It is concluded that despite spectral similarity, lava flows of contrasting age can be well discriminated and mapped by means of image classification. The classification approach demonstrated in this study only requires easily accessible image data and can be applied to other volcanoes as well if there is sufficient information to calibrate the mapping.

  2. Hyperspectral Image Classification via Kernel Sparse Representation

    DTIC Science & Technology

    2013-01-01

    The spatial coherency across neighboring pixels is incorporated through a kernelized joint sparsity model, where all of the pixels within a small neighborhood are jointly represented in the feature space by selecting a few common training samples. Keywords: hyperspectral imagery, joint sparsity model, kernel methods, sparse representation.

  3. Application of LANDSAT system for improving methodology for inventory and classification of wetlands

    NASA Technical Reports Server (NTRS)

    Gilmer, D. S. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A newly developed software system for generating statistics on surface water features was tested using LANDSAT data acquired prior to 1975. This software test provided a satisfactory evaluation of the system and also allowed expansion of the data base on prairie water features. The software system recognizes water on the basis of a classification algorithm. This classification is accomplished by level thresholding a single near infrared data channel. After each pixel is classified as water or nonwater, the software system then recognizes ponds or lakes as sets of contiguous pixels or as single isolated pixels in the case of very small ponds. Pixels are considered to be contiguous if they are adjacent between successive scan lines. After delineating each water feature, the software system then assigns the feature a position based upon a geographic grid system and calculates the feature's planimetric area, its perimeter, and a parameter known as the shape factor.
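
    A minimal sketch of the pond/lake delineation logic described above: threshold the near-infrared band to classify water pixels, group contiguous water pixels into features, and report each feature's area. The connected-component labelling, the default pixel size and the threshold are illustrative assumptions standing in for the original software.

```python
import numpy as np
from scipy import ndimage

def delineate_water_features(nir_band, threshold, pixel_size_m=80.0):
    """Classify water by level-thresholding the NIR band, label contiguous
    water pixels as individual features, and compute per-feature area."""
    water = nir_band < threshold                      # low NIR reflectance -> water
    labels, n_features = ndimage.label(water)         # contiguous pixels share a label
    areas_m2 = ndimage.sum(water, labels, index=np.arange(1, n_features + 1)) * pixel_size_m ** 2
    return labels, areas_m2
```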

  4. High-speed potato grading and quality inspection based on a color vision system

    NASA Astrophysics Data System (ADS)

    Noordam, Jacco C.; Otten, Gerwoud W.; Timmermans, Toine J. M.; van Zwol, Bauke H.

    2000-03-01

    A high-speed machine vision system for the quality inspection and grading of potatoes has been developed. The vision system grades potatoes on size, shape and external defects such as greening, mechanical damages, rhizoctonia, silver scab, common scab, cracks and growth cracks. A 3-CCD line-scan camera inspects the potatoes in flight as they pass under the camera. The use of mirrors to obtain a 360-degree view of the potato and the lack of product holders guarantee a full view of the potato. To achieve the required capacity of 12 tons/hour, 11 SHARC Digital Signal Processors perform the image processing and classification tasks. The total capacity of the system is about 50 potatoes/sec. The color segmentation procedure uses Linear Discriminant Analysis (LDA) in combination with a Mahalanobis distance classifier to classify the pixels. The procedure for the detection of misshapen potatoes uses a Fourier based shape classification technique. Features such as area, eccentricity and central moments are used to discriminate between similar colored defects. Experiments with red and yellow skin-colored potatoes have shown that the system is robust and consistent in its classification.
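
    A short sketch of the colour segmentation step named above (LDA projection followed by a Mahalanobis distance classifier); the training arrays and the use of per-class covariances in the projected space are assumptions for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_pixel_classifier(X, y):
    """X is (n_pixels, 3) RGB training values, y the skin/defect labels.
    Fit LDA and store each class's mean and inverse covariance in LDA space."""
    lda = LinearDiscriminantAnalysis().fit(X, y)
    Z = lda.transform(X)
    stats = {}
    for c in np.unique(y):
        zc = Z[y == c]
        stats[c] = (zc.mean(axis=0), np.linalg.inv(np.cov(zc.T)))
    return lda, stats

def classify(lda, stats, pixels):
    """Assign each pixel to the class with minimum Mahalanobis distance."""
    Z = lda.transform(pixels)
    classes = list(stats)
    d = np.stack([np.einsum("ij,jk,ik->i", Z - m, inv, Z - m)   # squared Mahalanobis
                  for m, inv in (stats[c] for c in classes)], axis=1)
    return np.array(classes)[d.argmin(axis=1)]
```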

  5. Comparison of Sub-Pixel Classification Approaches for Crop-Specific Mapping

    EPA Science Inventory

    This paper examined two non-linear models, Multilayer Perceptron (MLP) regression and Regression Tree (RT), for estimating sub-pixel crop proportions using time-series MODIS-NDVI data. The sub-pixel proportions were estimated for three major crop types including corn, soybean, a...

  6. Effects of autocorrelation upon LANDSAT classification accuracy. [Richmond, Virginia and Denver, Colorado

    NASA Technical Reports Server (NTRS)

    Craig, R. G. (Principal Investigator)

    1983-01-01

    Richmond, Virginia and Denver, Colorado were study sites in an effort to determine the effect of autocorrelation on the accuracy of a parallelepiped classifier of LANDSAT digital data. The autocorrelation was assumed to decay to insignificant levels when sampled at distances of at least ten pixels. Spectral themes were developed using blocks of adjacent pixels and using groups of pixels spaced at least 10 pixels apart. Effects of geometric distortions were minimized by using only pixels from the interiors of land cover sections. Accuracy was evaluated for three classes: agriculture, residential and "all other"; both type 1 and type 2 errors were evaluated by means of overall classification accuracy. All classes give comparable results. Accuracy is approximately the same in both techniques; however, the variance in accuracy is significantly higher using the themes developed from autocorrelated data. The vectors of mean spectral response were nearly identical regardless of the sampling method used. The estimated variances were much larger when using autocorrelated pixels.

  7. Combinational pixel-by-pixel and object-level classifying, segmenting, and agglomerating in performing quantitative image analysis that distinguishes between healthy non-cancerous and cancerous cell nuclei and delineates nuclear, cytoplasm, and stromal material objects from stained biological tissue materials

    DOEpatents

    Boucheron, Laura E

    2013-07-16

    Quantitative object and spatial arrangement-level analysis of tissue is detailed using expert (pathologist) input to guide the classification process. A two-step method is disclosed for imaging tissue, by classifying one or more biological materials, e.g. nuclei, cytoplasm, and stroma, in the tissue into one or more identified classes on a pixel-by-pixel basis, and segmenting the identified classes to agglomerate one or more sets of identified pixels into segmented regions. Typically, the one or more biological materials comprise nuclear material, cytoplasm material, and stromal material. The method further allows a user to mark up the image subsequent to the classification to re-classify said materials. The markup is performed via a graphic user interface to edit designated regions in the image.

  8. Virus based Full Colour Pixels using a Microheater

    NASA Astrophysics Data System (ADS)

    Kim, Won-Geun; Kim, Kyujung; Ha, Sung-Hun; Song, Hyerin; Yu, Hyun-Woo; Kim, Chuntae; Kim, Jong-Man; Oh, Jin-Woo

    2015-09-01

    Mimicking natural structures has received considerable attention, and there have been a few practical advances. Tremendous efforts based on self-assembly techniques have contributed to the development of novel photonic structures that mimic nature's inventions. We emulate the photonic structures underlying colour generation in mammalian skin and avian skin/feathers using the M13 phage. The structures can generate a full range of RGB colours that can be sensitively switched by temperature and substrate materials. Consequently, we developed an M13 phage-based, temperature-dependent, actively controllable colour pixel platform on a microheater chip. Given the simplicity of the fabrication process, the low voltage requirements and the cycling stability, the virus colour pixels could substitute for conventional colour pixels in the development of various implantable, wearable and flexible devices in the future.

  9. Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed by iteratively merging neighboring regions with the smallest Dissimilarity Criterion (DC) and recomputing class labels for the new regions. The DC is computed by comparing the region mean vectors, class labels and the number of pixels in the two regions under consideration. The algorithm converges when all the pixels have been involved in the region-merging procedure. Experimental results are presented on two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions, when compared to previously proposed classification techniques.

  10. Continuous Change Detection and Classification (CCDC) of Land Cover Using All Available Landsat Data

    NASA Astrophysics Data System (ADS)

    Zhu, Z.; Woodcock, C. E.

    2012-12-01

    A new algorithm for Continuous Change Detection and Classification (CCDC) of land cover using all available Landsat data is developed. This new algorithm is capable of detecting many kinds of land cover change as new images are collected, while at the same time providing land cover maps for any given time. To better identify land cover change, a two-step cloud, cloud shadow, and snow masking algorithm is used for eliminating "noisy" observations. Next, a time series model that has components of seasonality, trend, and break estimates the surface reflectance and temperature. The time series model is updated continuously with newly acquired observations. Due to the high variability in spectral response for different kinds of land cover change, the CCDC algorithm uses a data-driven threshold derived from all seven Landsat bands. When the difference between observed and predicted values exceeds the thresholds three consecutive times, a pixel is identified as having undergone land cover change. Land cover classification is done after change detection. Coefficients from the time series models and the Root Mean Square Error (RMSE) from model fitting are used as classification inputs for the Random Forest Classifier (RFC). We applied this new algorithm to one Landsat scene (Path 12 Row 31) that includes all of Rhode Island as well as much of Eastern Massachusetts and parts of Connecticut. A total of 532 Landsat images acquired between 1982 and 2011 were processed. During this period, 619,924 pixels were detected to have changed once (91% of total changed pixels) and 60,199 pixels to have changed twice (8% of total changed pixels). The most frequent land cover change category is from mixed forest to low-density residential, which accounts for more than 8% of total land cover change pixels.
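
    A heavily simplified, single-band sketch of the per-pixel logic described above: fit a time series model with trend and an annual harmonic, then flag a change when the residual exceeds a multiple of the RMSE on several consecutive observations. The threshold multiplier, the 24-observation initialisation window and the single-band test are assumptions; the full algorithm uses all seven Landsat bands and a data-driven threshold.

```python
import numpy as np

def fit_harmonic_model(t, y):
    """Least-squares fit of mean + linear trend + annual harmonic (t in days)."""
    w = 2 * np.pi / 365.25
    A = np.column_stack([np.ones_like(t), t, np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rmse = np.sqrt(np.mean((A @ coef - y) ** 2))
    return coef, rmse

def detect_change(t, y, k=3.0, consecutive=3):
    """Flag a change when |observed - predicted| > k * RMSE on `consecutive`
    successive clear observations; returns the date of the detected break."""
    coef, rmse = fit_harmonic_model(t[:24], y[:24])   # initialise on early observations
    w = 2 * np.pi / 365.25
    run = 0
    for ti, yi in zip(t[24:], y[24:]):
        pred = coef @ np.array([1.0, ti, np.cos(w * ti), np.sin(w * ti)])
        run = run + 1 if abs(yi - pred) > k * rmse else 0
        if run >= consecutive:
            return ti
    return None
```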

  11. Automated artery-venous classification of retinal blood vessels based on structural mapping method

    NASA Astrophysics Data System (ADS)

    Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.

    2012-03-01

    Retinal blood vessels show morphologic modifications in response to various retinopathies. However, the specific responses exhibited by arteries and veins may provide more precise diagnostic information, e.g., diabetic retinopathy may be detected more accurately from venous dilatation than from average vessel dilatation. In order to analyze vessel-type-specific morphologic modifications, the classification of a vessel network into arteries and veins is required. We previously described a method for identification and separation of retinal vessel trees, i.e. structural mapping. Therefore, we propose artery-venous classification based on structural mapping and on identification of color properties characteristic of the vessel types. The mean and standard deviation of the green channel intensity and the hue channel intensity are analyzed in a region of interest around each centerline pixel of a vessel. Using the vector of color properties extracted from each centerline pixel, the pixel is classified into one of two clusters (artery and vein) obtained by fuzzy C-means clustering. According to the proportion of clustered centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each vessel is assigned the label of an artery or a vein. The classification results are compared with the manually annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus images, resulting in an accuracy of 88.28% correctly classified vessel pixels. The automated classification results match well with the gold standard, suggesting its potential in artery-venous classification and the respective morphology analysis.
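
    A minimal hand-rolled fuzzy C-means, included only to make the clustering step concrete; the feature construction (mean/std of green intensity and hue per centerline pixel), the fuzzifier m and the iteration count are illustrative assumptions.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Cluster centerline-pixel colour features into two groups (artery, vein).
    Returns the cluster centres and the membership matrix U (n_samples, n_clusters)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        ratio = dist[:, :, None] / dist[:, None, :]
        U = 1.0 / (ratio ** (2.0 / (m - 1))).sum(axis=2)   # standard FCM membership update
    return centres, U

# Each vessel would then take the label of the cluster holding the majority of
# its centerline pixels, e.g. np.bincount(U.argmax(axis=1)[vessel_indices]).argmax().
```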

  12. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes achieve good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it requires most of the processing time needed to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probabilities are easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. It has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimising the new cell segmentation quality criterion produces efficient cell segmentation.

  13. Wavelet-based statistical classification of skin images acquired with reflectance confocal microscopy

    PubMed Central

    Halimi, Abdelghafour; Batatia, Hadj; Le Digabel, Jimmy; Josse, Gwendal; Tourneret, Jean Yves

    2017-01-01

    Detecting skin lentigo in reflectance confocal microscopy images is an important and challenging problem. This imaging modality has not yet been widely investigated for this problem, and only a few automatic processing techniques exist. They are mostly based on machine learning approaches and rely on numerous classical image features that lead to high computational costs given the very large resolution of these images. This paper presents a detection method with very low computational complexity that is able to identify the skin depth at which the lentigo can be detected. The proposed method performs multiresolution decomposition of the image obtained at each skin depth. The distribution of image pixels at a given depth can be approximated accurately by a generalized Gaussian distribution whose parameters depend on the decomposition scale, resulting in a very low-dimensional parameter space. SVM classifiers are then investigated to classify the scale parameter of this distribution, allowing real-time detection of lentigo. The method is applied to 45 healthy and lentigo patients from a clinical study, where a sensitivity of 81.4% and a specificity of 83.3% are achieved. Our results show that lentigo is identifiable at depths between 50μm and 60μm, corresponding to the average location of the dermoepidermal junction. This result is in agreement with clinical practice, which characterizes lentigo by assessing the disorganization of the dermoepidermal junction.
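
    A sketch of the low-complexity feature extraction described above, assuming a 2D discrete wavelet transform as the multiresolution decomposition; the choice of wavelet, the number of levels and the final classifier settings are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.stats import gennorm
from sklearn.svm import SVC

def ggd_features(image, wavelet="db2", level=3):
    """Decompose a confocal image slice with a 2D wavelet transform and fit a
    generalized Gaussian to the detail coefficients of each subband, keeping
    only the shape and scale parameters as a very low-dimensional feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = []
    for details in coeffs[1:]:                   # (cH, cV, cD) for each level
        for band in details:
            beta, loc, scale = gennorm.fit(band.ravel(), floc=0.0)
            feats.extend([beta, scale])
    return np.array(feats)

# Each depth slice yields one feature vector; an SVM then separates lentigo
# from healthy skin (hypothetical training data):
# X = np.stack([ggd_features(img) for img in depth_slices])
# clf = SVC().fit(X, y)
```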

  14. Classification of skin cancer images using local binary pattern and SVM classifier

    NASA Astrophysics Data System (ADS)

    Adjed, Faouzi; Faye, Ibrahima; Ababsa, Fakhreddine; Gardezi, Syed Jamal; Dass, Sarat Chandra

    2016-11-01

    In this paper, a classification method for melanoma and non-melanoma skin cancer images is presented using local binary patterns (LBP). The LBP computes the local texture information from the skin cancer images, which is later used to compute statistical features that have the capability to discriminate melanoma and non-melanoma skin tissues. A support vector machine (SVM) is applied to the feature matrix for classification into two skin image classes (malignant and benign). The method achieves a good classification accuracy of 76.1%, with a sensitivity of 75.6% and a specificity of 76.7%.
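
    The LBP-plus-SVM pipeline can be sketched as follows, using scikit-image's uniform LBP as the texture operator and an LBP histogram as the feature vector; the images, labels and parameter values are hypothetical, and the histogram feature differs from the statistical features used in the paper.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def lbp_histogram(gray_image, points=8, radius=1):
            """Normalized histogram of uniform LBP codes as a texture descriptor."""
            lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
            n_bins = points + 2                   # uniform codes plus one catch-all bin
            hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
            return hist

        # Hypothetical grayscale lesion images and labels (0 = benign, 1 = malignant).
        images = [np.random.rand(64, 64) for _ in range(30)]
        labels = np.random.randint(0, 2, 30)
        X = np.array([lbp_histogram(im) for im in images])

        clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)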

  15. ISBDD Model for Classification of Hyperspectral Remote Sensing Imagery

    PubMed Central

    Li, Na; Xu, Zhaopeng; Zhao, Huijie; Huang, Xinchen; Drummond, Jane; Wang, Daming

    2018-01-01

    The diverse density (DD) algorithm was proposed to handle the problem of low classification accuracy when training samples contain interference such as mixed pixels. The DD algorithm can learn a feature vector from training bags, which comprise instances (pixels). However, the feature vector learned by the DD algorithm cannot always effectively represent one type of ground cover. To handle this problem, an instance space-based diverse density (ISBDD) model that employs a novel training strategy is proposed in this paper. In the ISBDD model, DD values of each pixel are computed instead of learning a feature vector, and as a result, the pixel can be classified according to its DD values. Airborne hyperspectral data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and the Push-broom Hyperspectral Imager (PHI) are used to evaluate the performance of the proposed model. Results show that the overall classification accuracy of the ISBDD model on the AVIRIS and PHI images is up to 97.65% and 89.02%, respectively, while the kappa coefficient is up to 0.97 and 0.88, respectively. PMID:29510547

  16. Toward multidisciplinary use of LANDSAT: Interfacing computerized LANDSAT analysis systems with geographic information systems

    NASA Technical Reports Server (NTRS)

    Myers, W. L.

    1981-01-01

    The LANDSAT-geographic information system (GIS) interface must summarize the results of the LANDSAT classification over the same cells that serve as geographic referencing units for the GIS, and output these summaries on a cell-by-cell basis in a form that is readable by the input routines of the GIS. The ZONAL interface for cell-oriented systems consists of two primary programs. The PIXCEL program scans the grid of cells and outputs a channel of pixels. Each pixel contains not the reflectance values but the identifier of the cell in which the center of the pixel is located. This file of pixelized cells along with the results of a pixel-by-pixel classification of the scene produced by the LANDSAT analysis system are input to the CELSUM program which then outputs a cell-by-cell summary formatted according to the requirements of the host GIS. Cross-correlation of the LANDSAT layer with the other layers in the data base is accomplished with the analysis and display facilities of the GIS.
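
    The cell-by-cell summary produced by the CELSUM step can be illustrated with the short sketch below, which counts, for every geographic referencing cell, how many pixels fall into each LANDSAT class; the rasters and class counts are hypothetical stand-ins, not the original programs.

        import numpy as np

        # Hypothetical co-registered rasters: the cell identifier of each pixel
        # (the "pixelized cells") and the per-pixel classification result.
        cell_id = np.random.randint(0, 50, size=(200, 200))
        pixel_class = np.random.randint(0, 4, size=(200, 200))
        n_cells, n_classes = 50, 4

        # Count the pixels of each class within every cell.
        summary = np.zeros((n_cells, n_classes), dtype=int)
        np.add.at(summary, (cell_id.ravel(), pixel_class.ravel()), 1)

        # Per-cell class proportions, formatted row by row for the host GIS.
        proportions = summary / summary.sum(axis=1, keepdims=True)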

  17. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation and contrast of the spatial structures present in the image. Then the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines using the available spectral information and the extracted spatial information. Spatial post-processing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple classifier system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.

  18. A coarse-to-fine approach for medical hyperspectral image classification with sparse representation

    NASA Astrophysics Data System (ADS)

    Chang, Lan; Zhang, Mengmeng; Li, Wei

    2017-10-01

    A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique at different scales is employed to exploit the edges of the input image, where coarse super-pixel patches provide global classification information while fine ones provide further detail. Unlike a common RGB image, a hyperspectral image has many bands, which allows the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns labels to the testing samples in a local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, the classification results for the different super-pixel sizes are fused, offering at least two benefits: (1) the final result is clearly superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.

  19. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    NASA Astrophysics Data System (ADS)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for postevent imagery of earthquake damage. Our results show an improvement against both pixel-based and object-based methods for classifying earthquake damage in high resolution, post-event imagery.

  20. Object-Based Classification as an Alternative Approach to the Traditional Pixel-Based Classification to Identify Potential Habitat of the Grasshopper Sparrow

    NASA Astrophysics Data System (ADS)

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a newer method that can achieve the same objective based on the segmentation of the spectral bands of the image, creating polygons that are homogeneous with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process in which an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied to other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.

  1. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    NASA Astrophysics Data System (ADS)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each spatial pixel gathers the spectral information of its reflectance. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by applying a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three different stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
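
    The three-stage chain (PCA one-band representation, pixel-wise SVM, KNN spatial-spectral filtering) can be sketched sequentially as below; the cube, labels and neighbourhood size are hypothetical, and the simple majority-vote filter only approximates the published KNN filtering.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.neighbors import NearestNeighbors

        h, w, bands = 32, 32, 100
        cube = np.random.rand(h, w, bands)            # hypothetical hyperspectral cube
        pixels = cube.reshape(-1, bands)
        labels = np.random.randint(0, 4, h * w)       # hypothetical reference labels

        # Stage 1: one-band representation from the first principal component.
        pc1 = PCA(n_components=1).fit_transform(pixels).ravel()

        # Stage 2: pixel-wise SVM classification.
        svm_labels = SVC(kernel="rbf", gamma="scale").fit(pixels, labels).predict(pixels)

        # Stage 3: KNN filtering that combines PC1 values with spatial coordinates;
        # each pixel takes the majority label among its nearest neighbours.
        rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        feats = np.column_stack([rows.ravel(), cols.ravel(), pc1])
        _, idx = NearestNeighbors(n_neighbors=9).fit(feats).kneighbors(feats)
        filtered = np.array([np.bincount(svm_labels[i]).argmax() for i in idx])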

  2. A patch-based convolutional neural network for remote sensing image classification.

    PubMed

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

    Availability of accurate land cover information over large areas is essential to global environmental sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, the low accuracy of existing per-pixel classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to the lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top-of-atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed system outperformed a pixel-based neural network, a pixel-based CNN and a patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
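
    A minimal PyTorch sketch of a patch-based CNN of the kind described above; the patch size, band count, class count and layer sizes are hypothetical and not taken from the paper.

        import torch
        import torch.nn as nn

        n_bands, patch, n_classes = 7, 5, 6          # hypothetical configuration

        class PatchCNN(nn.Module):
            """Small CNN that classifies the centre pixel of a neighbourhood patch."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                )
                self.classifier = nn.Linear(64 * patch * patch, n_classes)

            def forward(self, x):
                x = self.features(x)
                return self.classifier(x.flatten(start_dim=1))

        model = PatchCNN()
        patches = torch.randn(8, n_bands, patch, patch)   # a hypothetical mini-batch
        logits = model(patches)                           # class scores per centre pixel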

  3. Object-oriented and pixel-based classification approach for land cover using airborne long-wave infrared hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil

    2015-01-01

    Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to the thermal hyperspectral data and eight pixel-based classifiers, i.e., constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter, are tested. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. The LWIR data contain emissivity and temperature information about an object. A highest overall accuracy of 90.99% was obtained using the SAM algorithm for the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data: the image is segmented into meaningful objects based on properties such as geometry and length, with pixels grouped into objects using a watershed algorithm, and a supervised classification algorithm, i.e., a support vector machine (SVM), is then applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
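
    Of the pixel-based classifiers listed above, the spectral angle mapper is the simplest to illustrate: each pixel is assigned to the class whose reference spectrum forms the smallest angle with the pixel spectrum. A minimal sketch with hypothetical spectra follows; it is not the implementation evaluated in the paper.

        import numpy as np

        def spectral_angle(pixel_spectrum, reference_spectrum):
            """Angle (in radians) between a pixel spectrum and a class reference."""
            cos = np.dot(pixel_spectrum, reference_spectrum) / (
                np.linalg.norm(pixel_spectrum) * np.linalg.norm(reference_spectrum))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        # Hypothetical data: 5 class reference spectra over 40 LWIR bands.
        references = np.random.rand(5, 40)
        pixel = np.random.rand(40)
        label = int(np.argmin([spectral_angle(pixel, r) for r in references]))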

  4. Aircraft target detection algorithm based on high resolution spaceborne SAR imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing

    2018-03-01

    In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with a Markov random field (MRF): the Gamma mixture model is used to obtain the initial classification result, and the pixel spatial correlations in this result are then optimized by the MRF technique. Additionally, morphology methods are employed to extract the airport region of interest (ROI), in which the suspected aircraft target samples are clarified to reduce false alarms and increase detection performance. Finally, the paper presents the aircraft target detection results, which have been verified by simulation tests.

  5. Pneumothorax detection in chest radiographs using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Aviel; Konen, Eli; Greenspan, Hayit

    2018-02-01

    This study presents a computer-assisted diagnosis system for the detection of pneumothorax (PTX) in chest radiographs based on a convolutional neural network (CNN) for pixel classification. Using a pixel classification approach allows utilization of the texture information in the local environment of each pixel while training a CNN model on millions of training patches extracted from a relatively small dataset. The proposed system uses a pre-processing step of lung field segmentation to overcome the large variability in the input images coming from a variety of imaging sources and protocols. Using the CNN classification, suspected pixel candidates are extracted within each lung segment. A post-processing step follows to remove non-physiological suspected regions and noisy connected components. The overall percentage of suspected PTX area was used as a robust global decision for the presence of PTX in each lung. The system was trained on a set of 117 chest x-ray images with ground truth segmentations of the PTX regions. The system was tested on a set of 86 images and reached a diagnostic accuracy of AUC = 0.95. Overall, the preliminary results are promising and indicate the growing ability of CAD-based systems to detect findings in medical imaging at a clinical level of accuracy.

  6. Comparison of Pixel-Based and Object-Based Classification Using Parameters and Non-Parameters Approach for the Pattern Consistency of Multi Scale Landcover

    NASA Astrophysics Data System (ADS)

    Juniati, E.; Arrofiqoh, E. N.

    2017-09-01

    Information extraction from remote sensing data, especially land cover, can be obtained by digital classification. In practice, some people are more comfortable using visual interpretation to retrieve land cover information; however, it is highly influenced by the subjectivity and knowledge of the interpreter, and it is also time consuming. Digital classification can be done in several ways, depending on the defined mapping approach and the assumptions on data distribution. This study compared several classification methods applied to different data types at the same location. The data used were Landsat 8 satellite imagery, SPOT 6 and orthophotos. In practice, these data are used to produce land cover maps at the 1:50,000 map scale for Landsat, the 1:25,000 map scale for SPOT and the 1:5,000 map scale for orthophotos, but using visual interpretation to retrieve the information. Maximum likelihood classifiers (MLC), which use a pixel-based and parametric approach, were applied to these data, as were artificial neural network classifiers, which use a pixel-based and non-parametric approach. Moreover, this study applied object-based classifiers to the data. The classification system implemented is the land cover classification of the Indonesian topographic map. Classification was applied to each data source, with the aim of recognizing the pattern and assessing the consistency of the land cover maps produced from each data set. Furthermore, the study analyses the benefits and limitations of the methods used.

  7. Aggregation of Sentinel-2 time series classifications as a solution for multitemporal analysis

    NASA Astrophysics Data System (ADS)

    Lewiński, Stanislaw; Nowakowski, Artur; Malinowski, Radek; Rybicki, Marcin; Kukawska, Ewa; Krupiński, Michał

    2017-10-01

    The general aim of this work was to develop an efficient and reliable aggregation method that could be used for creating a land cover map at a global scale from multitemporal satellite imagery. The study described in this paper presents methods for combining the results of land cover/land use classifications performed on single-date Sentinel-2 images acquired at different time periods. For that purpose, different aggregation methods were proposed and tested on study sites spread across different continents. The initial classifications were performed with a Random Forest classifier on individual Sentinel-2 images from a time series. In the following step, the resulting land cover maps were aggregated pixel by pixel using three different combinations of the number of occurrences of a certain land cover class within the time series and the posterior probability of particular classes resulting from the Random Forest classification. Two of the proposed methods proved superior and in most cases were able to reach or outperform the accuracy of the best individual classifications of single-date images. Moreover, the aggregation results are very stable when used on data with varying cloudiness. They also considerably reduce the number of cloudy pixels in the resulting land cover map, which is a significant advantage for mapping areas with frequent cloud coverage.
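
    One simple way to aggregate per-date labels and their posterior probabilities, in the spirit of the combinations described above (though not the exact published rules), is to accumulate the posterior probability of each date's predicted class over the time series and keep the class with the largest support; the sketch below uses hypothetical label and probability maps.

        import numpy as np

        # Hypothetical inputs: for each acquisition date, a per-pixel label map and
        # the Random Forest posterior probability of that predicted label.
        n_dates, h, w, n_classes = 6, 100, 100, 8
        labels = np.random.randint(0, n_classes, size=(n_dates, h, w))
        proba = np.random.rand(n_dates, h, w)

        # Accumulate, per pixel and class, the posterior probability over the series,
        # then pick the class with the largest accumulated support.
        support = np.zeros((h, w, n_classes))
        rows = np.arange(h)[:, None]
        cols = np.arange(w)[None, :]
        for t in range(n_dates):
            np.add.at(support, (rows, cols, labels[t]), proba[t])
        aggregated = support.argmax(axis=2)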

  8. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation, and contrast of the spatial structures present in the image. Then, the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines (SVMs) using the available spectral information and the extracted spatial information. Spatial postprocessing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple-classifier (MC) system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral–spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.

  9. SVM Pixel Classification on Colour Image Segmentation

    NASA Astrophysics Data System (ADS)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

    The aim of image segmentation is to simplify the representation of an image, with the help of pixel clustering, into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, and more precisely to label every pixel so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic highlighted in this paper. It has useful applications in the fields of concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. First, the colour and texture features used as input to the SVM classifier need to be recognized. These inputs are extracted via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. The classifier is then trained using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the capability of the SVM classifier are combined by the algorithm to form the final segmented image. The method yields a well-developed segmented image, with increased quality and faster processing compared with previously proposed segmentation methods. One of its latest applications is the Light L16 camera.

  10. The effect of imposing 'fractional abundance constraints' onto the multilayer perceptron for sub-pixel land cover classification

    NASA Astrophysics Data System (ADS)

    Heremans, Stien; Suykens, Johan A. K.; Van Orshoven, Jos

    2016-02-01

    To be physically interpretable, sub-pixel land cover fractions or abundances should fulfill two constraints, the Abundance Non-negativity Constraint (ANC) and the Abundance Sum-to-one Constraint (ASC). This paper focuses on the effect of imposing these constraints onto the MultiLayer Perceptron (MLP) for a multi-class sub-pixel land cover classification of a time series of low resolution MODIS-images covering the northern part of Belgium. Two constraining modes were compared, (i) an in-training approach that uses 'softmax' as the transfer function in the MLP's output layer and (ii) a post-training approach that linearly rescales the outputs of the unconstrained MLP. Our results demonstrate that the pixel-level prediction accuracy is markedly increased by the explicit enforcement, both in-training and post-training, of the ANC and the ASC. For aggregations of pixels (municipalities), the constrained perceptrons perform at least as well as their unconstrained counterparts. Although the difference in performance between the in-training and post-training approach is small, we recommend the former for integrating the fractional abundance constraints into MLPs meant for sub-pixel land cover estimation, regardless of the targeted level of spatial aggregation.
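
    The post-training constraining mode can be illustrated with the short sketch below, which forces raw MLP outputs to satisfy the ANC and ASC by clipping negatives and renormalising each pixel's fractions to sum to one; this is one simple realisation, not necessarily the exact linear rescaling used in the paper, and the sample outputs are hypothetical.

        import numpy as np

        def constrain_abundances(raw_outputs):
            """Force predicted class fractions to satisfy the ANC (non-negative)
            and the ASC (sum to one). Illustrative post-processing only."""
            fractions = np.clip(raw_outputs, 0.0, None)      # ANC
            totals = fractions.sum(axis=1, keepdims=True)
            totals[totals == 0.0] = 1.0                      # guard against all-zero rows
            return fractions / totals                        # ASC

        # Hypothetical unconstrained MLP outputs for 3 pixels and 3 cover classes.
        raw = np.array([[0.7, 0.4, -0.1],
                        [1.3, 0.0, 0.1],
                        [-0.2, -0.1, 0.3]])
        print(constrain_abundances(raw))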

  11. An experiment in multispectral, multitemporal crop classification using relaxation techniques

    NASA Technical Reports Server (NTRS)

    Davis, L. S.; Wang, C.-Y.; Xie, H.-C

    1983-01-01

    The paper describes the result of an experimental study concerning the use of probabilistic relaxation for improving pixel classification rates. Two LACIE sites were used in the study and in both cases, relaxation resulted in a marked improvement in classification rates.

  12. The effect of the atmosphere on the classification of satellite observations to identify surface features

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.; Bahethi, O. P.; Al-Abbas, A. H.

    1977-01-01

    The effect of differences in atmospheric turbidity on the classification of Landsat 1 observations of a rural scene is presented. The observations are classified by an unsupervised clustering technique. These clusters serve as a training set for use of a maximum-likelihood algorithm. The measured radiances in each of the four spectral bands are then changed by amounts measured by Landsat 1. These changes can be associated with a decrease in atmospheric turbidity by a factor of 1.3. The classification of 22% of the pixels changes as a result of the modification. The modified observations are then reclassified as an independent set. Only 3% of the pixels have a different classification than the unmodified set. Hence, if classification errors of rural areas are not to exceed 15%, a new training set has to be developed whenever the difference in turbidity between the training and test sets reaches unity.

  13. Research on a pulmonary nodule segmentation method combining fast self-adaptive FCM and classification.

    PubMed

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem of computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues fast and accurately. As pulmonary nodules are potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodules segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and neighborhood and improves effectively the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.

  14. Interactive classification and content-based retrieval of tissue images

    NASA Astrophysics Data System (ADS)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  15. Cloud cover analysis with Arctic Advanced Very High Resolution Radiometer data. II - Classification with spectral and textural measures

    NASA Technical Reports Server (NTRS)

    Key, J.

    1990-01-01

    The spectral and textural characteristics of polar clouds and surfaces for a 7-day summer series of AVHRR data in two Arctic locations are examined, and the results used in the development of a cloud classification procedure for polar satellite data. Since spatial coherence and texture sensitivity tests indicate that a joint spectral-textural analysis based on the same cell size is inappropriate, cloud detection with AVHRR data and surface identification with passive microwave data are first done on the pixel level as described by Key and Barry (1989). Next, cloud patterns within 250-sq-km regions are described, then the spectral and local textural characteristics of cloud patterns in the image are determined and each cloud pixel is classified by statistical methods. Results indicate that both spectral and textural features can be utilized in the classification of cloudy pixels, although spectral features are most useful for the discrimination between cloud classes.

  16. Blob-level active-passive data fusion for Benthic classification

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Kalluri, Hemanth; Mathur, Abhinav; Ramnath, Vinod; Kim, Minsu; Aitken, Jennifer; Tuell, Grady

    2012-06-01

    We extend data fusion from the pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to form labeled blobs having high similarity in the feature domain and connectivity in the spatial domain. We have also developed Bhattacharyya Distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion algorithms into the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data at Plymouth Harbor, Massachusetts, we achieved improved benthic classification accuracies over those produced with either single-sensor or pixel-level fusion strategies. These results appear to validate the hypothesis that classification accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.
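
    The Bhattacharyya Distance classifier mentioned above relies on the distance between class-conditional feature distributions; for Gaussian class models it has the closed form sketched below, with hypothetical blob-level statistics (not the CZMIL implementation).

        import numpy as np

        def bhattacharyya_distance(mu1, cov1, mu2, cov2):
            """Bhattacharyya distance between two Gaussian class models."""
            cov = 0.5 * (cov1 + cov2)
            diff = mu1 - mu2
            term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
            term2 = 0.5 * np.log(np.linalg.det(cov) /
                                 np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
            return term1 + term2

        # Hypothetical blob-level feature statistics for two benthic classes.
        mu_a, cov_a = np.array([0.2, 0.5]), np.array([[0.02, 0.0], [0.0, 0.03]])
        mu_b, cov_b = np.array([0.6, 0.1]), np.array([[0.01, 0.0], [0.0, 0.02]])
        print(bhattacharyya_distance(mu_a, cov_a, mu_b, cov_b))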

  17. Comparison between two race/skin color classifications in relation to health-related outcomes in Brazil.

    PubMed

    Travassos, Claudia; Laguardia, Josué; Marques, Priscilla M; Mota, Jurema C; Szwarcwald, Celia L

    2011-08-25

    This paper aims to compare the classification of race/skin color based on the discrete categories used by the Demographic Census of the Brazilian Institute of Geography and Statistics (IBGE) and a skin color scale with values ranging from 1 (lighter skin) to 10 (darker skin), examining whether choosing one alternative or the other can influence measures of self-evaluation of health status, health care service utilization and discrimination in the health services. This is a cross-sectional study based on data from the World Health Survey carried out in Brazil in 2003 with a sample of 5000 individuals older than 18 years. Similarities between the two classifications were evaluated by means of correspondence analysis. The effect of the two classifications on health outcomes was tested through logistic regression models for each sex, using age, educational level and ownership of consumer goods as covariables. Both measures of race/skin color represent the same race/skin color construct. The results show a tendency among Brazilians to classify their skin color in shades closer to the center of the color gradient. Women tend to classify their race/skin color as a little lighter than men in the skin color scale, an effect not observed when IBGE categories are used. With regard to health and health care utilization, race/skin color was not relevant in explaining any of them, regardless of the race/skin color classification. Lack of money and social class were the most prevalent reasons for discrimination in healthcare reported in the survey, suggesting that in Brazil the discussion about discrimination in the health care must not be restricted to racial discrimination and should also consider class-based discrimination. The study shows that the differences of the two classifications of race/skin color are small. However, the interval scale measure appeared to increase the freedom of choice of the respondent.

  18. Comparison between two race/skin color classifications in relation to health-related outcomes in Brazil

    PubMed Central

    2011-01-01

    Background This paper aims to compare the classification of race/skin color based on the discrete categories used by the Demographic Census of the Brazilian Institute of Geography and Statistics (IBGE) and a skin color scale with values ranging from 1 (lighter skin) to 10 (darker skin), examining whether choosing one alternative or the other can influence measures of self-evaluation of health status, health care service utilization and discrimination in the health services. Methods This is a cross-sectional study based on data from the World Health Survey carried out in Brazil in 2003 with a sample of 5000 individuals older than 18 years. Similarities between the two classifications were evaluated by means of correspondence analysis. The effect of the two classifications on health outcomes was tested through logistic regression models for each sex, using age, educational level and ownership of consumer goods as covariables. Results Both measures of race/skin color represent the same race/skin color construct. The results show a tendency among Brazilians to classify their skin color in shades closer to the center of the color gradient. Women tend to classify their race/skin color as a little lighter than men in the skin color scale, an effect not observed when IBGE categories are used. With regard to health and health care utilization, race/skin color was not relevant in explaining any of them, regardless of the race/skin color classification. Lack of money and social class were the most prevalent reasons for discrimination in healthcare reported in the survey, suggesting that in Brazil the discussion about discrimination in the health care must not be restricted to racial discrimination and should also consider class-based discrimination. The study shows that the differences of the two classifications of race/skin color are small. However, the interval scale measure appeared to increase the freedom of choice of the respondent. PMID:21867522

  19. North American Magazine Coverage of Skin Cancer and Recreational Tanning Before and After the WHO/IARC 2009 Classification of Indoor Tanning Devices as Carcinogenic.

    PubMed

    McWhirter, Jennifer E; Hoffman-Goetz, Laurie

    2015-09-01

    The mass media is an influential source of skin cancer information for the public. In 2009, the World Health Organization's International Agency for Research on Cancer classified UV radiation from tanning devices as carcinogenic. Our objective was to determine if media coverage of skin cancer and recreational tanning increased in volume or changed in nature after this classification. We conducted a directed content analysis on 29 North American popular magazines (2007-2012) to investigate the overall volume of articles on skin cancer and recreational tanning and, more specifically, the presence of skin cancer risk factors, UV behaviors, and early detection information in article text (n = 410) and images (n = 714). The volume of coverage on skin cancer and recreational tanning did not increase significantly after the 2009 classification of tanning beds as carcinogenic. Key-related messages, including that UV exposure is a risk factor for skin cancer and that indoor tanning should be avoided, were not reported more frequently after the classification, but the promotion of the tanned look as attractive was conveyed more often in images afterwards (p < .01). Content promoting high-SPF sunscreen use increased after the classification (p < .01), but there were no significant positive changes in the frequency of coverage of skin cancer risk factors, other UV behaviors, or early detection information over time. The classification of indoor tanning beds as carcinogenic had no significant impact on the volume or nature of skin cancer and recreational tanning coverage in magazines.

  20. PixelLearn

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph

    2006-01-01

    PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes the process faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.

  1. IMPROVING THE ACCURACY OF HISTORIC SATELLITE IMAGE CLASSIFICATION BY COMBINING LOW-RESOLUTION MULTISPECTRAL DATA WITH HIGH-RESOLUTION PANCHROMATIC DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Getman, Daniel J

    2008-01-01

    Many attempts to observe changes in terrestrial systems over time would be significantly enhanced if it were possible to improve the accuracy of classifications of low-resolution historic satellite data. In an effort to examine improving the accuracy of historic satellite image classification by combining satellite and air photo data, two experiments were undertaken in which low-resolution multispectral data and high-resolution panchromatic data were combined and then classified using the ECHO spectral-spatial image classification algorithm and the Maximum Likelihood technique. The multispectral data consisted of 6 multispectral channels (30-meter pixel resolution) from Landsat 7. These data were augmented with panchromatic data (15-meter pixel resolution) from Landsat 7 in the first experiment, and with a mosaic of digital aerial photography (1-meter pixel resolution) in the second. The addition of the Landsat 7 panchromatic data provided a significant improvement in the accuracy of classifications made using the ECHO algorithm. Although the inclusion of aerial photography provided an improvement in accuracy, this improvement was only statistically significant at a 40-60% level. These results suggest that once error levels associated with combining aerial photography and multispectral satellite data are reduced, this approach has the potential to significantly enhance the precision and accuracy of classifications made using historic remotely sensed data, as a way to extend the time range of efforts to track temporal changes in terrestrial systems.

  2. A Hierarchical Object-oriented Urban Land Cover Classification Using WorldView-2 Imagery and Airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Wu, M. F.; Sun, Z. C.; Yang, B.; Yu, S. S.

    2016-11-01

    In order to reduce the “salt and pepper” effect in pixel-based urban land cover classification and expand the application of multi-source data fusion in the field of urban remote sensing, WorldView-2 imagery and airborne Light Detection and Ranging (LiDAR) data were used to improve the classification of urban land cover. An object-oriented hierarchical classification approach is proposed in our study. The processing of the proposed method consisted of two hierarchies. (1) In the first hierarchy, the LiDAR Normalized Digital Surface Model (nDSM) image was segmented into objects, and NDVI, Coastal Blue and nDSM thresholds were set for extracting building objects. (2) In the second hierarchy, after removing the building objects, a WorldView-2 fused image was obtained by Haze-ratio-based (HR) fusion and segmented, and an SVM classifier was applied to generate road/parking lot, vegetation and bare soil objects. (3) Trees and grasslands were then split based on an nDSM threshold (2.4 meters). The results showed that, compared with pixel-based and non-hierarchical object-oriented approaches, the proposed method provided better urban land cover classification, with the overall accuracy (OA) and overall kappa (OK) improving to 92.75% and 0.90, respectively. Furthermore, the proposed method reduced the “salt and pepper” effect of pixel-based classification, improved the extraction accuracy of buildings based on LiDAR nDSM image segmentation, and reduced the confusion between trees and grasslands by setting the nDSM threshold.

  3. Investigation of skin structures based on infrared wave parameter indirect microscopic imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Liu, Xuefeng; Xiong, Jichuan; Zhou, Lijuan

    2017-02-01

    Detailed imaging and analysis of skin structures are becoming increasingly important in modern healthcare and clinical diagnosis. Nanometer-resolution imaging techniques such as SEM and AFM can cause harmful damage to the sample and cannot measure the whole skin structure from the very surface through the epidermis and dermis to the subcutaneous tissue. Conventional optical microscopy has the highest imaging efficiency, flexibility in on-site applications and the lowest cost in manufacturing and usage, but its image resolution is too low to be accepted for biomedical analysis. Infrared parameter indirect microscopic imaging (PIMI) uses an infrared laser as the light source due to its high transmission in skin. The polarization of the optical wave through the skin sample was modulated while the variation of the optical field was observed at the imaging plane. The intensity variation curve of each pixel was fitted to extract the near-field polarization parameters and form indirect images. During the through-skin light modulation and image retrieval process, the curve fitting removes the blurring scattering from neighboring pixels and keeps only the field variations related to local skin structures. By using infrared PIMI, we can break the diffraction limit and bring the wide-field optical image resolution to sub-200 nm, while taking advantage of the high transmission of infrared waves in skin structures.

  4. Mapping forested wetlands in the Great Zhan River Basin through integrating optical, radar, and topographical data classification techniques.

    PubMed

    Na, X D; Zang, S Y; Wu, C S; Li, W L

    2015-11-01

    Knowledge of the spatial extent of forested wetlands is essential to many studies including wetland functioning assessment, greenhouse gas flux estimation, and wildlife suitable habitat identification. For discriminating forested wetlands from their adjacent land cover types, researchers have resorted to image analysis techniques applied to numerous remotely sensed data. While with some success, there is still no consensus on the optimal approaches for mapping forested wetlands. To address this problem, we examined two machine learning approaches, random forest (RF) and K-nearest neighbor (KNN) algorithms, and applied these two approaches to the framework of pixel-based and object-based classifications. The RF and KNN algorithms were constructed using predictors derived from Landsat 8 imagery, Radarsat-2 advanced synthetic aperture radar (SAR), and topographical indices. The results show that the object-based classifications performed better than the per-pixel classifications using the same algorithm (RF) in terms of overall accuracy, and the difference in their kappa coefficients is statistically significant (p<0.01). There were noticeable omissions for forested and herbaceous wetlands in the per-pixel classifications using the RF algorithm. As for the object-based image analysis, there were also statistically significant differences (p<0.01) in kappa coefficient between the results based on the RF and KNN algorithms. The object-based classification using RF provided a more visually adequate distribution of the land cover types of interest, while the object-based classifications using the KNN algorithm showed noticeable commissions for forested wetlands and omissions for agricultural land. This research proves that object-based classification with RF using optical, radar, and topographical data improved the mapping accuracy of land covers and provided a feasible approach to discriminate forested wetlands from the other land cover types in forested areas.

  5. Hybrid Optimization of Object-Based Classification in High-Resolution Images Using Continuous Ant Colony Algorithm with Emphasis on Building Detection

    NASA Astrophysics Data System (ADS)

    Tamimi, E.; Ebadi, H.; Kiani, A.

    2017-09-01

    Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using other features can improve accuracy. However, adding these features increases the probability that dependent features are present, which reduces accuracy. In addition, some parameters must be determined for Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine the classification parameters and select independent features according to the image type. An optimization algorithm is an efficient way to solve this problem. On the other hand, pixel-based classification faces several challenges, such as producing salt-and-pepper results and high computational time for high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying a continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high automation level, independence from the image scene and type, reduced post-processing for building edge reconstruction, and improved accuracy. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. Also, the Kappa coefficient of the proposed method was 6% higher than that of the RF classification. The processing time of the proposed method was relatively low because the unit of image analysis is the image object. These results show the superiority of the proposed method in terms of time and accuracy.

  6. Evaluating the Visualization of What a Deep Neural Network Has Learned.

    PubMed

    Samek, Wojciech; Binder, Alexander; Montavon, Gregoire; Lapuschkin, Sebastian; Muller, Klaus-Robert

    Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the "importance" of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper, we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets. Our main result is that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of the neural network performance.
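
    The region perturbation idea can be sketched as follows: patches are ranked by their summed heatmap relevance, the most relevant ones are progressively replaced with noise, and the classifier score is recorded after each step, so that a faster-dropping curve indicates a better explanation. The image, heatmap, patch size and scoring function below are hypothetical stand-ins, not the evaluation code used in the paper.

        import numpy as np

        def perturbation_curve(image, heatmap, classify, n_steps=20, patch=8, rng=None):
            """Replace the most relevant patches with noise and track the score."""
            if rng is None:
                rng = np.random.default_rng(0)
            h, w = heatmap.shape
            patches = [(r, c) for r in range(0, h - patch + 1, patch)
                               for c in range(0, w - patch + 1, patch)]
            relevance = [heatmap[r:r + patch, c:c + patch].sum() for (r, c) in patches]
            order = np.argsort(relevance)[::-1]
            perturbed = image.copy()
            scores = [classify(perturbed)]
            for (r, c) in [patches[i] for i in order[:n_steps]]:
                perturbed[r:r + patch, c:c + patch] = rng.uniform(size=(patch, patch))
                scores.append(classify(perturbed))
            return np.array(scores)

        # Hypothetical single-channel image, heatmap and scoring function.
        img = np.random.rand(64, 64)
        hmap = np.random.rand(64, 64)
        curve = perturbation_curve(img, hmap, classify=lambda x: float(x.mean()))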

  7. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation provides an overview of the RHSEG algorithm and describes how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.

  8. As-Built design specification for the CLASFYT program. [production of classification files - crop inventory

    NASA Technical Reports Server (NTRS)

    Horton, C. L. (Principal Investigator)

    1981-01-01

    The CLASFYT program is described in detail. The program produces a one-channel universal-formatted classification file. Trajectory coefficients and a composite set of tolerance values are calculated from five acquisitions of radiance values in each of the training fields corresponding to up to ten agricultural products. These coefficients and tolerance values are used to classify each pixel in the test field of the same segment as the same agricultural product as one of the training fields, as none of the products, or as a screened pixel.

  9. Research on a Pulmonary Nodule Segmentation Method Combining Fast Self-Adaptive FCM and Classification

    PubMed Central

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem of computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues fast and accurately. As pulmonary nodules are potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodules segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and neighborhood and improves effectively the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms. PMID:25945120

  10. RIPARIAN CHARACTERIZATION USING SUB-PIXEL ANALYSIS OF LANDSAT TM IMAGERY FOR USE IN ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    Landuse/land cover and riparian corridor characterization for 7 major watersheds in western Ohio was accomplished using sub-pixel analysis and traditional classification techniques. Areas
    representing forest, woodland, shrub, and herbaceous vegetation were delineated using a ...

  11. Automated analysis and classification of melanocytic tumor on skin whole slide images.

    PubMed

    Xu, Hongming; Lu, Cheng; Berendt, Richard; Jha, Naresh; Mandal, Mrinal

    2018-06-01

    This paper presents a computer-aided technique for automated analysis and classification of melanocytic tumor on skin whole slide biopsy images. The proposed technique consists of four main modules. First, skin epidermis and dermis regions are segmented by a multi-resolution framework. Next, epidermis analysis is performed, where a set of epidermis features reflecting nuclear morphologies and spatial distributions is computed. In parallel with epidermis analysis, dermis analysis is also performed, where dermal cell nuclei are segmented and a set of textural and cytological features are computed. Finally, the skin melanocytic image is classified into different categories such as melanoma, nevus or normal tissue by using a multi-class support vector machine (mSVM) with extracted epidermis and dermis features. Experimental results on 66 skin whole slide images indicate that the proposed technique achieves more than 95% classification accuracy, which suggests that the technique has the potential to be used for assisting pathologists on skin biopsy image analysis and classification. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Locality-preserving sparse representation-based classification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
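
    As a rough illustration of the sparse-representation step described above (after an LPP-style projection has already been applied to both training and testing samples), the following hypothetical sketch codes a test pixel over the training dictionary with orthogonal matching pursuit and assigns the class whose atoms give the smallest reconstruction residual. Function and variable names are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(test_pixel, dictionary, labels, n_nonzero=10):
    """Sparse-representation classification of one projected test pixel.
    dictionary: (n_features, n_train) matrix whose columns are training samples;
    labels: (n_train,) class label of each column."""
    D = dictionary / (np.linalg.norm(dictionary, axis=0, keepdims=True) + 1e-12)  # l2-normalize atoms
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, test_pixel)
    coef = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)          # keep only the class-c coefficients
        residuals[c] = np.linalg.norm(test_pixel - D @ coef_c)
    return min(residuals, key=residuals.get)               # class with the smallest residual
```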

  13. Diverse Region-Based CNN for Hyperspectral Image Classification.

    PubMed

    Zhang, Mengmeng; Li, Wei; Du, Qian

    2018-06-01

    Convolutional neural network (CNN) is of great interest in machine learning and has demonstrated excellent performance in hyperspectral image classification. In this paper, we propose a classification framework, called diverse region-based CNN, which can encode semantic context-aware representation to obtain promising features. By merging a diverse set of discriminative appearance factors, the resulting CNN-based representation exhibits spatial-spectral context sensitivity that is essential for accurate pixel classification. The proposed method, which exploits diverse region-based inputs to learn contextual interactional features, is expected to have more discriminative power. The joint representation containing rich spectral and spatial information is then fed to a fully connected network, and the label of each pixel vector is predicted by a softmax layer. Experimental results with widely used hyperspectral image data sets demonstrate that the proposed method can surpass other conventional deep learning-based classifiers and state-of-the-art classifiers.
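
    The passage above describes feeding region-based inputs into a CNN whose joint representation is classified by a softmax layer. The toy PyTorch model below shows the general pattern for a single patch input only; the diverse region-based merging of the paper is omitted, and the layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Toy CNN that labels the center pixel of a (bands x 9 x 9) hyperspectral patch."""
    def __init__(self, n_bands, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)   # softmax is applied inside the loss

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# example: 103-band patches and 9 classes (sizes are arbitrary here)
model = PatchCNN(n_bands=103, n_classes=9)
logits = model(torch.randn(4, 103, 9, 9))              # (batch, n_classes)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))
```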

  14. Improving urban land use and land cover classification from high-spatial-resolution hyperspectral imagery using contextual information

    NASA Astrophysics Data System (ADS)

    Yang, He; Ma, Ben; Du, Qian; Yang, Chenghai

    2010-08-01

    In this paper, we propose approaches to improve the pixel-based support vector machine (SVM) classification for urban land use and land cover (LULC) mapping from airborne hyperspectral imagery with high spatial resolution. Class spatial neighborhood relationships are used to correct misclassified class pairs, such as roof and trail, or road and roof. These classes may be difficult to separate because they may have similar spectral signatures and their spatial features are not distinct enough to help their discrimination. In addition, misclassification incurred from within-class trivial spectral variation can be corrected by using pixel connectivity information in a local window so that spectrally homogeneous regions can be well preserved. Our experimental results demonstrate the efficiency of the proposed approaches in classification accuracy improvement. The overall performance is competitive with the object-based SVM classification.
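
    One simple way to use pixel connectivity in a local window to correct isolated misclassifications, in the spirit of the paragraph above, is a modal (majority) relabeling pass over the SVM label map. The sketch below is an assumption about one possible post-processing step, not the authors' exact correction rules.

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority_relabel(label_map, window=5):
    """Replace each pixel's class with the modal class of its local window,
    suppressing isolated misclassifications inside homogeneous regions.
    Assumes labels are non-negative integers."""
    def mode(values):
        return np.bincount(values.astype(int)).argmax()
    return generic_filter(label_map.astype(int), mode, size=window, mode='nearest')

# usage: smoothed = majority_relabel(svm_label_map, window=5)
```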

  15. Automatic Detection of Blue-White Veil and Related Structures in Dermoscopy Images

    PubMed Central

    Celebi, M. Emre; Iyatomi, Hitoshi; Stoecker, William V.; Moss, Randy H.; Rabinovitz, Harold S.; Argenziano, Giuseppe; Soyer, H. Peter

    2011-01-01

    Dermoscopy is a non-invasive skin imaging technique, which permits visualization of features of pigmented melanocytic neoplasms that are not discernable by examination with the naked eye. One of the most important features for the diagnosis of melanoma in dermoscopy images is the blue-white veil (irregular, structureless areas of confluent blue pigmentation with an overlying white “ground-glass” film). In this article, we present a machine learning approach to the detection of blue-white veil and related structures in dermoscopy images. The method involves contextual pixel classification using a decision tree classifier. The percentage of blue-white areas detected in a lesion combined with a simple shape descriptor yielded a sensitivity of 69.35% and a specificity of 89.97% on a set of 545 dermoscopy images. The sensitivity rises to 78.20% for detection of blue veil in those cases where it is a primary feature for melanoma recognition. PMID:18804955
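
    The detection step described above boils down to classifying individual pixels with a decision tree trained on color features. The sketch below shows a plain per-pixel version with a hypothetical feature set (absolute and normalized RGB); the contextual features used in the paper are not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_veil_pixel_classifier(rgb_pixels, labels, max_depth=6):
    """rgb_pixels: (n, 3) RGB values of labeled pixels; labels: 1 = blue-white veil, 0 = other."""
    r, g, b = rgb_pixels.T.astype(float)
    total = r + g + b + 1e-6
    # absolute and normalized color features (illustrative, not the paper's exact feature set)
    features = np.column_stack([r, g, b, r / total, g / total, b / total])
    return DecisionTreeClassifier(max_depth=max_depth, random_state=0).fit(features, labels)
```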

  16. Objective assessment in digital images of skin erythema caused by radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsubara, H., E-mail: matubara@nirs.go.jp; Matsufuji, N.; Tsuji, H.

    Purpose: Skin toxicity caused by radiotherapy has been visually classified into discrete grades. The present study proposes an objective and continuous assessment method of skin erythema in digital images taken under arbitrary lighting conditions, which is the case for most clinical environments. The purpose of this paper is to show the feasibility of the proposed method. Methods: Clinical data were gathered from six patients who received carbon beam therapy for lung cancer. Skin condition was recorded using an ordinary compact digital camera under unfixed lighting conditions; a laser Doppler flowmeter was used to measure blood flow in the skin. The photos and measurements were taken at 3 h, 30, and 90 days after irradiation. Images were decomposed into hemoglobin and melanin colors using independent component analysis. Pixel values in hemoglobin color images were compared with skin dose and skin blood flow. The uncertainty of the practical photographic method was also studied in nonclinical experiments. Results: The clinical data showed good linearity between skin dose, skin blood flow, and pixel value in the hemoglobin color images; their correlation coefficients were larger than 0.7. It was deduced from the nonclinical experiments that the uncertainty due to the proposed method with photography was 15%; such an uncertainty was not critical for assessment of skin erythema in practical use. Conclusions: Feasibility of the proposed method for assessment of skin erythema using digital images was demonstrated. The numerical relationship obtained helped to predict skin erythema by artificial processing of skin images. Although the proposed method using photographs taken under unfixed lighting conditions increased the uncertainty of skin information in the images, it was shown to be powerful for the assessment of skin conditions because of its flexibility and adaptability.
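
    A common way to realize the hemoglobin/melanin decomposition mentioned above is independent component analysis applied to the optical density (negative log) of the RGB channels, following the general idea of skin chromophore separation. The sketch below uses scikit-learn's FastICA and is only an assumed, simplified stand-in for the paper's processing chain.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_chromophores(rgb_image):
    """Decompose an RGB skin photo into two independent pigment-like components.
    rgb_image: (h, w, 3) array with values in (0, 1]."""
    h, w, _ = rgb_image.shape
    density = -np.log(np.clip(rgb_image, 1e-4, 1.0)).reshape(-1, 3)   # optical density per pixel
    ica = FastICA(n_components=2, random_state=0)
    components = ica.fit_transform(density)                           # (h*w, 2)
    return components.reshape(h, w, 2)   # hemoglobin-like and melanin-like maps (order not guaranteed)
```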

  17. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    PubMed

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.

  18. Sub-Pixel Mapping of Tree Canopy, Impervious Surfaces, and Cropland in the Laurentian Great Lakes Basin Using MODIS Time-Series Data

    EPA Science Inventory

    This research examined sub-pixel land-cover classification performance for tree canopy, impervious surface, and cropland in the Laurentian Great Lakes Basin (GLB) using both timeseries MODIS (MOderate Resolution Imaging Spectroradiometer) NDVI (Normalized Difference Vegetation In...

  19. Comparison of Sub-pixel Classification Approaches for Crop-specific Mapping

    EPA Science Inventory

    The Moderate Resolution Imaging Spectroradiometer (MODIS) data has been increasingly used for crop mapping and other agricultural applications. Phenology-based classification approaches using the NDVI (Normalized Difference Vegetation Index) 16-day composite (250 m) data product...

  20. Classification of visible and infrared hyperspectral images based on image segmentation and edge-preserving filtering

    NASA Astrophysics Data System (ADS)

    Cui, Binge; Ma, Xiudan; Xie, Xiaoyun; Ren, Guangbo; Ma, Yi

    2017-03-01

    The classification of hyperspectral images with a few labeled samples is a major challenge which is difficult to meet unless some spatial characteristics can be exploited. In this study, we proposed a novel spectral-spatial hyperspectral image classification method that exploited spatial autocorrelation of hyperspectral images. First, image segmentation is performed on the hyperspectral image to assign each pixel to a homogeneous region. Second, the visible and infrared bands of the hyperspectral image are partitioned into multiple subsets of adjacent bands, and each subset is merged into one band. Recursive edge-preserving filtering is performed on each merged band, which utilizes the spectral information of neighborhood pixels. Third, the resulting spectral and spatial feature band set is classified using the SVM classifier. Finally, bilateral filtering is performed to remove "salt-and-pepper" noise in the classification result. To preserve the spatial structure of the hyperspectral image, edge-preserving filtering is applied independently before and after the classification process. Experimental results on different hyperspectral images prove that the proposed spectral-spatial classification approach is robust and offers higher classification accuracy than state-of-the-art methods when the number of labeled samples is small.

  1. Evaluation Methodology between Globalization and Localization Features Approaches for Skin Cancer Lesions Classification

    NASA Astrophysics Data System (ADS)

    Ahmed, H. M.; Al-azawi, R. J.; Abdulhameed, A. A.

    2018-05-01

    Huge efforts have been put into developing diagnostic methods for skin cancer. In this paper, two different approaches have been addressed for detecting skin cancer in dermoscopy images. The first approach uses a global method that uses global features for classifying skin lesions, whereas the second approach uses a local method that uses local features for classifying skin lesions. The aim of this paper is to select the best approach for skin lesion classification. The dataset used in this paper consists of 200 dermoscopy images from Pedro Hispano Hospital (PH2). The achieved results are: sensitivity of about 96%, specificity of about 100%, precision of about 100%, and accuracy of about 97% for the globalization approach, while the localization approach achieved sensitivity of about 100%, specificity of about 100%, precision of about 100%, and accuracy of about 100%. These results show that the localization approach achieved acceptable accuracy and performed better than the globalization approach for skin cancer lesion classification.

  2. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected and classification parameters are optimized for minimum false positive detection in the original and enlarged retinal pictures. The error analysis demonstrates the advantages as well as shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
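
    Pixel duplication, as used above, simply repeats every pixel an integer number of times along both axes (nearest-neighbor enlargement) instead of interpolating new values. A minimal sketch:

```python
import numpy as np

def duplicate_pixels(image, factor=2):
    """Enlarge a 2-D image by integer pixel duplication (no interpolation)."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

# a 3x3 patch becomes 6x6: every original value simply appears in a 2x2 block
patch = np.arange(9).reshape(3, 3)
assert duplicate_pixels(patch).shape == (6, 6)
```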

  3. ASSESSMENT OF LANDSCAPE CHARACTERISTICS ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    EPA Science Inventory

    Landscape characteristics such as small patch size and land cover heterogeneity have been hypothesized to increase the likelihood of misclassifying pixels during thematic image classification. However, there has been a lack of empirical evidence to support these hypotheses. This...

  4. Automatic sub-pixel coastline extraction based on spectral mixture analysis using EO-1 Hyperion data

    NASA Astrophysics Data System (ADS)

    Hong, Zhonghua; Li, Xuesu; Han, Yanling; Zhang, Yun; Wang, Jing; Zhou, Ruyan; Hu, Kening

    2018-06-01

    Many megacities (such as Shanghai) are located in coastal areas; therefore, coastline monitoring is critical for urban security and urban development sustainability. A shoreline is defined as the intersection between coastal land and a water surface and features seawater edge movements as tides rise and fall. Remote sensing techniques have increasingly been used for coastline extraction; however, traditional hard classification methods are performed only at the pixel-level and extracting subpixel accuracy using soft classification methods is both challenging and time consuming due to the complex features in coastal regions. This paper presents an automatic sub-pixel coastline extraction method (ASPCE) from hyperspectral satellite imagery that performs coastline extraction based on spectral mixture analysis and, thus, achieves higher accuracy. The ASPCE method consists of three main components: 1) A Water-Vegetation-Impervious-Soil (W-V-I-S) model is first presented to detect mixed W-V-I-S pixels and determine the endmember spectra in coastal regions; 2) The linear spectral mixture unmixing technique based on Fully Constrained Least Squares (FCLS) is applied to the mixed W-V-I-S pixels to estimate seawater abundance; and 3) The spatial attraction model is used to extract the coastline. We tested this new method using EO-1 images from three coastal regions in China: the South China Sea, the East China Sea, and the Bohai Sea. The results showed that the method is accurate and robust. Root mean square error (RMSE) was utilized to evaluate the accuracy by calculating the distance differences between the extracted coastline and the digitized coastline. The classifier's performance was compared with that of the Multiple Endmember Spectral Mixture Analysis (MESMA), Mixture Tuned Matched Filtering (MTMF), Sequential Maximum Angle Convex Cone (SMACC), Constrained Energy Minimization (CEM), and one classical Normalized Difference Water Index (NDWI). The results from the three test sites indicated that the proposed ASPCE method extracted coastlines more efficiently than did the compared methods, and its coastline extraction accuracy corresponded closely to the digitized coastline, with 0.39 pixels, 0.40 pixels, and 0.35 pixels in the three test regions, showing that the ASPCE method achieves an accuracy below 12.0 m (0.40 pixels). Moreover, in the quantitative accuracy assessment for the three test sites, the ASPCE method shows the best performance in coastline extraction, achieving a 0.35 pixel-level at the Bohai Sea, China test site. Therefore, the proposed ASPCE method can extract coastlines more accurately than can the hard classification methods or other spectral unmixing methods.
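
    The abundance-estimation step described above relies on fully constrained least squares (FCLS) unmixing. One common approximation, sketched below, appends a heavily weighted sum-to-one row to the endmember matrix and solves a non-negative least squares problem; the weight delta and the interface are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(pixel, endmembers, delta=1e3):
    """Approximate fully constrained least squares unmixing of one pixel.
    pixel: (n_bands,); endmembers: (n_bands, n_endmembers).
    Non-negativity is exact; sum-to-one is enforced softly through the weight delta."""
    n_end = endmembers.shape[1]
    A = np.vstack([endmembers, delta * np.ones((1, n_end))])   # append a weighted sum-to-one row
    b = np.concatenate([pixel, [delta]])
    abundances, _ = nnls(A, b)
    return abundances   # e.g., the water endmember's abundance estimates the seawater fraction
```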

  5. Supervised classification of brain tissues through local multi-scale texture analysis by coupling DIR and FLAIR MR sequences

    NASA Astrophysics Data System (ADS)

    Poletti, Enea; Veronese, Elisa; Calabrese, Massimiliano; Bertoldo, Alessandra; Grisan, Enrico

    2012-02-01

    The automatic segmentation of brain tissues in magnetic resonance (MR) is usually performed on T1-weighted images, due to their high spatial resolution. The T1w sequence, however, has some major downsides when brain lesions are present: the altered appearance of diseased tissues causes errors in tissue classification. In order to overcome these drawbacks, we employed two different MR sequences: fluid attenuated inversion recovery (FLAIR) and double inversion recovery (DIR). The former highlights both gray matter (GM) and white matter (WM), the latter highlights GM alone. We propose here a supervised classification scheme that does not require any anatomical a priori information to identify the 3 classes, "GM", "WM", and "background". Features are extracted by means of a local multi-scale texture analysis, computed for each pixel of the DIR and FLAIR sequences. The 9 textures considered are average, standard deviation, kurtosis, entropy, contrast, correlation, energy, homogeneity, and skewness, evaluated on neighborhoods of 3×3, 5×5, and 7×7 pixels. Hence, the total number of features associated with a pixel is 56 (9 textures × 3 scales × 2 sequences + 2 original pixel values). The classifier employed is a Support Vector Machine with a Radial Basis Function kernel. From each of the 4 brain volumes evaluated, a DIR and a FLAIR slice have been selected and manually segmented by 2 expert neurologists, providing 1st and 2nd human reference observations which agree with an average accuracy of 99.03%. SVM performances have been assessed with a 4-fold cross-validation, yielding an average classification accuracy of 98.79%.
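
    As a hedged illustration of the multi-scale texture features described above, the sketch below computes three of the nine statistics (local mean, standard deviation, and skewness) at the 3x3, 5x5, and 7x7 scales for both sequences and stacks them with the raw intensities; the remaining textures and the SVM training are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats(img, size):
    """Per-pixel local mean, standard deviation, and skewness over a size x size window."""
    img = img.astype(float)
    m1 = uniform_filter(img, size)
    m2 = uniform_filter(img ** 2, size)
    m3 = uniform_filter(img ** 3, size)
    var = np.clip(m2 - m1 ** 2, 1e-12, None)
    std = np.sqrt(var)
    skew = (m3 - 3 * m1 * m2 + 2 * m1 ** 3) / std ** 3   # third central moment / std^3
    return m1, std, skew

def multiscale_features(dir_img, flair_img, sizes=(3, 5, 7)):
    """Stack per-pixel texture features from both MR sequences plus the raw intensities."""
    feats = [dir_img.astype(float), flair_img.astype(float)]
    for img in (dir_img, flair_img):
        for s in sizes:
            feats.extend(local_stats(img, s))
    return np.stack(feats, axis=-1)   # (h, w, 2 + 2 sequences x 3 scales x 3 statistics = 20)
```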

  6. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.

    2016-12-01

    The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
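
    The training-data strategy found to work best above (proportional allocation with a per-class floor and ceiling) can be expressed compactly; the sketch below is an illustrative implementation with the totals quoted in the abstract as defaults, not the LCMAP production code.

```python
import numpy as np

def sample_training_pixels(class_map, total=20000, per_class_min=600, per_class_max=8000, seed=0):
    """Draw training pixel coordinates proportionally to class area,
    clipped to [per_class_min, per_class_max] pixels per class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(class_map, return_counts=True)
    targets = np.clip(np.round(total * counts / counts.sum()).astype(int),
                      per_class_min, per_class_max)
    rows, cols = [], []
    for c, n in zip(classes, targets):
        r, col = np.nonzero(class_map == c)
        pick = rng.choice(r.size, size=min(n, r.size), replace=False)
        rows.append(r[pick])
        cols.append(col[pick])
    return np.concatenate(rows), np.concatenate(cols)
```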

  7. IMPACTS OF PATCH SIZE AND LANDSCAPE HETEROGENEITY ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    EPA Science Inventory

    Impacts of Patch Size and Landscape Heterogeneity on Thematic Image Classification Accuracy.
    Currently, most thematic accuracy assessments of classified remotely sensed images only account for errors between the various classes employed, at particular pixels of interest, thu...

  8. 3D Spatial and Spectral Fusion of Terrestrial Hyperspectral Imagery and Lidar for Hyperspectral Image Shadow Restoration Applied to a Geologic Outcrop

    NASA Astrophysics Data System (ADS)

    Hartzell, P. J.; Glennie, C. L.; Hauser, D. L.; Okyay, U.; Khan, S.; Finnegan, D. C.

    2016-12-01

    Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from an exclusively airborne technique to terrestrial modalities. This enables high resolution 3D spatial and spectral quantification of vertical geologic structures for applications such as virtual 3D rock outcrop models for hydrocarbon reservoir analog analysis and mineral quantification in open pit mining environments. In contrast to airborne observation geometry, the vertical surfaces observed by horizontal-viewing terrestrial HSI sensors are prone to extensive topography-induced solar shadowing, which leads to reduced pixel classification accuracy or outright removal of shadowed pixels from analysis tasks. Using a precisely calibrated and registered offset cylindrical linear array camera model, we demonstrate the use of 3D lidar data for sub-pixel HSI shadow detection and the restoration of the shadowed pixel spectra via empirical methods that utilize illuminated and shadowed pixels of similar material composition. We further introduce a new HSI shadow restoration technique that leverages collocated backscattered lidar intensity, which is resistant to solar conditions, obtained by projecting the 3D lidar points through the HSI camera model into HSI pixel space. Using ratios derived from the overlapping lidar laser and HSI wavelengths, restored shadow pixel spectra are approximated using a simple scale factor. Simulations of multiple lidar wavelengths, i.e., multi-spectral lidar, indicate the potential for robust HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance is quantified through HSI pixel classification consistency between full sun and partial sun exposures of a single geologic outcrop.

  9. Sensitivity of geographic information system outputs to errors in remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.

    1981-01-01

    The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.

  10. Validation of a new classification system for skin tears.

    PubMed

    LeBlanc, Kimberly; Baranoski, Sharon; Holloway, Samantha; Langemo, Diane

    2013-06-01

    The aim of this study was to validate and establish reliability of the International Skin Tear classification system. A consensus panel of 12 internationally recognized key opinion leaders convened in 2011 to establish consensus statements on the prevention, prediction, assessment, and treatment of skin tears. Subsequently, a new skin tear classification system was proposed. The system was then tested for interrater and intrarater reliability between the experts before being tested more widely on a sample of 327 individuals from the United States, Canada, and Europe. The results of the study indicated a substantial level of agreement for the expert panel (Fleiss κ = 0.619; 2-month follow-up = 0.653). Intrarater reliability was high (Cohen κ = 0.877). Interrater reliability was moderate (Fleiss κ = 0.555) for healthcare professionals (n = 303) and fair for non-health professionals (Fleiss κ = 0.338; n = 24). This international study established the reliability and validity of a new classification system for skin tears.

  11. Modification of the random forest algorithm to avoid statistical dependence problems when classifying remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando

    2017-06-01

    Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such estimation is not biased and may be used instead of validation based on an external data-set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting whole patches, rather than pixels/objects, into one set or the other would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data-set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
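
    The statistical-dependence problem described above can also be seen, and avoided, outside the random forest internals by validating at the level of whole training patches, so that pixels from the same patch never end up on both sides of a split. The sketch below uses scikit-learn's GroupKFold for that purpose; it illustrates the dependence issue rather than reproducing the authors' modified bootstrap.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

def patch_level_accuracy(X, y, patch_id, n_splits=5):
    """Cross-validated accuracy where whole training patches are held out together,
    so spatially dependent pixels cannot leak between training and validation folds.
    X: (n_pixels, n_bands) spectra; y: (n_pixels,) classes; patch_id: (n_pixels,) patch labels."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(rf, X, y, groups=patch_id, cv=GroupKFold(n_splits=n_splits))
    return scores.mean()
```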

  12. Soccer player recognition by pixel classification in a hybrid color space

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, Nicolas; Macaire, Ludovic; Postaire, Jack-Gerard

    1997-08-01

    Soccer is a very popular sport all over the world. Coaches and sport commentators need accurate information about soccer games, especially about the players' behavior. This information can be gathered by inspectors who watch the soccer match and manually report the actions of the players involved in the principal phases of the game. Generally, these inspectors focus their attention on the few players standing near the ball and don't report on the motion of all the other players. So it seems desirable to design a system which automatically tracks all the players in real time. That's why we propose to automatically track each player through the successive color images of the sequences acquired by a fixed color camera. Each player present in the image is modeled by an active contour model, or snake. When, during the soccer match, a player is hidden by another, the snakes which track these two players merge. So, it becomes impossible to track the players, unless the snakes are interactively re-initialized. Fortunately, in most cases, the two players don't belong to the same team. That is why we present an algorithm which recognizes the teams of the players by pixel classification. Pixels representing the soccer ground must be withdrawn before considering the players themselves. To eliminate these pixels, the color characteristics of the ground are determined interactively. In a second step, dealing with windows containing only one player of one team, the color features which yield the best discrimination between the two teams are selected. Thanks to these color features, the pixels associated with the players of the two teams form two separated clusters in a color space. In fact, there are many color representation systems, and it is interesting to evaluate the features which provide the best separation between the two classes of pixels according to the players' soccer suits. Finally, the classification process for image segmentation is based on the three most discriminating color features, which define the coordinates of each pixel in a 'hybrid color space.' Thanks to this hybrid color representation, each pixel can be assigned to one of the two classes by minimum distance classification.
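
    The final assignment step described above is a minimum-distance classification in the selected three-dimensional hybrid color space. The sketch below assumes the three discriminant features and the two team centroids have already been obtained and only shows the per-pixel assignment.

```python
import numpy as np

def classify_team_pixels(features, team_a_center, team_b_center):
    """Assign each candidate player pixel to team A (0) or team B (1) by minimum
    Euclidean distance in the selected 3-D hybrid color space. features: (n_pixels, 3)."""
    d_a = np.linalg.norm(features - team_a_center, axis=1)
    d_b = np.linalg.norm(features - team_b_center, axis=1)
    return (d_b < d_a).astype(int)
```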

  13. Non-parametric analysis of LANDSAT maps using neural nets and parallel computers

    NASA Technical Reports Server (NTRS)

    Salu, Yehuda; Tilton, James

    1991-01-01

    Nearest neighbor approaches and a new neural network, the Binary Diamond, are used for the classification of images of ground pixels obtained by LANDSAT satellite. The performances are evaluated by comparing classifications of a scene in the vicinity of Washington DC. The problem of optimal selection of categories is addressed as a step in the classification process.

  14. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio

    2008-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.

  15. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E.; Moran, Emilio

    2009-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin. PMID:19789716

  16. Extraction and Analysis of Mega Cities’ Impervious Surface on Pixel-based and Object-oriented Support Vector Machine Classification Technology: A case of Bombay

    NASA Astrophysics Data System (ADS)

    Yu, S. S.; Sun, Z. C.; Sun, L.; Wu, M. F.

    2017-02-01

    The object of this paper is to study an impervious surface extraction method using remote sensing imagery and to monitor the spatiotemporal changing patterns of mega cities. The megacity Bombay was selected as the area of interest. Firstly, the pixel-based and object-oriented support vector machine (SVM) classification methods were used to acquire the land use/land cover (LULC) products of Bombay in 2010. Consequently, the overall accuracy (OA) and overall Kappa (OK) of the pixel-based method were 94.97% and 0.96 with a running time of 78 minutes; the OA and OK of the object-oriented method were 93.72% and 0.94 with a running time of only 17 s. Additionally, the OA and OK of the object-oriented method after a post-classification were improved to 95.8% and 0.94. Then, the dynamic impervious surfaces of Bombay in the period 1973-2015 were extracted and the urbanization pattern of Bombay was analysed. The results showed that both SVM classification methods could accomplish the impervious surface extraction, but the object-oriented method is the better choice. Bombay urbanized rapidly during the past 42 years, implying a dramatic urban sprawl of mega cities in the developing countries along the One Belt and One Road (OBOR).

  17. A new method for skin color enhancement

    NASA Astrophysics Data System (ADS)

    Zeng, Huanzhao; Luo, Ronnier

    2012-01-01

    Skin tone is the most important color category in memory colors. Reproducing it pleasingly is an important factor in photographic color reproduction. Moving skin colors toward their preferred skin color center improves the skin color preference in photographic color reproduction. Two key factors to successfully enhance skin colors are: a method to detect original skin colors effectively even if they are shifted far away from the regular skin color region, and a method to morph skin colors toward a preferred skin color region properly without introducing artifacts. A method for skin color enhancement presented by the authors at the same conference last year applies a static skin color model for skin color detection, which may fail to detect skin colors that are far away from regular skin tones. In this paper, a new method using the combination of face detection and statistical skin color modeling is proposed to detect skin pixels and enhance skin colors more effectively.
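
    A simple example of the kind of statistical skin color model mentioned above is a single Gaussian fitted to the chrominance (CbCr) values of known skin pixels, with new pixels accepted when their Mahalanobis distance falls below a threshold. This is a generic, assumed formulation; the paper combines such modeling with face detection, which is not shown here.

```python
import numpy as np

def fit_skin_model(cbcr_samples):
    """Fit a single Gaussian to (n, 2) CbCr values of known skin pixels."""
    mean = cbcr_samples.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(cbcr_samples, rowvar=False))
    return mean, inv_cov

def skin_mask(cbcr_image, mean, inv_cov, threshold=4.0):
    """Mark pixels whose squared Mahalanobis distance to the skin model is below the threshold.
    cbcr_image: (h, w, 2) chrominance image."""
    diff = cbcr_image.reshape(-1, 2) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return (d2 < threshold).reshape(cbcr_image.shape[:2])
```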

  18. TEMPORAL CORRELATION OF CLASSIFICATIONS IN REMOTE SENSING

    EPA Science Inventory

    A bivariate binary model is developed for estimating the change in land cover from satellite images obtained at two different times. The binary classifications of a pixel at the two times are modeled as potentially correlated random variables, conditional on the true states of th...

  19. IMPACTS OF PATCH SIZE AND LAND COVER HETEROGENEITY ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    EPA Science Inventory


    Landscape characteristics such as small patch size and land cover heterogeneity have been hypothesized to increase the likelihood of misclassifying pixels during thematic image classification. However, there has been a lack of empirical evidence to support these hypotheses,...

  20. A minimum spanning forest based classification method for dedicated breast CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei, E-mail: bfei@emory.edu

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors' classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.

  1. Microcomputer-based classification of environmental data in municipal areas

    NASA Astrophysics Data System (ADS)

    Thiergärtner, H.

    1995-10-01

    Multivariate data-processing methods used in mineral resource identification can be used to classify urban regions. Using elements of expert systems, geographical information systems, as well as known classification and prognosis systems, it is possible to outline a single model that consists of resistant and temporary parts of a knowledge base, including graphical input and output treatment, and of resistant and temporary elements of a bank of methods and algorithms. Whereas decision rules created by experts will be stored in expert systems directly, powerful classification rules in the form of resistant but latent (implicit) decision algorithms may be implemented in the suggested model. The latent functions will be transformed into temporary explicit decision rules by learning processes depending on the actual task(s), parameter set(s), pixel selection(s), and expert control(s). This takes place in both supervised and unsupervised classification of multivariately described pixel sets representing municipal subareas. The model is outlined briefly and illustrated by results obtained in a target area covering a part of the city of Berlin (Germany).

  2. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
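
    The classification half of the iteration described above can be pictured as fitting a Gaussian mixture to the current reconstructed pixel values and reading off per-pixel class posteriors, which would then feed the prior for the next reconstruction step. The sketch below uses scikit-learn's GaussianMixture as a stand-in for the paper's expectation-maximization scheme.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_reconstruction(mu_a_image, n_classes=3):
    """EM-style classification of reconstructed absorption values into tissue classes.
    Returns per-pixel class posteriors plus the fitted class means and variances,
    which could serve as the prior for the next reconstruction step."""
    x = mu_a_image.reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(x)
    posteriors = gmm.predict_proba(x).reshape(*mu_a_image.shape, n_classes)
    return posteriors, gmm.means_.ravel(), gmm.covariances_.ravel()
```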

  3. Non-invasive, transdermal, path-selective and specific glucose monitoring via a graphene-based platform

    NASA Astrophysics Data System (ADS)

    Lipani, Luca; Dupont, Bertrand G. R.; Doungmene, Floriant; Marken, Frank; Tyrrell, Rex M.; Guy, Richard H.; Ilie, Adelina

    2018-06-01

    Currently, there is no available needle-free approach for diabetics to monitor glucose levels in the interstitial fluid. Here, we report a path-selective, non-invasive, transdermal glucose monitoring system based on a miniaturized pixel array platform (realized either by graphene-based thin-film technology, or screen-printing). The system samples glucose from the interstitial fluid via electroosmotic extraction through individual, privileged, follicular pathways in the skin, accessible via the pixels of the array. A proof of principle using mammalian skin ex vivo is demonstrated for specific and `quantized' glucose extraction/detection via follicular pathways, and across the hypo- to hyper-glycaemic range in humans. Furthermore, the quantification of follicular and non-follicular glucose extraction fluxes is clearly shown. In vivo continuous monitoring of interstitial fluid-borne glucose with the pixel array was able to track blood sugar in healthy human subjects. This approach paves the way to clinically relevant glucose detection in diabetics without the need for invasive, finger-stick blood sampling.

  4. Hyperspectral Image Analysis for Skin Tumor Detection

    NASA Astrophysics Data System (ADS)

    Kong, Seong G.; Park, Lae-Jeong

    This chapter presents hyperspectral imaging of fluorescence for noninvasive detection of tumorous tissue on mouse skin. Hyperspectral imaging sensors collect two-dimensional (2D) image data of an object in a number of narrow, adjacent spectral bands. This high-resolution measurement of spectral information reveals a continuous emission spectrum for each image pixel useful for skin tumor detection. The hyperspectral image data used in this study are fluorescence intensities of a mouse sample consisting of 21 spectral bands in the visible spectrum of wavelengths ranging from 440 to 640 nm. Fluorescence signals are measured using a laser excitation source with a center wavelength of 337 nm. An acousto-optic tunable filter is used to capture individual spectral band images at a 10-nm resolution. All spectral band images are spatially registered with the reference band image at 490 nm to obtain exact pixel correspondences by compensating for the offsets caused during the image capture procedure. Support vector machines with polynomial kernel functions provide decision boundaries with a maximum separation margin to classify malignant tumor and normal tissue from the observed fluorescence spectral signatures for skin tumor detection.
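
    The classification step above amounts to training a polynomial-kernel SVM on per-pixel 21-band fluorescence spectra labeled as tumor or normal. The following sketch shows that step only, with illustrative parameter values.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_tumor_classifier(spectra, labels, degree=3):
    """spectra: (n_pixels, 21) per-pixel fluorescence intensities; labels: 1 = tumor, 0 = normal."""
    clf = make_pipeline(StandardScaler(), SVC(kernel='poly', degree=degree, C=1.0))
    return clf.fit(spectra, labels)

# usage: clf = train_tumor_classifier(train_spectra, train_labels)
#        tumor_map = clf.predict(image_pixels).reshape(height, width)
```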

  5. a Novel 3d Intelligent Fuzzy Algorithm Based on Minkowski-Clustering

    NASA Astrophysics Data System (ADS)

    Toori, S.; Esmaeily, A.

    2017-09-01

    Assessing and monitoring the state of the earth surface is a key requirement for global change research. In this paper, we propose a new consensus fuzzy clustering algorithm that is based on the Minkowski distance. This research concentrates on Tehran's vegetation mass and its changes during 29 years using remote sensing technology. The main purpose of this research is to evaluate the changes in vegetation mass using a new process that combines intelligent NDVI fuzzy clustering with a Minkowski distance operation. The dataset includes images from Landsat 8 and Landsat TM, from 1989 to 2016. For each year three images of three continuous days were used to identify vegetation impact and recovery. The result was a 3D NDVI image, with one dimension for each day's NDVI. The next step was the classification procedure, which is a complicated process of categorizing pixels into a finite number of separate classes based on their data values. If a pixel satisfies a certain set of standards, the pixel is allocated to the class that corresponds to those criteria. This method is less sensitive to noise and can integrate solutions from multiple samples of data or attributes for processing data in the processing industry. The result was a fuzzy one-dimensional image. This image was also computed for the next 28 years. The classification was done in both specified urban and natural park areas of Tehran. Experiments showed that our method worked better in classifying image pixels in comparison with the standard classification methods.

  6. Digital classification of Landsat data for vegetation and land-cover mapping in the Blackfoot River watershed, southeastern Idaho

    USGS Publications Warehouse

    Pettinger, L.R.

    1982-01-01

    This paper documents the procedures, results, and final products of a digital analysis of Landsat data used to produce a vegetation and landcover map of the Blackfoot River watershed in southeastern Idaho. Resource classes were identified at two levels of detail: generalized Level I classes (for example, forest land and wetland) and detailed Levels II and III classes (for example, conifer forest, aspen, wet meadow, and riparian hardwoods). Training set statistics were derived using a modified clustering approach. Environmental stratification that separated uplands from lowlands improved discrimination between resource classes having similar spectral signatures. Digital classification was performed using a maximum likelihood algorithm. Classification accuracy was determined on a single-pixel basis from a random sample of 25-pixel blocks. These blocks were transferred to small-scale color-infrared aerial photographs, and the image area corresponding to each pixel was interpreted. Classification accuracy, expressed as percent agreement of digital classification and photo-interpretation results, was 83.0 ± 2.1 percent (0.95 probability level) for generalized (Level I) classes and 52.2 ± 2.8 percent (0.95 probability level) for detailed (Levels II and III) classes. After the classified images were geometrically corrected, two types of maps were produced of Level I and Levels II and III resource classes: color-coded maps at a 1:250,000 scale, and flatbed-plotter overlays at a 1:24,000 scale. The overlays are more useful because of their larger scale, familiar format to users, and compatibility with other types of topographic and thematic maps of the same scale.

  7. Mediterranean Land Use and Land Cover Classification Assessment Using High Spatial Resolution Data

    NASA Astrophysics Data System (ADS)

    Elhag, Mohamed; Boteva, Silvena

    2016-10-01

    Landscape fragmentation is noticeably present in Mediterranean regions and imposes substantial complications on several satellite image classification methods. To some extent, high spatial resolution data were able to overcome such complications. For better classification performance in Land Use Land Cover (LULC) mapping, the current research compares different classification methods for LULC mapping using the Sentinel-2 satellite as a source of high-spatial-resolution data. Both pixel-based and object-based classification algorithms were assessed; the pixel-based approach employs the Maximum Likelihood (ML), Artificial Neural Network (ANN), and Support Vector Machine (SVM) algorithms, while the object-based classification uses the Nearest Neighbour (NN) classifier. A Stratified Masking Process (SMP) that integrates a ranking process within the classes based on spectral fluctuation of the sum of the training and testing sites was implemented. An analysis of the overall and individual accuracy of the classification results of all four methods reveals that the SVM classifier was the most efficient overall, distinguishing most of the classes with the highest accuracy. NN succeeded in dealing with artificial surface classes in general, while agriculture area classes and forest and semi-natural area classes were segregated successfully with SVM. Furthermore, a comparative analysis indicates that the conventional classification method yielded better accuracy results overall than the SMP method with both classifiers used, ML and SVM.

  8. Object oriented classification of high resolution data for inventory of horticultural crops

    NASA Astrophysics Data System (ADS)

    Hebbar, R.; Ravishankar, H. M.; Trivedi, S.; Subramoniam, S. R.; Uday, R.; Dadhwal, V. K.

    2014-11-01

    High resolution satellite images are associated with large variance and thus, per pixel classifiers often result in poor accuracy especially in delineation of horticultural crops. In this context, object oriented techniques are powerful and promising methods for classification. In the present study, a semi-automatic object oriented feature extraction model has been used for delineation of horticultural fruit and plantation crops using Erdas Objective Imagine. Multi-resolution data from Resourcesat LISS-IV and Cartosat-1 have been used as source data in the feature extraction model. Spectral and textural information along with NDVI were used as inputs for generation of Spectral Feature Probability (SFP) layers using sample training pixels. The SFP layers were then converted into raster objects using threshold and clump function resulting in pixel probability layer. A set of raster and vector operators was employed in the subsequent steps for generating thematic layer in the vector format. This semi-automatic feature extraction model was employed for classification of major fruit and plantations crops viz., mango, banana, citrus, coffee and coconut grown under different agro-climatic conditions. In general, the classification accuracy of about 75-80 per cent was achieved for these crops using object based classification alone and the same was further improved using minimal visual editing of misclassified areas. A comparison of on-screen visual interpretation with object oriented approach showed good agreement. It was observed that old and mature plantations were classified more accurately while young and recently planted ones (3 years or less) showed poor classification accuracy due to mixed spectral signature, wider spacing and poor stands of plantations. The results indicated the potential use of object oriented approach for classification of high resolution data for delineation of horticultural fruit and plantation crops. The present methodology is applicable at local levels and future development is focused on up-scaling the methodology for generation of fruit and plantation crop maps at regional and national level which is important for creation of database for overall horticultural crop development.

  9. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation Technology. Classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed. These methods are: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters like the mean vector, texture, and NDVI, and spatial/shape parameters like aspect ratio, convexity, solidity, roundness, and orientation are calculated for each region; finally the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This shows that object-oriented methods can improve classification accuracy since they utilize information and features from both the point and the neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion divides all bands of the image into different groups initially, and extracts features from every group according to the properties of each group. Three levels of information fusion, data level fusion, feature level fusion, and decision level fusion, are used for HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification. In order to promote the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.

  10. Extraction of Shrimp Ponds Using Object Oriented Classification vis-a-vis Pixel Based Classification

    DTIC Science & Technology

    2004-11-01

    Proceedings of the 25th Asian Conference on Remote Sensing (ACRS 2004), held in Chiang Mai, Thailand, 22-26 November 2004. Copyrighted; Government Purpose Rights.

  11. Estimation of a cover-type change matrix from error-prone data

    Treesearch

    Steen Magnussen

    2009-01-01

    Coregistration and classification errors seriously compromise per-pixel estimates of land cover change. A more robust estimation of change is proposed in which adjacent pixels are grouped into 3x3 clusters and treated as a unit of observation. A complete change matrix is recovered in a two-step process. The diagonal elements of a change matrix are recovered from...

  12. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification.

    PubMed

    Soares, João V B; Leandro, Jorge J G; Cesar Júnior, Roberto M; Jelinek, Herbert F; Cree, Michael J

    2006-09-01

    We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE (Staal et al., 2004) and STARE (Hoover et al., 2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, slightly superior to that presented by state-of-the-art approaches. We are making our implementation available as open-source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods.
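
    The pipeline above (multi-scale Gabor responses plus intensity, classified with a Bayes rule over per-class Gaussian mixture likelihoods) can be sketched as follows. This is an illustrative example, not the authors' MATLAB implementation; the image, labels, frequencies and mixture sizes are synthetic stand-ins.

```python
# Pixel features from multi-scale Gabor responses, classified with a Bayesian
# rule using per-class Gaussian mixture likelihoods.
import numpy as np
from skimage.filters import gabor
from sklearn.mixture import GaussianMixture

def pixel_features(image, frequencies=(0.1, 0.2, 0.4),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    feats = [image]
    for f in frequencies:
        responses = [np.hypot(*gabor(image, frequency=f, theta=t)) for t in thetas]
        feats.append(np.max(responses, axis=0))   # strongest orientation per scale
    return np.stack(feats, axis=-1).reshape(-1, len(frequencies) + 1)

rng = np.random.default_rng(1)
image = rng.random((64, 64))
labels = (rng.random(64 * 64) > 0.9).astype(int)  # stand-in vessel/non-vessel labels

X = pixel_features(image)
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(X[labels == c]) for c in (0, 1)}
priors = {c: np.mean(labels == c) for c in (0, 1)}

# choose the class maximising log-likelihood + log-prior
scores = np.stack([gmms[c].score_samples(X) + np.log(priors[c]) for c in (0, 1)], axis=1)
pred = scores.argmax(axis=1)
print("fraction labelled vessel:", pred.mean())
```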

  13. A simple and effective method for filling gaps in Landsat ETM+ SLC-off images

    USGS Publications Warehouse

    Chen, Jin; Zhu, Xiaolin; Vogelmann, James E.; Gao, Feng; Jin, Suming

    2011-01-01

    The scan-line corrector (SLC) of the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor failed in 2003, resulting in about 22% of the pixels per scene not being scanned. The SLC failure has seriously limited the scientific applications of ETM+ data. While there have been a number of methods developed to fill in the data gaps, each method has shortcomings, especially for heterogeneous landscapes. Based on the assumption that the same-class neighboring pixels around the un-scanned pixels have similar spectral characteristics, and that these neighboring and un-scanned pixels exhibit similar patterns of spectral differences between dates, we developed a simple and effective method to interpolate the values of the pixels within the gaps. We refer to this method as the Neighborhood Similar Pixel Interpolator (NSPI). Simulated and actual SLC-off ETM+ images were used to assess the performance of the NSPI. Results indicate that NSPI can restore the value of un-scanned pixels very accurately, and that it works especially well in heterogeneous regions. In addition, it can work well even if there is a relatively long time interval or significant spectral changes between the input and target image. The filled images appear reasonably spatially continuous without obvious striping patterns. Supervised classification using the maximum likelihood algorithm was done on both gap-filled simulated SLC-off data and the original "gap free" data set, and it was found that classification results, including accuracies, were very comparable. This indicates that gap-filled products generated by NSPI will have relevance to the user community for various land cover applications. In addition, the simple principle and high computational efficiency of NSPI will enable processing large volumes of SLC-off ETM+ data.
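
    The neighborhood-similarity idea behind NSPI can be illustrated with a simplified, single-band sketch: a gap pixel is predicted from nearby valid pixels that look most similar in a gap-free reference image, transferring the temporal difference observed at those neighbors. This is a conceptual stand-in, not the published NSPI algorithm, and the image pair is synthetic.

```python
# Simplified neighborhood-similar-pixel gap filling (single band).
import numpy as np

def fill_gaps(target, gaps, reference, window=7, k=10):
    half = window // 2
    filled = target.copy()
    for r, c in zip(*np.nonzero(gaps)):
        r0, r1 = max(0, r - half), min(target.shape[0], r + half + 1)
        c0, c1 = max(0, c - half), min(target.shape[1], c + half + 1)
        valid = ~gaps[r0:r1, c0:c1]
        if not valid.any():
            continue
        ref_win, tgt_win = reference[r0:r1, c0:c1], target[r0:r1, c0:c1]
        # similarity = spectral closeness to the gap pixel in the reference image
        diff = np.abs(ref_win[valid] - reference[r, c])
        idx = np.argsort(diff)[:k]
        # neighbors' target values, shifted by the reference-image difference
        filled[r, c] = np.mean(tgt_win[valid][idx] + (reference[r, c] - ref_win[valid][idx]))
    return filled

# toy demonstration with a synthetic image pair and a vertical gap stripe
rng = np.random.default_rng(2)
reference = rng.random((50, 50))
target = reference * 0.8 + 0.1              # a simple "temporal change"
gaps = np.zeros_like(target, dtype=bool)
gaps[:, 20:23] = True
restored = fill_gaps(np.where(gaps, np.nan, target), gaps, reference)
print("max abs error in gap:", np.nanmax(np.abs(restored[gaps] - target[gaps])))
```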

  14. Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest

    NASA Astrophysics Data System (ADS)

    Feng, W.; Sui, H.; Chen, X.

    2018-04-01

    Studies based on object-based image analysis (OBIA), representing the paradigm shift in change detection (CD), have achieved remarkable progress in the last decade; their aim is to develop more intelligent interpretation and analysis methods. The prediction performance and stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and integrated forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search for interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted and subjected to fuzzy c-means (FCM) clustering to obtain the pixel-level pre-classification result, which serves as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, the change possibility of each super-pixel is calculated, and the changed and unchanged super-pixels that serve as training samples are automatically selected. The spectral and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF to these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in CD accuracy and confirm the feasibility and effectiveness of the proposed approach.

  15. Skin problems in individuals with lower-limb loss: literature review and proposed classification system.

    PubMed

    Bui, Kelly M; Raugi, Gregory J; Nguyen, Viet Q; Reiber, Gayle E

    2009-01-01

    Problems with skin integrity can disrupt daily prosthesis use and lead to decreased mobility and function in individuals with lower-limb loss. This study reviewed the literature to examine how skin problems are defined and diagnosed and to identify the prevalence and types of skin problems in individuals with lower-limb loss. We searched the literature for terms related to amputation and skin problems. We identified 777 articles. Of the articles, 90 met criteria for review of research methodology. Four clinical studies met our selection criteria. The prevalence rate of skin problems was 15% to 41%. The most commonly reported skin problems were wounds, abscesses, and blisters. Given the lack of standardized definitions of skin problems on residual limbs, we conclude this article with a system for classification.

  16. Orientation selectivity based structure for texture classification

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Lin, Weisi; Shi, Guangming; Zhang, Yazhong; Lu, Liu

    2014-10-01

    Local structure, e.g., the local binary pattern (LBP), is widely used in texture classification. However, LBP is too sensitive to disturbance. In this paper, we introduce a novel structure for texture classification. Research in cognitive neuroscience indicates that the primary visual cortex presents remarkable orientation selectivity for visual information extraction. Inspired by this, we investigate the orientation similarities among neighboring pixels and propose an orientation-selectivity-based pattern for local structure description. Experimental results on texture classification demonstrate that the proposed structure descriptor is quite robust to disturbance.

  17. Use of Landsat-derived temporal profiles for corn-soybean feature extraction and classification

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Carnes, J. G.; Austin, W. W.

    1982-01-01

    A physical model derived from multitemporal-multispectral data acquired by Landsat satellites is presented to describe crop behavior and new crop-specific features. A feasibility study over 40 sites was performed to classify segment pixels into corn, soybeans, and others using the new features and a linear classifier. The results agree well with other existing methods, and it is shown that multitemporal multispectral scanner data can be transformed into two parameters that are closely related to the target of interest and thus can be used in classification. The approach is less time intensive than other techniques and requires labeling of only pure pixels.

  18. Application of a neural network for reflectance spectrum classification

    NASA Astrophysics Data System (ADS)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that convolutional neural networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the four-dimensional data into two dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, the most popular neural networks, such as VGG, GoogLeNet and AlexNet, are trained on RGB spatial image data. Our approach aims to build a directional-reflectance-spectrum-based neural network to help us understand the problem from another perspective. At the end of this paper, we compare several classifiers and analyze the trade-offs among neural network parameters.

  19. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield

    PubMed Central

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-01-01

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but rather the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
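
    Quantile mapping itself reduces to empirical CDF matching: each satellite value is replaced by the gauge value at the same quantile of the reference distributions. The sketch below is a simplified illustration with synthetic data, not the authors' processing chain for the Guiana Shield.

```python
# Simplified quantile-mapping bias correction via empirical CDF matching.
import numpy as np

def quantile_map(spp_values, spp_reference, gauge_reference):
    """Replace each SPP value by the gauge value at the same empirical quantile."""
    spp_sorted = np.sort(spp_reference)
    gauge_sorted = np.sort(gauge_reference)
    quantiles = np.searchsorted(spp_sorted, spp_values) / len(spp_sorted)
    return np.quantile(gauge_sorted, np.clip(quantiles, 0, 1))

rng = np.random.default_rng(3)
gauge = rng.gamma(shape=2.0, scale=5.0, size=5000)      # "observed" daily rainfall
spp = gauge * 1.3 + rng.normal(0, 2, size=5000)         # biased satellite estimate
corrected = quantile_map(spp, spp_reference=spp, gauge_reference=gauge)
print("bias before: %.2f  after: %.2f" % (spp.mean() - gauge.mean(),
                                          corrected.mean() - gauge.mean()))
```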

  20. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    PubMed

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but rather the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.

  1. Temporal expansion of annual crop classification layers for the CONUS using the C5 decision tree classifier

    USGS Publications Warehouse

    Friesz, Aaron M.; Wylie, Bruce K.; Howard, Daniel M.

    2017-01-01

    Crop cover maps have become widely used in a range of research applications. Multiple crop cover maps have been developed to suit particular research interests. The National Agricultural Statistics Service (NASS) Cropland Data Layers (CDL) are a series of commonly used crop cover maps for the conterminous United States (CONUS) that span from 2008 to 2013. In this investigation, we sought to contribute to the availability of consistent CONUS crop cover maps by extending the temporal coverage of the NASS CDL archive back eight additional years to 2000, creating annual NASS CDL-like crop cover maps derived from a classification tree model algorithm. We used over 11 million records to train a classification tree algorithm and develop a crop classification model (CCM). The model was used to create crop cover maps for the CONUS for years 2000–2013 at 250 m spatial resolution. The CCM and the maps for years 2008–2013 were assessed for accuracy relative to resampled NASS CDLs. The CCM performed well against a withheld test data set with a model prediction accuracy of over 90%. The assessment of the crop cover maps indicated that the model performed well spatially, placing crop cover pixels within their known domains; however, the model did show a bias towards the ‘Other’ crop cover class, which caused frequent misclassifications of pixels around the periphery of large crop cover patch clusters and of pixels that form small, sparsely dispersed crop cover patches.
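
    The general pattern (a classification-tree model trained on per-pixel predictors, evaluated on a withheld test set) can be sketched as follows. This is not the authors' C5 model or training data; the three predictor columns and the crop/'Other' labels are illustrative stand-ins.

```python
# Classification-tree sketch for per-pixel crop vs. 'Other' labeling.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 20000
X = np.column_stack([
    rng.normal(0.5, 0.2, n),   # e.g. growing-season NDVI metric
    rng.normal(0.3, 0.1, n),   # e.g. spring reflectance composite
    rng.uniform(0, 1, n),      # e.g. normalized latitude / ecoregion proxy
])
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.1, n) > 0.9).astype(int)  # crop vs. 'Other'

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(max_depth=8, min_samples_leaf=50, random_state=0)
tree.fit(X_tr, y_tr)
print("withheld-test accuracy:", round(tree.score(X_te, y_te), 3))
```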

  2. Classification of breast cancer cytological specimen using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Żejmo, Michał; Kowal, Marek; Korbicz, Józef; Monczak, Roman

    2017-01-01

    The paper presents a deep learning approach for automatic classification of breast tumors based on fine needle cytology. The main aim of the system is to distinguish benign from malignant cases based on microscopic images. The experiment was carried out on cytological samples derived from 50 patients (25 benign cases + 25 malignant cases) diagnosed in the Regional Hospital in Zielona Góra. To classify microscopic images, we used convolutional neural networks (CNNs) of two types: GoogLeNet and AlexNet. Due to the very large size of images of cytological specimens (on average 200000 × 100000 pixels), they were divided into smaller patches of size 256 × 256 pixels. Breast cancer classification is usually based on morphometric features of nuclei; therefore, training and validation patches were selected using a Support Vector Machine (SVM) so that a suitable amount of cell material was depicted. Neural classifiers were tuned using a GPU-accelerated implementation of the gradient descent algorithm. Training error was defined as the cross-entropy classification loss. Classification accuracy was defined as the percentage ratio of successfully classified validation patches to the total number of validation patches. The best accuracy rate of 83% was obtained by the GoogLeNet model. We observed that more of the misclassified patches belonged to malignant cases.

  3. Cupping artifact correction and automated classification for high-resolution dedicated breast CT images.

    PubMed

    Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei

    2012-10-01

    To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors' classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors' automatic classification and manual segmentation were 91.6% ± 2.0%. A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution.
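
    The fat/glandular separation step above relies on fuzzy C-means clustering of pixel values. Below is a minimal generic FCM sketch, not the authors' modified variant; the CT-like pixel values and cluster count are synthetic stand-ins.

```python
# Minimal fuzzy C-means (FCM) clustering of 1-D pixel values into two tissue classes.
import numpy as np

def fcm(values, n_clusters=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    x = values.reshape(-1, 1).astype(float)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                    # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]   # weighted cluster means
        d = np.abs(x - centers.T) + 1e-12                # pixel-to-center distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)         # membership update
    return u, centers.ravel()

# toy "breast CT" values: a mixture of fat-like and glandular-like intensities
rng = np.random.default_rng(5)
pixels = np.concatenate([rng.normal(-100, 20, 4000), rng.normal(40, 25, 2000)])
memberships, centers = fcm(pixels)
labels = memberships.argmax(axis=1)
print("cluster centers (HU-like units):", np.round(centers, 1))
```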

  4. Cupping artifact correction and automated classification for high-resolution dedicated breast CT images

    PubMed Central

    Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei

    2012-01-01

    Purpose: To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. Methods: The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors’ classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. Results: The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors’ automatic classification and manual segmentation were 91.6% ± 2.0%. Conclusions: A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution. PMID:23039675

  5. Determination of target detection limits in hyperspectral data using band selection and dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Gross, W.; Boehler, J.; Twizer, K.; Kedem, B.; Lenz, A.; Kneubuehler, M.; Wellig, P.; Oechslin, R.; Schilling, H.; Rotman, S.; Middelmann, W.

    2016-10-01

    Hyperspectral remote sensing data can be used for civil and military applications to robustly detect and classify target objects. The high spectral resolution of hyperspectral data can compensate for the comparatively low spatial resolution, which allows for detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, affecting and limiting data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection algorithms and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data are used in combination with linear target simulation procedures to create a representative range of target-to-background ratios for evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate transferability of band selection when using different sensors. The same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels from the target materials are extracted and used to simulate mixed pixels with selected background materials. Target signatures are linearly combined with different background materials in varying ratios. The commonly used Adaptive Coherence Estimator (ACE) classification algorithm is used to compare the detection limit for the original data with several band selection and data reduction strategies. The evaluation of the classification results is done by assuming a fixed false alarm ratio and calculating the mean target-to-background ratio of correctly detected pixels. The results allow drawing conclusions about specific band combinations for certain target and background combinations. Additionally, generally useful wavelength ranges are determined and the optimal number of principal components is analyzed.
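
    The ACE statistic used above can be written as the squared, whitened correlation between a mean-removed pixel and the target signature. The numpy sketch below illustrates this; the background statistics, target signature, fill fraction and test pixels are synthetic stand-ins, not the AISA data from the study.

```python
# Adaptive Coherence Estimator (ACE) score on synthetic hyperspectral pixels.
import numpy as np

def ace(pixels, target, background):
    """ACE score per pixel: squared whitened correlation with the target signature."""
    mu = background.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(background, rowvar=False))
    s = target - mu
    x = pixels - mu
    num = (x @ cov_inv @ s) ** 2
    den = (s @ cov_inv @ s) * np.einsum('ij,jk,ik->i', x, cov_inv, x)
    return num / den

rng = np.random.default_rng(6)
bands = 30
background = rng.normal(size=(5000, bands))
target = rng.normal(size=bands) * 3
mixed = 0.2 * target + 0.8 * background[:100]      # 20% target fill, subpixel mixture
scores_bg = ace(background[:100], target, background)
scores_mix = ace(mixed, target, background)
print("mean ACE  background: %.3f  mixed pixels: %.3f" % (scores_bg.mean(), scores_mix.mean()))
```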

  6. An assessment of commonly employed satellite-based remote sensors for mapping mangrove species in Mexico using an NDVI-based classification scheme.

    PubMed

    Valderrama-Landeros, L; Flores-de-Santiago, F; Kovacs, J M; Flores-Verdugo, F

    2017-12-14

    Optimizing the classification accuracy of a mangrove forest is of utmost importance for conservation practitioners. Mangrove forest mapping using satellite-based remote sensing techniques is by far the most common method of classification currently used, given the logistical difficulties of field endeavors in these forested wetlands. However, there is now an abundance of options from which to choose in regard to satellite sensors, which has led to substantially different estimations of mangrove forest location and extent, with particular concern for degraded systems. The objective of this study was to assess the accuracy of mangrove forest classification using different remotely sensed data sources (i.e., Landsat-8, SPOT-5, Sentinel-2, and WorldView-2) for a system located along the Pacific coast of Mexico. Specifically, we examined a stressed semiarid mangrove forest which offers a variety of conditions such as dead areas, degraded stands, healthy mangroves, and very dense mangrove island formations. The results indicated that Landsat-8 (30 m per pixel) had the lowest overall accuracy at 64% and that WorldView-2 (1.6 m per pixel) had the highest at 93%. Moreover, the SPOT-5 and the Sentinel-2 classifications (10 m per pixel) were very similar, having accuracies of 75 and 78%, respectively. In comparison to WorldView-2, the other sensors overestimated the extent of Laguncularia racemosa and underestimated the extent of Rhizophora mangle. When considering such types of sensors, the higher spatial resolution can be particularly important in mapping small mangrove islands that often occur in degraded mangrove systems.

  7. Efficacy measures associated to a plantar pressure based classification system in diabetic foot medicine.

    PubMed

    Deschamps, Kevin; Matricali, Giovanni Arnoldo; Desmet, Dirk; Roosen, Philip; Keijsers, Noel; Nobels, Frank; Bruyninckx, Herman; Staes, Filip

    2016-09-01

    The concept of 'classification' has, as with many other diseases, been found to be fundamental in the field of diabetic medicine. In the current study, we aimed at determining efficacy measures of a recently published plantar pressure-based classification system. Technical efficacy of the classification system was investigated by applying a high-resolution, pixel-level analysis on the normalized plantar pressure pedobarographic fields of the original experimental dataset consisting of 97 patients with diabetes and 33 persons without diabetes. Clinical efficacy was assessed by considering the occurrence of foot ulcers at the plantar aspect of the forefoot in this dataset. Classification efficacy was assessed by determining the classification recognition rate as well as its sensitivity and specificity using cross-validation subsets of the experimental dataset together with a novel cohort of 12 patients with diabetes. Pixel-level comparison of the four groups associated with the classification system highlighted distinct regional differences. Retrospective analysis showed the occurrence of eleven foot ulcers in the experimental dataset since their gait analysis. Eight out of the eleven ulcers developed in a region of the foot which had the highest forces. The overall classification recognition rate exceeded 90% for all cross-validation subsets. Sensitivity and specificity of the four groups associated with the classification system exceeded the 0.7 and 0.8 levels, respectively, in all cross-validation subsets. The results of the current study support the use of the novel plantar pressure-based classification system in diabetic foot medicine. It may particularly serve in communication, diagnosis and clinical decision making. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Supervised pixel classification for segmenting geographic atrophy in fundus autofluorescence images

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Medioni, Gerard G.; Hernandez, Matthias; Sadda, SriniVas R.

    2014-03-01

    Age-related macular degeneration (AMD) is the leading cause of blindness in people over the age of 65. Geographic atrophy (GA) is a manifestation of the advanced or late stage of AMD, which may result in severe vision loss and blindness. Techniques to rapidly and precisely detect and quantify GA lesions would appear to be of considerable value in advancing the understanding of the pathogenesis of GA and the management of GA progression. The purpose of this study is to develop an automated supervised pixel classification approach for segmenting GA, including uni-focal and multi-focal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity (mean and variance) measures, gray-level co-occurrence matrix measures (angular second moment, entropy, and inverse difference moment), and Gaussian filter banks. A k-nearest-neighbor (k-NN) pixel classifier is applied to obtain a GA probability map, representing the likelihood that an image pixel belongs to GA. A voting binary iterative hole filling filter is then applied to fill in the small holes. Sixteen randomly chosen FAF images were obtained from sixteen subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by certified graders. Two-fold cross-validation is applied for the evaluation of the classification performance. The mean Dice similarity coefficients (DSC) between the algorithm- and manually-defined GA regions are 0.84 +/- 0.06 for one test and 0.83 +/- 0.07 for the other, and the area correlations between them are 0.99 (p < 0.05) and 0.94 (p < 0.05), respectively.
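
    The supervised-pixel-classification recipe above (patch-wise intensity statistics and GLCM texture measures feeding a k-NN classifier that outputs a probability map) can be sketched as follows. The FAF image, patch size, labels and feature set here are synthetic toy stand-ins, not the study data or its full feature bank.

```python
# Patch-wise intensity + GLCM features (ASM, entropy, inverse difference moment)
# classified with k-NN to produce a GA probability per patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def patch_features(img, r=4):
    feats, centers = [], []
    for i in range(r, img.shape[0] - r, 2 * r):
        for j in range(r, img.shape[1] - r, 2 * r):
            patch = img[i - r:i + r, j - r:j + r]
            glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256, normed=True)
            p = glcm[:, :, 0, 0]
            asm = graycoprops(glcm, 'ASM')[0, 0]
            idm = graycoprops(glcm, 'homogeneity')[0, 0]       # inverse difference moment
            entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
            feats.append([patch.mean(), patch.var(), asm, idm, entropy])
            centers.append((i, j))
    return np.array(feats), centers

rng = np.random.default_rng(7)
image = (rng.random((128, 128)) * 255).astype(np.uint8)
image[40:90, 40:90] //= 3                 # a darker square as a toy "atrophic" lesion
X, centers = patch_features(image)
y = np.array([1 if 40 <= i < 90 and 40 <= j < 90 else 0 for i, j in centers])

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
prob_map = knn.predict_proba(X)[:, 1]     # likelihood that each patch belongs to GA
print("mean GA probability inside / outside lesion: %.2f / %.2f"
      % (prob_map[y == 1].mean(), prob_map[y == 0].mean()))
```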

  9. Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps

    NASA Astrophysics Data System (ADS)

    Zlinszky, A.; Kania, A.

    2016-06-01

    Instead of assigning every map pixel to a single class, fuzzy classification includes not only the class assigned to each pixel but also the certainty of this class and the alternative possible classes, based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations. The rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles", which visualize the number of pixels in bins according to the probability of the dominant class while also showing the probability of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard-boundary evaluation procedures, support active learning-based iterative classification and can be applied for operational use.

  10. Reduced isothermal feature set for long wave infrared (LWIR) face recognition

    NASA Astrophysics Data System (ADS)

    Donoso, Ramiro; San Martín, Cesar; Hermosilla, Gabriel

    2017-06-01

    In this paper, we introduce a new concept in the thermal face recognition area: isothermal features. These consist of a feature vector built from a thermal signature that depends on the emission of the person's skin and its temperature. A thermal signature is the appearance of the face to infrared sensors and is unique to each person. The infrared face is decomposed into isothermal regions that present the thermal features of the face. Each isothermal region is modeled as circles whose centers are pixels of the image, and the feature vector is composed of the maximum radius of the circles in each isothermal region. This feature vector corresponds to the thermal signature of a person. The face recognition process is built using a modification of the Expectation Maximization (EM) algorithm in conjunction with a proposed probabilistic index for the classification process. Results obtained on an infrared database are compared with typical state-of-the-art techniques and show better performance, especially in scenarios with uncontrolled acquisition conditions.

  11. Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes

    PubMed Central

    Berhane, Tedros M.; Lane, Charles R.; Wu, Qiusheng; Anenkhonov, Oleg A.; Chepinoga, Victor V.; Autrey, Bradley C.; Liu, Hongxing

    2018-01-01

    Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar’s chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection—which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes. PMID:29707381

  12. Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes.

    PubMed

    Berhane, Tedros M; Lane, Charles R; Wu, Qiusheng; Anenkhonov, Oleg A; Chepinoga, Victor V; Autrey, Bradley C; Liu, Hongxing

    2018-01-01

    Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar's chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection, which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes.

  13. Improving urban land use and land cover classification from high-spatial-resolution hyperspectral imagery using contextual information

    USDA-ARS?s Scientific Manuscript database

    In this paper, we propose approaches to improve the pixel-based support vector machine (SVM) classification for urban land use and land cover (LULC) mapping from airborne hyperspectral imagery with high spatial resolution. Class spatial neighborhood relationship is used to correct the misclassified ...

  14. Enhanced Deforestation Mapping in North Korea using Spatial-temporal Image Fusion Method and Phenology-based Index

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Lee, D.

    2017-12-01

    North Korea (the Democratic People's Republic of Korea, DPRK) is known to have some of the most degraded forest in the world. The forest landscape in North Korea is complex and heterogeneous; the major vegetation cover types in the forest are hillside farm, unstocked forest, natural forest, and plateau vegetation. Better classification of these types at high spatial resolution in deforested areas could provide essential information for decisions about forest management priorities and the restoration of deforested areas. For mapping heterogeneous vegetation covers, phenology-based indices help overcome the reflectance confusion that occurs when using single-season images. Coarse-spatial-resolution images can be acquired with a high repetition rate, which is useful for analyzing phenological characteristics, but they may not capture the spatial detail of the land cover mosaic in the region of interest. Previous spatial-temporal fusion methods either captured only the temporal change, or addressed both temporal and spatial change but with low accuracy in heterogeneous landscapes and small patches. In this study, a new spatial-temporal image fusion concept focused on heterogeneous landscapes was proposed to produce images at both fine spatial and fine temporal resolution. We distinguished three types of pixels between the base image and the target image: in the first type, only reflectance changes, caused by phenology; these pixels supply reflectance, shape and texture information. In the second type, both reflectance and the spectrum change in some bands due to phenology, as in rice paddies or farmland; these pixels supply only shape and texture information. In the third type, reflectance and spectrum change because the land cover type itself changes; these pixels provide no information, because the nature of the land cover change in the target image cannot be known. A different prediction method was applied to each type of pixel. Results show that both STARFM and FSDAF predicted with low accuracy for the second type of pixels and for small patches. Classification based on the spatial-temporal image fusion method proposed in this study achieved an overall accuracy of 89.38%, with a corresponding kappa coefficient of 0.87.

  15. SU-E-I-59: Investigation of the Usefulness of a Standard Deviation and Mammary Gland Density as Indexes for Mammogram Classification.

    PubMed

    Takarabe, S; Yabuuchi, H; Morishita, J

    2012-06-01

    To investigate the usefulness of the standard deviation of pixel values in the whole mammary gland region and the percentage of the high-density mammary gland region relative to the whole mammary gland region as features for classifying mammograms into four categories based on the ACR BI-RADS breast composition. We used 36 digital mediolateral oblique view mammograms (18 patients) approved by our IRB. These images were classified into the four breast composition categories by an experienced breast radiologist, and the results of this classification were regarded as the gold standard. First, the whole mammary region of a breast was divided into two regions, a high-density mammary gland region and a low/iso-density mammary gland region, by using a threshold value obtained from the pixel values corresponding to the pectoral muscle region. Then the percentage of the high-density mammary gland region relative to the whole mammary gland region was calculated. In addition, as a new method, the standard deviation of pixel values in the whole mammary gland region was calculated as an index of the intermingling of mammary glands and fat. Finally, all mammograms were classified using the combination of the high-density percentage and the standard deviation of each image. The agreement rate of the classification between our proposed method and the gold standard was 86% (31/36). This result indicates that our method has the potential to classify mammograms. The combination of the standard deviation of pixel values in the whole mammary gland region and the percentage of the high-density mammary gland region relative to the whole mammary gland region can serve as features to classify mammograms based on the ACR BI-RADS breast composition. © 2012 American Association of Physicists in Medicine.
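
    The two features described above are simple to compute once the gland mask and the pectoral-muscle-derived threshold are known. The sketch below shows the arithmetic on synthetic pixel values; the threshold value is a hypothetical stand-in, not the one derived in the study.

```python
# Two mammogram features: std of gland-region pixel values and the
# percentage of the high-density region, given an assumed threshold.
import numpy as np

rng = np.random.default_rng(8)
gland_region = rng.normal(loc=120, scale=30, size=5000)   # pixel values inside the gland mask
pectoral_threshold = 150.0                                 # assumed pectoral-muscle-derived threshold

std_feature = gland_region.std()
high_density_pct = 100.0 * np.mean(gland_region >= pectoral_threshold)
print("std of gland pixels: %.1f   high-density share: %.1f%%" % (std_feature, high_density_pct))
```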

  16. Epithelial cancer detection by oblique-incidence optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Garcia-Uribe, Alejandro; Balareddy, Karthik C.; Zou, Jun; Wang, Kenneth K.; Duvic, Madeleine; Wang, Lihong V.

    2009-02-01

    This paper presents a study on non-invasive detection of two common epithelial cancers (skin and esophagus) based on oblique incidence diffuse reflectance spectroscopy (OIDRS). An OIDRS measurement system, which combines fiber optics and MEMS technologies, was developed. In our pilot studies, a total of 137 cases were measured in vivo for skin cancer detection and a total of 20 biopsy samples were measured ex vivo for esophageal cancer detection. To automatically differentiate cancerous cases from benign ones, a statistical software classification program was also developed. Overall classification accuracies of 90% and 100% were achieved for skin and esophageal cancer classification, respectively.

  17. Vulnerable land ecosystems classification using spatial context and spectral indices

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martín, Consuelo; Marcello, Javier

    2017-10-01

    Natural habitats are exposed to growing pressure due to intensification of land use and tourism development. Thus, obtaining information on the vegetation is necessary for conservation and management projects. In this context, remote sensing is an important tool for monitoring and managing habitats, with classification being a crucial stage. The majority of image classification techniques are based on the pixel-based approach. An alternative is the object-based (OBIA) approach, in which a prior segmentation step merges image pixels to create objects that are then classified. In addition, improved results may be gained by incorporating additional spatial information and specific spectral indices into the classification process. The main goal of this work was to implement and assess object-based classification techniques on very-high-resolution imagery, incorporating spectral indices and contextual spatial information into the classification models. The study area was Teide National Park in the Canary Islands (Spain), using Worldview-2 orthoready imagery. In the classification model, two common indices were selected, the Normalized Difference Vegetation Index (NDVI) and the Optimized Soil Adjusted Vegetation Index (OSAVI), as well as two specific Worldview-2 sensor indices, the Worldview Vegetation Index and the Worldview Soil Index. To include contextual information, Grey Level Co-occurrence Matrices (GLCM) were used. The classification was performed by training a Support Vector Machine with a sufficient and representative number of vegetation samples (Spartocytisus supranubius, Pterocephalus lasiospermus, Descurainia bourgaeana and Pinus canariensis) as well as urban, road and bare soil classes. Confusion matrices were computed to evaluate the results of each classification model; the highest overall accuracy (90.07%) was obtained by combining both Worldview indices with GLCM dissimilarity.
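
    Part of this pipeline, computing NDVI and OSAVI from red/NIR reflectances and training an SVM on the resulting features, can be sketched as follows. The band values and class labels are synthetic stand-ins; the WorldView-specific indices and GLCM textures used in the study are omitted here.

```python
# NDVI/OSAVI feature computation plus a Support Vector Machine classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n = 4000
nir = rng.uniform(0.1, 0.6, n)
red = rng.uniform(0.02, 0.3, n)

ndvi = (nir - red) / (nir + red)
osavi = (nir - red) / (nir + red + 0.16)                 # soil-adjusted variant
y = (ndvi + rng.normal(0, 0.05, n) > 0.4).astype(int)    # toy vegetation/soil labels

X = np.column_stack([ndvi, osavi])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svm = SVC(kernel='rbf', C=10, gamma='scale').fit(X_tr, y_tr)
print("overall accuracy:", round(svm.score(X_te, y_te), 3))
```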

  18. Gynecomastia Classification for Surgical Management: A Systematic Review and Novel Classification System.

    PubMed

    Waltho, Daniel; Hatchell, Alexandra; Thoma, Achilleas

    2017-03-01

    Gynecomastia is a common deformity of the male breast, where certain cases warrant surgical management. There are several surgical options, which vary depending on the breast characteristics. To guide surgical management, several classification systems for gynecomastia have been proposed. A systematic review was performed to (1) identify all classification systems for the surgical management of gynecomastia, and (2) determine the adequacy of these classification systems to appropriately categorize the condition for surgical decision-making. The search yielded 1012 articles, and 11 articles were included in the review. Eleven classification systems in total were ascertained, and a total of 10 unique features were identified: (1) breast size, (2) skin redundancy, (3) breast ptosis, (4) tissue predominance, (5) upper abdominal laxity, (6) breast tuberosity, (7) nipple malposition, (8) chest shape, (9) absence of sternal notch, and (10) breast skin elasticity. On average, classification systems included two or three of these features. Breast size and ptosis were the most commonly included features. Based on their review of the current classification systems, the authors believe the ideal classification system should be universal and cater to all causes of gynecomastia; be surgically useful and easy to use; and should include a comprehensive set of clinically appropriate patient-related features, such as breast size, breast ptosis, tissue predominance, and skin redundancy. None of the current classification systems appears to fulfill these criteria.

  19. New DTM Extraction Approach from Airborne Images Derived Dsm

    NASA Astrophysics Data System (ADS)

    Mousa, Y. A.; Helmholz, P.; Belton, D.

    2017-05-01

    In this work, a new filtering approach is proposed for fully automatic Digital Terrain Model (DTM) extraction from Digital Surface Models (DSMs) derived from very-high-resolution airborne images. Our approach is an enhancement of the existing Multi-directional and Slope Dependent (MSD) DTM extraction algorithm, proposing parameters that are more reliable for the selection of ground pixels and for the pixelwise classification. To achieve this, four main steps are implemented. Firstly, 8 well-distributed scanlines are used to search for minima as ground points within a pre-defined filtering window size; these selected ground points are stored with their positions on a 2D surface to create a network of ground points. Then, an initial DTM is created using an interpolation method to fill the gaps in the 2D surface. Afterwards, a pixel-to-pixel comparison between the initial DTM and the original DSM is performed, classifying ground and non-ground pixels by applying a vertical height threshold. Finally, the pixels classified as non-ground are removed and the remaining holes are filled. The approach is evaluated using the Vaihingen benchmark dataset provided by ISPRS working group III/4. The evaluation compares our approach, denoted the Network of Ground Points (NGPs) algorithm, with the DTM created with MSD as well as with a reference DTM generated from LiDAR data. The results show that our proposed approach outperforms the MSD approach.

  20. Land cover mapping at sub-pixel scales

    NASA Astrophysics Data System (ADS)

    Makido, Yasuyo Kato

    One of the biggest drawbacks of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial detail depicted in an image. Fine spatial resolution images from satellite sensors such as IKONOS and QuickBird are now available. However, these images are not suitable for large-area studies, since a single image covers only a small area and is therefore costly to use at scale. Much research has focused on extracting land cover types at the sub-pixel scale, but little research has addressed the spatial allocation of land cover types within a pixel. This study is devoted to the development of new algorithms for predicting land cover distribution from remotely sensed imagery at the sub-pixel level. The "pixel-swapping" optimization algorithm, which was proposed by Atkinson for predicting sub-pixel land cover distribution, is investigated in this study. Two limitations of this method, the arbitrary spatial range value and the arbitrary exponential model of spatial autocorrelation, are assessed. Various weighting functions, as alternatives to the exponential model, are evaluated in order to derive the optimum weighting function. Two different simulation models were employed to develop spatially autocorrelated binary class maps. For all tested weighting models (Gaussian, exponential, and IDW), the pixel-swapping method improved classification accuracy compared with the initial random allocation of sub-pixels. However, the results suggested that equal weighting could be used to increase accuracy and sub-pixel spatial autocorrelation instead of these more complex models of spatial structure. New algorithms for modeling the spatial distribution of multiple land cover classes at sub-pixel scales are then developed and evaluated. Three methods are examined: sequential categorical swapping, simultaneous categorical swapping, and simulated annealing. These three methods are applied to classified Landsat ETM+ data that have been resampled to 210 meters. The results suggested that the simultaneous method can be considered the optimum method in terms of accuracy and computation time. The case study employs remote sensing imagery from the following sites: tropical forests in Brazil and a temperate mixed land-cover mosaic in East China. Sub-areas of both sites are used to examine how the characteristics of the landscape affect the performance of the optimum technique. Three measures, Moran's I, mean patch size (MPS), and patch size standard deviation (STDEV), are used to characterize the landscape. All results suggested that this technique can increase classification accuracy relative to traditional hard classification. The methods developed in this study can benefit researchers who employ coarse remote sensing imagery but are interested in detailed landscape information. In many cases, the satellite sensor that provides large spatial coverage has insufficient spatial detail to identify landscape patterns. Application of the super-resolution technique described in this dissertation could potentially solve this problem by providing detailed land cover predictions from coarse-resolution satellite sensor imagery.
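
    The pixel-swapping idea can be illustrated with a small binary-map sketch: class proportions inside each coarse cell are kept fixed while sub-pixels are swapped to increase local spatial autocorrelation. This is a conceptual, equal-weight simplification (one of the variants the abstract discusses), not Atkinson's published formulation; the grid size, window radius and iteration count are arbitrary stand-ins.

```python
# Equal-weight pixel swapping on a binary sub-pixel map, preserving per-cell class counts.
import numpy as np

def attractiveness(z, r=2):
    """Per-pixel sum of neighboring class values inside a (2r+1)x(2r+1) window."""
    padded = np.pad(z.astype(float), r)
    att = np.zeros(z.shape, dtype=float)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            if di or dj:
                att += padded[r + di:r + di + z.shape[0], r + dj:r + dj + z.shape[1]]
    return att

def pixel_swap(z, cell=5, n_iter=50):
    z = z.copy()
    for _ in range(n_iter):
        att = attractiveness(z)
        for i in range(0, z.shape[0], cell):
            for j in range(0, z.shape[1], cell):
                zb = z[i:i + cell, j:j + cell]          # one coarse cell (view into z)
                ab = att[i:i + cell, j:j + cell]
                ones, zeros = np.argwhere(zb == 1), np.argwhere(zb == 0)
                if len(ones) == 0 or len(zeros) == 0:
                    continue
                a_ones = ab[ones[:, 0], ones[:, 1]]
                a_zeros = ab[zeros[:, 0], zeros[:, 1]]
                if a_zeros.max() > a_ones.min():        # swapping increases clustering
                    zb[tuple(ones[np.argmin(a_ones)])] = 0
                    zb[tuple(zeros[np.argmax(a_zeros)])] = 1
    return z

rng = np.random.default_rng(10)
initial = (rng.random((50, 50)) < 0.3).astype(int)      # random initial sub-pixel allocation
result = pixel_swap(initial)
print("class-1 count preserved:", int(initial.sum()) == int(result.sum()))
```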

  1. Object-based delineation and classification of alluvial fans by application of mean-shift segmentation and support vector machines

    NASA Astrophysics Data System (ADS)

    Pipaud, Isabel; Lehmkuhl, Frank

    2017-09-01

    In the field of geomorphology, automated extraction and classification of landforms is one of the most active research areas. Until the late 2000s, this task was primarily tackled using pixel-based approaches. As these methods consider pixels and pixel neighborhoods as the sole basic entities for analysis, they cannot account for the irregular boundaries of real-world objects. Object-based analysis frameworks emerging from the field of remote sensing have been proposed as an alternative approach, and were successfully applied in case studies falling in the domains of both general and specific geomorphology. In this context, the a-priori selection of scale parameters or bandwidths is crucial for the segmentation result, because inappropriate parametrization will result either in over-segmentation or in insufficient segmentation. In this study, we describe a novel supervised method for delineation and classification of alluvial fans, and assess its applicability using an SRTM 1 arc-second DEM scene depicting a section of the north-eastern Mongolian Altai, located in northwest Mongolia. The approach is premised on the application of mean-shift segmentation and the use of a one-class support vector machine (SVM) for classification. To consider variability in terms of alluvial fan dimension and shape, segmentation is performed repeatedly for different weightings of the incorporated morphometric parameters as well as different segmentation bandwidths. The final classification layer is obtained by selecting, for each real-world object, the most appropriate segmentation result according to fuzzy membership values derived from the SVM classification. Our results show that mean-shift segmentation and SVM-based classification provide an effective framework for delineation and classification of a particular landform. Variable bandwidths and terrain parameter weightings were identified as being crucial for consideration of intra-class variability, and, in turn, for a constantly high segmentation quality. Our analysis further reveals that incorporation of morphometric parameters quantifying specific morphological aspects of a landform is indispensable for developing an accurate classification scheme. Alluvial fans exhibiting accentuated composite morphologies were identified as a major challenge for automatic delineation, as they cannot be fully captured by a single segmentation run. There is, however, a high probability that this shortcoming can be overcome by enhancing the presented approach with a routine that merges fan sub-entities based on their spatial relationships.

  2. Using texture analysis to improve per-pixel classification of very high resolution images for mapping plastic greenhouses

    NASA Astrophysics Data System (ADS)

    Agüera, Francisco; Aguilar, Fernando J.; Aguilar, Manuel A.

    The area occupied by plastic-covered greenhouses has undergone rapid growth in recent years, currently exceeding 500,000 ha worldwide. Due to the vast amount of input (water, fertilisers, fuel, etc.) required, and output of different agricultural wastes (vegetable, plastic, chemical, etc.), the environmental impact of this type of production system can be serious if not accompanied by sound and sustainable territorial planning. For this, the new generation of satellites which provide very high resolution imagery, such as QuickBird and IKONOS, can be useful. In this study, one QuickBird and one IKONOS satellite image have been used to cover the same area under similar circumstances. The aim of this work was an exhaustive comparison of QuickBird vs. IKONOS images in land-cover detection. In terms of plastic greenhouse mapping, comparative tests were designed and implemented, each with separate objectives. Firstly, the Maximum Likelihood Classification (MLC) was applied using five different approaches combining R, G, B, NIR, and panchromatic bands. The combination of bands used significantly influenced some of the quality indices used in this work. Furthermore, the classification quality of the QuickBird image was higher in all cases than that of the IKONOS image. Secondly, texture features derived from the panchromatic images at different window sizes and with different grey levels were added as a fifth band to the R, G, B, NIR images to carry out the MLC. The inclusion of texture information in the classification did not improve the classification quality. For classifications with texture information, the best accuracies were found for both images with the mean and angular second moment texture parameters. The optimum window size for these texture parameters was 3×3 for IKONOS images, while for QuickBird images it depended on the quality index studied, but was around 15×15. With regard to the grey level, the optimum was 128. Thus, the optimum texture parameter depended on the main objective of the image classification. If the main classification goal is to minimize the number of pixels wrongly classified, the mean texture parameter should be used, whereas if the main classification goal is to minimize the unclassified pixels, the angular second moment texture parameter should be used. On the whole, both QuickBird and IKONOS images offered promising results in classifying plastic greenhouses.

  3. Pixel-based flood mapping from SAR imagery: a comparison of approaches

    NASA Astrophysics Data System (ADS)

    Landuyt, Lisa; Van Wesemael, Alexandra; Van Coillie, Frieke M. B.; Verhoest, Niko E. C.

    2017-04-01

    Due to their all-weather, day and night capabilities, SAR sensors have been shown to be particularly suitable for flood mapping applications. As such, they can provide spatially distributed flood extent data, which are valuable for calibrating, validating and updating flood inundation models. These models are an invaluable tool for water managers, allowing them to take appropriate measures in times of high water levels. Image analysis approaches to delineate flood extent on SAR imagery are numerous. They can be classified into two categories, i.e. pixel-based and object-based approaches. Pixel-based approaches, e.g. thresholding, are abundant and in general computationally inexpensive. However, large discrepancies between these techniques exist and subjective user intervention is often needed. Object-based approaches require more processing but allow for the integration of additional object characteristics, like contextual information and object geometry, and thus have significant potential to provide an improved classification result. As a benchmark, a selection of pixel-based techniques is applied to an ERS-2 SAR image of the 2006 flood event of the River Dee, United Kingdom. This selection comprises Otsu thresholding, Kittler & Illingworth thresholding, the Fine To Coarse segmentation algorithm and active contour modelling. The different classification results are evaluated and compared by means of several accuracy measures, including binary performance measures.
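
    A minimal example of the first benchmark technique, Otsu thresholding, is sketched below. It assumes the common simplification that open water appears as low backscatter and that the scene is roughly bimodal; it is not the benchmark implementation used in the study.

```python
# Pixel-based flood mask via a global Otsu threshold on backscatter in dB.
import numpy as np
from skimage.filters import threshold_otsu

def otsu_flood_mask(sigma0):
    """sigma0: 2-D array of SAR backscatter (linear units)."""
    db = 10.0 * np.log10(np.clip(sigma0, 1e-6, None))  # convert to decibels
    t = threshold_otsu(db)                             # bimodal histogram split
    return db < t                                      # True where backscatter is low (water)
```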

  4. Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification.

    PubMed

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Sadda, Srinivas R

    2015-01-01

    Geographic atrophy (GA) is a manifestation of the advanced or late stage of age-related macular degeneration (AMD). AMD is the leading cause of blindness in people over the age of 65 in the western world. The purpose of this study is to develop a fully automated supervised pixel classification approach for segmenting GA, including uni- and multifocal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures, gray-level co-occurrence matrix measures, and Gaussian filter banks. A [Formula: see text]-nearest-neighbor pixel classifier is applied to obtain a GA probability map, representing the likelihood that an image pixel belongs to GA. Sixteen randomly chosen FAF images were obtained from 16 subjects with GA. The algorithm-defined GA regions are compared with manual delineations performed by a certified image reading center grader. Eight-fold cross-validation is applied to evaluate the algorithm performance. The mean overlap ratio (OR), area correlation (Pearson's [Formula: see text]), accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and false discovery rate (FDR) between the algorithm- and manually defined GA regions are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively.
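
    The sketch below shows the supervised pixel-classification idea in simplified form: per-pixel features (here, raw intensity plus a small Gaussian filter bank as a stand-in for the full feature set) feed a k-nearest-neighbour classifier whose class probabilities form the GA probability map. The value of k, the filter scales and the binary 0/1 labelling are assumptions.

```python
# Hedged sketch of a k-NN GA probability map from a FAF image.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.neighbors import KNeighborsClassifier

def ga_probability_map(faf, labelled_pixels, labels, k=15, sigmas=(1, 2, 4)):
    """faf: 2-D FAF image; labelled_pixels: list of (row, col) training coordinates;
    labels: 1 = GA, 0 = background for each training coordinate."""
    bands = [faf.astype(float)] + [gaussian_filter(faf.astype(float), s) for s in sigmas]
    stack = np.dstack(bands)                                   # H x W x F feature cube
    X_train = np.array([stack[r, c] for r, c in labelled_pixels])
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, labels)
    proba = knn.predict_proba(stack.reshape(-1, stack.shape[-1]))[:, 1]
    return proba.reshape(faf.shape)                            # per-pixel GA likelihood
```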

  5. Spotting East African mammals in open savannah from space.

    PubMed

    Yang, Zheng; Wang, Tiejun; Skidmore, Andrew K; de Leeuw, Jan; Said, Mohammed Y; Freer, Jim

    2014-01-01

    Knowledge of population dynamics is essential for managing and conserving wildlife. Traditional methods of counting wild animals such as aerial survey or ground counts not only disturb animals, but also can be labour intensive and costly. New, commercially available very high-resolution satellite images offer great potential for accurate estimates of animal abundance over large open areas. However, little research has been conducted in the area of satellite-aided wildlife census, although computer processing speeds and image analysis algorithms have vastly improved. This paper explores the possibility of detecting large animals in the open savannah of Maasai Mara National Reserve, Kenya from very high-resolution GeoEye-1 satellite images. A hybrid image classification method was employed for this specific purpose by incorporating the advantages of both pixel-based and object-based image classification approaches. This was performed in two steps: firstly, a pixel-based image classification method, i.e., artificial neural network was applied to classify potential targets with similar spectral reflectance at pixel level; and then an object-based image classification method was used to further differentiate animal targets from the surrounding landscapes through the applications of expert knowledge. As a result, the large animals in two pilot study areas were successfully detected with an average count error of 8.2%, omission error of 6.6% and commission error of 13.7%. The results of the study show for the first time that it is feasible to perform automated detection and counting of large wild animals in open savannahs from space, and therefore provide a complementary and alternative approach to the conventional wildlife survey techniques.

  6. Adaptive skin detection based on online training

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Tang, Liang; Zhou, Jie; Rong, Gang

    2007-11-01

    Skin is a widely used cue for pornographic image classification. Most conventional methods are off-line training schemes: they usually use a fixed boundary to segment skin regions in images and are effective only under restricted conditions, e.g., good lighting and a single skin tone. This paper presents an adaptive online training scheme for skin detection which can handle these tough cases. In our approach, skin detection is treated as a classification problem on a Gaussian mixture model. For each image, a human face is detected and the face color is used to establish a primary estimate of the skin color distribution. An adaptive online training algorithm is then used to find the real boundary between skin color and background color in the current image. Experimental results on 450 images showed that the proposed method is more robust in general situations than conventional ones.
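
    A hedged sketch of the seeding step is shown below: pixels from a detected face initialize a Gaussian mixture skin-color model, which is then compared against a crude whole-image background model. The color space, component counts and likelihood comparison are assumptions, and the iterative online boundary refinement described above is omitted.

```python
# Minimal face-seeded GMM skin mask, assuming RGB input and scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

def adaptive_skin_mask(image_rgb, face_pixels, n_components=3):
    """image_rgb: H x W x 3 array; face_pixels: N x 3 RGB samples from a detected face."""
    skin_gmm = GaussianMixture(n_components=n_components).fit(face_pixels)
    bg_gmm = GaussianMixture(n_components=n_components).fit(image_rgb.reshape(-1, 3))
    flat = image_rgb.reshape(-1, 3)
    skin_score = skin_gmm.score_samples(flat)   # per-pixel log-likelihood under the skin model
    bg_score = bg_gmm.score_samples(flat)       # ... and under the background model
    return (skin_score > bg_score).reshape(image_rgb.shape[:2])
```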

  7. Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image

    NASA Astrophysics Data System (ADS)

    Pirotti, F.; Sunar, F.; Piragnolo, M.

    2016-06-01

    Thanks mainly to ESA and USGS, a large volume of free Earth imagery is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since land cover of a specific class may present large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multilayer perceptron, multilayer perceptron ensemble, ctree, boosting, and logistic regression. The validation is carried out using a control dataset consisting of an independent classification into 11 land-cover classes of an area of about 60 km², obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study five of the eleven classes are used, since the others have too few samples (pixels) for the testing and validation subsets. The classes used are the following: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset with k-fold cross-validation (kfold), and (iii) using all pixels from the control dataset (full). Five accuracy indices are calculated for the comparison between the values predicted with each model and the control values over three sets of data: the training dataset (train), the whole control dataset (full), and k-fold cross-validation (kfold) with ten folds. Results from validation of predictions on the whole dataset (full) show the random forests method with the highest values, with the kappa index ranging from 0.55 to 0.42 for the largest and smallest numbers of training pixels, respectively. The two neural networks (multilayer perceptron and its ensemble) and the support vector machines, with the default radial basis function kernel, follow closely with comparable performance.
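
    The loop below is an illustrative, scikit-learn-based version of such a benchmark for a subset of the nine methods, scored with 10-fold cross-validation on labelled pixel spectra. Hyper-parameters are defaults and the method list is abbreviated; this is not the R/caret setup used in the study.

```python
# Hedged benchmark loop over several classifiers with 10-fold cross-validation.
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def benchmark(X, y):
    """X: pixel spectra (n_samples x n_bands); y: land-cover labels."""
    models = {
        "lda": LinearDiscriminantAnalysis(),
        "knn": KNeighborsClassifier(),
        "random_forest": RandomForestClassifier(n_estimators=200),
        "svm_rbf": SVC(kernel="rbf"),
        "mlp": MLPClassifier(max_iter=1000),
    }
    return {name: cross_val_score(m, X, y, cv=10).mean() for name, m in models.items()}
```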

  8. Thematic accuracy of the National Land Cover Database (NLCD) 2001 land cover for Alaska

    USGS Publications Warehouse

    Selkowitz, D.J.; Stehman, S.V.

    2011-01-01

    The National Land Cover Database (NLCD) 2001 Alaska land cover classification is the first 30-m resolution land cover product available covering the entire state of Alaska. The accuracy assessment of the NLCD 2001 Alaska land cover classification employed a geographically stratified three-stage sampling design to select the reference sample of pixels. Reference land cover class labels were determined via fixed wing aircraft, as the high resolution imagery used for determining the reference land cover classification in the conterminous U.S. was not available for most of Alaska. Overall thematic accuracy for the Alaska NLCD was 76.2% (s.e. 2.8%) at Level II (12 classes evaluated) and 83.9% (s.e. 2.1%) at Level I (6 classes evaluated) when agreement was defined as a match between the map class and either the primary or alternate reference class label. When agreement was defined as a match between the map class and primary reference label only, overall accuracy was 59.4% at Level II and 69.3% at Level I. The majority of classification errors occurred at Level I of the classification hierarchy (i.e., misclassifications were generally to a different Level I class, not to a Level II class within the same Level I class). Classification accuracy was higher for more abundant land cover classes and for pixels located in the interior of homogeneous land cover patches.

  9. Classification of crops across heterogeneous agricultural landscape in Kenya using AisaEAGLE imaging spectroscopy data

    NASA Astrophysics Data System (ADS)

    Piiroinen, Rami; Heiskanen, Janne; Mõttus, Matti; Pellikka, Petri

    2015-07-01

    Land use practices are changing at a fast pace in the tropics. In sub-Saharan Africa, forests, woodlands and bushlands are being transformed for agricultural use to produce food for the rapidly growing population. The objective of this study was to assess the prospects of mapping the common agricultural crops in a highly heterogeneous study area in south-eastern Kenya using high spatial and spectral resolution AisaEAGLE imaging spectroscopy data. Minimum noise fraction transformation was used to pack the coherent information into a smaller set of bands, and the data were classified with the support vector machine (SVM) algorithm. A total of 35 plant species were mapped in the field and the seven most dominant ones were used as classification targets, five of which were agricultural crops. The overall accuracy (OA) for the classification was 90.8%. To assess the possibility of excluding the remaining 28 plant species from the classification results, 10 different probability thresholds (PT) were tried with the SVM. The impact of the PT was assessed with validation polygons of all 35 mapped plant species. The results showed that as the PT was increased, more pixels were excluded from non-target polygons than from the polygons of the seven classification targets. This increased the OA and reduced salt-and-pepper effects in the classification results. Very high spatial resolution imagery and a pixel-based classification approach worked well for small targets such as maize, while classes were mixed at the edges of tree crowns.

  10. A Matlab Program for Textural Classification Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Leite, E. P.; de Souza, C.

    2008-12-01

    A new MATLAB code that provides tools to perform classification of textural images for applications in the Geosciences is presented. The program, here coined TEXTNN, comprises the computation of variogram maps in the frequency domain for specific lag distances in the neighborhood of a pixel. The result is then converted back to the spatial domain, where directional or omnidirectional semivariograms are extracted. Feature vectors are built from textural information composed of the semivariance values at these lag distances and, moreover, histogram measures of mean, standard deviation and weighted fill-ratio. This procedure is applied to a selected group of pixels or to all pixels in an image using a moving window. A feed-forward back-propagation neural network can then be designed and trained on feature vectors of predefined classes (training set). The training phase minimizes the mean-squared error on the training set; additionally, at each iteration, the mean-squared error is assessed on a validation set and a test set is evaluated. The program also calculates contingency matrices, global accuracy and the kappa coefficient for the three data sets, allowing a quantitative appraisal of the predictive power of the neural network models. The interpreter is able to select the best model obtained from a k-fold cross-validation or to use a unique split-sample data set for classification of all pixels in a given textural image. The code is open to the geoscientific community and is very flexible, allowing the experienced user to modify it as necessary. The performance of the algorithms and the end-user program was tested using synthetic images, orbital SAR (RADARSAT) imagery for oil seepage detection, and airborne multi-polarimetric SAR imagery for geologic mapping. The overall results proved very promising.

  11. VizieR Online Data Catalog: SDSS-DR8 galaxies classified by WND-CHARM (Kuminski+, 2016)

    NASA Astrophysics Data System (ADS)

    Kuminski, E.; Shamir, L.

    2016-06-01

    The image analysis method used to classify the images is WND-CHARM (wndchrm; Shamir et al. 2008, BMC Source Code for Biology and Medicine, 3: 13; 2010PLSCB...6E0974S; 2013ascl.soft12002S), which first computes 2885 numerical descriptors from each SDSS image, such as textures, edges, shapes, the statistical distribution of the pixel intensities, the polynomial decomposition of the image, and fractal features. These features are extracted from the raw pixels, as well as from the image transforms and multi-order image transforms. See section 2 for further explanations. In a similar way to that catalog, we also compiled a catalog of all objects with spectra in DR8. For each object, that catalog contains the spec ObjID, the R.A., the decl., the z, the z error, the certainty of classification as elliptical, the certainty of classification as spiral, and the certainty of classification as a star. See section 3.1 for further explanations. (2 data files).

  12. Multiresolution texture analysis applied to road surface inspection

    NASA Astrophysics Data System (ADS)

    Paquis, Stephane; Legeay, Vincent; Konik, Hubert; Charrier, Jean

    1999-03-01

    Technological advances now provide the opportunity to automate pavement distress assessment. This paper deals with an approach for achieving an automatic vision system for road surface classification. Road surfaces are composed of aggregates, which have a particular grain size distribution, and a mortar matrix. From various physical properties and visual aspects, four road families are defined. We present here a tool using a pyramidal process, under the assumption that regions or objects in an image stand out because of their uniform texture. Note that the aim is not to compute yet another statistical parameter but to incorporate the usual criteria into our method. In fact, the road surface classification uses a multiresolution co-occurrence matrix and a hierarchical process through an original intensity pyramid, where a father pixel takes the minimum gray-level value of its directly linked children pixels. More precisely, only the matrix diagonal is taken into account and analyzed along the pyramidal structure, which allows the classification to be made.
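
    The minimum-intensity pyramid is simple enough to write out directly; the sketch below builds it with 2×2 children per father pixel. The number of levels is an assumption, and the subsequent co-occurrence-diagonal analysis is not shown.

```python
# Min-pyramid: each father pixel takes the minimum grey level of its 2x2 children.
import numpy as np

def min_pyramid(image, levels=4):
    """image: 2-D grey-level array."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # crop to even size
        blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
        pyramid.append(blocks.min(axis=(1, 3)))               # father = min of its four children
    return pyramid
```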

  13. Spectral-Spatial Shared Linear Regression for Hyperspectral Image Classification.

    PubMed

    Haoliang Yuan; Yuan Yan Tang

    2017-04-01

    Classification of the pixels in a hyperspectral image (HSI) is an important task and has been popularly applied in many practical applications. Its major challenge is the high-dimensional, small-sample-size problem. To deal with this problem, many subspace learning (SL) methods have been developed to reduce the dimension of the pixels while preserving the important discriminant information. Motivated by the ridge linear regression (RLR) framework for SL, we propose a spectral-spatial shared linear regression method (SSSLR) for extracting the feature representation. Compared with RLR, our proposed SSSLR has the following two advantages. First, we utilize a convex set to explore the spatial structure for computing the linear projection matrix. Second, we utilize a shared structure learning model, formed by the original data space and a hidden feature space, to learn a more discriminant linear projection matrix for classification. To optimize the proposed method, an efficient iterative algorithm is proposed. Experimental results on two popular HSI data sets, i.e., Indian Pines and Salinas, demonstrate that our proposed methods outperform many SL methods.

  14. Evaluation of multiband, multitemporal, and transformed LANDSAT MSS data for land cover area estimation. [North Central Missouri

    NASA Technical Reports Server (NTRS)

    Stoner, E. R.; May, G. A.; Kalcic, M. T. (Principal Investigator)

    1981-01-01

    Sample segments of ground-verified land cover data collected in conjunction with the USDA/ESS June Enumerative Survey were merged with LANDSAT data and served as a focus for unsupervised spectral class development and accuracy assessment. Multitemporal data sets were created from single-date LANDSAT MSS acquisitions of a nominal scene covering an eleven-county area in north central Missouri. Classification accuracies for the four land cover types predominant in the test site showed significant improvement in going from unitemporal to multitemporal data sets. Transformed LANDSAT data sets did not significantly improve classification accuracies. Regression estimators yielded mixed results for different land covers. Misregistration of the two LANDSAT data sets by as much as one and one-half pixels did not significantly alter overall classification accuracies. Existing algorithms for scene-to-scene overlay proved adequate for multitemporal data analysis as long as statistical class development and accuracy assessment were restricted to field-interior pixels.

  15. Threshold selection for classification of MR brain images by clustering method

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita

    2015-12-01

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not rely on a well-known binarization method; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis cases. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (i.e., the area of white objects in the binary image) has been determined. These pixel numbers represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
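
    A hedged sketch of the procedure is given below: for each image, white-pixel counts are recorded over a sweep of thresholds, and the resulting profiles are clustered hierarchically (the dendrogram-based step). The threshold range and the Ward linkage are assumptions, not the authors' exact settings.

```python
# Threshold sweep per image, then hierarchical clustering of the count profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def white_pixel_profile(image, thresholds=range(10, 200, 10)):
    """image: 2-D grey-level MR slice; returns white-object area per threshold."""
    return np.array([(image > t).sum() for t in thresholds])

def cluster_images(images, n_clusters=2):
    profiles = np.array([white_pixel_profile(img) for img in images])
    Z = linkage(profiles, method="ward")          # dendrogram-based dissimilarity
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```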

  16. Application of Bayesian Classification to Content-Based Data Management

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
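
    The sketch below illustrates the per-pixel Bayesian labelling idea in its simplest form, using a Gaussian naive Bayes model over calibrated radiances; the class set and band count are assumptions, not the DAAC's actual scheme.

```python
# Minimal per-pixel Bayesian classification for content-based subsetting.
from sklearn.naive_bayes import GaussianNB

def classify_scene(radiances, X_train, y_train):
    """radiances: H x W x B calibrated radiance cube;
    X_train, y_train: labelled training spectra (e.g. clear ocean, cloud, sun-glint)."""
    h, w, b = radiances.shape
    model = GaussianNB().fit(X_train, y_train)
    labels = model.predict(radiances.reshape(-1, b))
    return labels.reshape(h, w)   # class map driving subsetting, subscriptions, caching
```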

  17. Ultra-low power high-dynamic range color pixel embedding RGB to r-g chromaticity transformation

    NASA Astrophysics Data System (ADS)

    Lecca, Michela; Gasparini, Leonardo; Gottardi, Massimo

    2014-05-01

    This work describes a novel color pixel topology that converts the three chromatic components from the standard RGB space into the normalized r-g chromaticity space. This conversion is implemented with a high dynamic range and with no dc power consumption, and the auto-exposure capability of the sensor ensures that a high-quality chromatic signal is captured even in the presence of very bright illuminants or in darkness. The pixel is intended to become the basic building block of a CMOS color vision sensor targeted at ultra-low power applications for mobile devices, such as human-machine interfaces, gesture recognition and face detection. The experiments show significant improvements of the proposed pixel over standard cameras in terms of energy saving and data-acquisition accuracy. An application to skin color-based description is presented.
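
    For reference, the r-g chromaticity transform that the pixel implements in analogue hardware is shown below in software form: each channel is divided by the intensity sum, so only two of the three normalized components are independent.

```python
# RGB to normalized r-g chromaticity.
import numpy as np

def rgb_to_rg_chromaticity(rgb):
    """rgb: H x W x 3 array; returns H x W x 2 (r, g) chromaticity."""
    s = rgb.astype(float).sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                      # avoid division by zero on black pixels
    norm = rgb / s
    return norm[..., :2]                 # b = 1 - r - g is redundant
```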

  18. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation.

    PubMed

    Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S

    2017-10-01

    The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
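
    The sketch below illustrates ensemble model (1), feature subsets based on specific feature groups combined by majority voting. Scikit-learn SVMs stand in for the optimum-path forest classifier used in the paper, and the group column indices and binary labelling are placeholders.

```python
# Hedged feature-group ensemble with majority voting (stand-in base classifier).
import numpy as np
from sklearn.svm import SVC

FEATURE_GROUPS = {"shape": slice(0, 10), "colour": slice(10, 25), "texture": slice(25, 60)}

def fit_ensemble(X, y):
    """X: lesion feature matrix; y: 0/1 diagnosis labels."""
    return {name: SVC().fit(X[:, cols], y) for name, cols in FEATURE_GROUPS.items()}

def predict_majority(ensemble, X):
    votes = np.stack([clf.predict(X[:, FEATURE_GROUPS[name]])
                      for name, clf in ensemble.items()])
    return (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote across groups
```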

  19. A study of some nine-element decision rules. [for multispectral recognition of remote sensing]

    NASA Technical Reports Server (NTRS)

    Richardson, W.

    1974-01-01

    A nine-element rule is one that makes a classification decision for each pixel based on data from that pixel and its eight immediate neighbors. Three such rules, all fast and simple to use, are defined and tested. All performed substantially better on field interiors than the best one-point rule. Qualitative results indicate that fine detail and contradictory testimony tend to be overlooked by the rules.

  20. Characterising the biophysical properties of normal and hyperkeratotic foot skin.

    PubMed

    Hashmi, Farina; Nester, Christopher; Wright, Ciaran; Newton, Veronica; Lam, Sharon

    2015-01-01

    Plantar foot skin exhibits unique biophysical properties that are distinct from skin on other areas of the body. This paper characterises, using non-invasive methods, the biophysical properties of foot skin in healthy and pathological states including xerosis, heel fissures, calluses and corns. Ninety-three people participated. Skin hydration, elasticity, collagen and elastin fibre organisation, and surface texture were measured from plantar calluses, corns, fissured heel skin and xerotic heel skin. Previously published criteria were applied to classify the severity of each skin lesion, and differences in the biophysical properties were compared between classifications. Calluses, corns, xerotic heel skin and heel fissures had significantly lower hydration, less elasticity and greater surface texture than unaffected skin sites (p < 0.01). Some evidence was found for a positive correlation between hydration and elasticity data (r ≤ 0.65) at hyperkeratotic sites. Significant differences in skin properties (with the exception of texture) were noted between different classifications of skin lesion. This study provides benchmark data for healthy foot skin and for different severities of pathological foot skin. These data have applications ranging from monitoring the quality of foot skin to measuring the efficacy of therapeutic interventions.

  1. Prediction of Skin Sensitization with a Particle Swarm Optimized Support Vector Machine

    PubMed Central

    Yuan, Hua; Huang, Jianping; Cao, Chenzhong

    2009-01-01

    Skin sensitization is the most commonly reported occupational illness, causing much suffering to a wide range of people. Identification and labeling of environmental allergens is urgently required to protect people from skin sensitization. The guinea pig maximization test (GPMT) and murine local lymph node assay (LLNA) are the two most important in vivo models for identification of skin sensitizers. In order to reduce the number of animal tests, quantitative structure-activity relationships (QSARs) are strongly encouraged in the assessment of skin sensitization of chemicals. This paper has investigated the skin sensitization potential of 162 compounds with LLNA results and 92 compounds with GPMT results using a support vector machine. A particle swarm optimization algorithm was implemented for feature selection from a large number of molecular descriptors calculated by Dragon. For the LLNA data set, the classification accuracies are 95.37% and 88.89% for the training and the test sets, respectively. For the GPMT data set, the classification accuracies are 91.80% and 90.32% for the training and the test sets, respectively. The classification performances were greatly improved compared to those reported in the literature, indicating that the support vector machine optimized by particle swarm in this paper is competent for the identification of skin sensitizers. PMID:19742136
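
    The sketch below is a greatly simplified stand-in for the particle-swarm feature search: random candidate descriptor subsets are scored by cross-validated SVM accuracy and the best subset is kept. The real optimizer updates subsets using swarm velocities rather than sampling them independently; subset size, candidate count and kernel are assumptions.

```python
# Simplified randomized feature-subset search scored by an SVM (PSO stand-in).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_features(X, y, n_candidates=200, subset_size=20, seed=0):
    """X: molecular descriptor matrix; y: sensitizer / non-sensitizer labels."""
    rng = np.random.default_rng(seed)
    best_cols, best_score = None, -np.inf
    for _ in range(n_candidates):
        cols = rng.choice(X.shape[1], size=subset_size, replace=False)
        score = cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=5).mean()
        if score > best_score:
            best_cols, best_score = cols, score
    return best_cols, best_score
```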

  2. Assessment of land use and land cover change using spatiotemporal analysis of landscape: case study in south of Tehran.

    PubMed

    Sabr, Abutaleb; Moeinaddini, Mazaher; Azarnivand, Hossein; Guinot, Benjamin

    2016-12-01

    In recent years, dust storms originating from local abandoned agricultural lands have increasingly impacted air quality in Tehran and Karaj. Studying land use/land cover change (LUCC) is necessary for designing and implementing mitigation plans, and land use/cover classification is particularly relevant in arid areas. This study aimed to map land use/cover with pixel- and object-based image classification methods, analyse landscape fragmentation and determine the effects of the two classification methods on landscape metrics. The same sets of ground data were used for both classification methods. Because the accuracy of classification plays a key role in better understanding LUCC, both methods were employed. Land use/cover maps of the southwest area of Tehran city for the years 1985, 2000 and 2014 were obtained from Landsat digital images and classified into three categories: built-up, agricultural and barren lands. The results of our LUCC analysis showed that the most important changes in the built-up and agricultural land categories were observed in zone B (Shahriar, Robat Karim and Eslamshahr) between 1985 and 2014. The landscape metrics obtained for all categories indicated high landscape fragmentation in the study area. Although no significant difference was evidenced between the two classification methods, the object-based classification led to an overall higher accuracy than the pixel-based classification; in particular, the accuracy of the built-up category showed a marked increase. In addition, both methods showed similar trends in fragmentation metrics. One reason is that the object-based classification is able to identify buildings, impervious surfaces and roads in dense urban areas, which produced more accurate maps.

  3. Rapid classification of landsat TM imagery for phase 1 stratification using the automated NDVI threshold supervised classification (ANTSC) methodology

    Treesearch

    William H. Cooke; Dennis M. Jacobs

    2002-01-01

    FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information....

  4. Can a Forest/Nonforest Change Map Improve the Precision of Forest Area, Volume, Growth, Removals, and Mortality Estimates?

    Treesearch

    Dale D. Gormanson; Mark H. Hansen; Ronald E. McRoberts

    2005-01-01

    In an extensive forest inventory, stratifications that use dual-date forest/nonforest classifications of Landsat Thematic Mapper data approximately 10 years apart are tested against similar classifications that use data from only one date. Alternative stratifications that further define edge strata as pixels adjacent to a forest/nonforest boundary are included in the...

  5. Going Deeper With Contextual CNN for Hyperspectral Image Classification.

    PubMed

    Lee, Hyungtae; Kwon, Heesung

    2017-10-01

    In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.

  6. Comparison of GOES Cloud Classification Algorithms Employing Explicit and Implicit Physics

    NASA Technical Reports Server (NTRS)

    Bankert, Richard L.; Mitrescu, Cristian; Miller, Steven D.; Wade, Robert H.

    2009-01-01

    Cloud-type classification based on multispectral satellite imagery data has been widely researched and demonstrated to be useful for distinguishing a variety of classes using a wide range of methods. The research described here is a comparison of the classifier output from two very different algorithms applied to Geostationary Operational Environmental Satellite (GOES) data over the course of one year. The first algorithm employs spectral channel thresholding and additional physically based tests. The second algorithm was developed through a supervised learning method with characteristic features of expertly labeled image samples used as training data for a 1-nearest-neighbor classification. The latter's ability to identify classes is also based in physics, but those relationships are embedded implicitly within the algorithm. A pixel-to-pixel comparison analysis was done for hourly daytime scenes within a region in the northeastern Pacific Ocean. Considerable agreement was found in this analysis, with many of the mismatches or disagreements providing insight to the strengths and limitations of each classifier. Depending upon user needs, a rule-based or other postprocessing system that combines the output from the two algorithms could provide the most reliable cloud-type classification.

  7. Automated training site selection for large-area remote-sensing image analysis

    NASA Astrophysics Data System (ADS)

    McCaffrey, Thomas M.; Franklin, Steven E.

    1993-11-01

    A computer program is presented to select training sites automatically from remotely sensed digital imagery. The basic ideas are to guide the image analyst through the process of selecting typical and representative areas for large-area image classifications by minimizing bias, and to provide an initial list of potential classes for which training sites are required to develop a classification scheme or to verify classification accuracy. Reducing subjectivity in training site selection is achieved by a purely statistical selection of homogeneous sites, which can then be compared with field knowledge, aerial photography, or other remote-sensing imagery and ancillary data to arrive at a final selection of sites used to train the classification decision rules. The selection of homogeneous sites uses simple tests based on the coefficient of variation, the F-statistic, and Student's t-statistic. Comparisons of site means are conducted against a linearly growing list of previously located homogeneous pixels. The program supports a common pixel-interleaved digital image format and has been tested on aerial and satellite optical imagery. The program is coded efficiently in the C programming language and was developed under AIX-Unix on an IBM RISC 6000 24-bit color workstation.

  8. 7 CFR 51.1436 - Color classifications.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Color classifications. 51.1436 Section 51.1436... STANDARDS) United States Standards for Grades of Shelled Pecans Color Classifications § 51.1436 Color classifications. (a) The skin color of pecan kernels may be described in terms of the color classifications...

  9. 7 CFR 51.1436 - Color classifications.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Color classifications. 51.1436 Section 51.1436... STANDARDS) United States Standards for Grades of Shelled Pecans Color Classifications § 51.1436 Color classifications. (a) The skin color of pecan kernels may be described in terms of the color classifications...

  10. 7 CFR 51.1436 - Color classifications.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Color classifications. 51.1436 Section 51.1436... STANDARDS) United States Standards for Grades of Shelled Pecans Color Classifications § 51.1436 Color classifications. (a) The skin color of pecan kernels may be described in terms of the color classifications...

  11. Wide field imaging - I. Applications of neural networks to object detection and star/galaxy classification

    NASA Astrophysics Data System (ADS)

    Andreon, S.; Gargiulo, G.; Longo, G.; Tagliaferri, R.; Capuano, N.

    2000-12-01

    Astronomical wide-field imaging performed with new large-format CCD detectors poses data reduction problems of unprecedented scale, which are difficult to deal with using traditional interactive tools. We present here NExt (Neural Extractor), a new neural network (NN) based package capable of detecting objects and performing both deblending and star/galaxy classification in an automatic way. Traditionally, in astronomical images, objects are first distinguished from the noisy background by searching for sets of connected pixels having brightnesses above a given threshold; they are then classified as stars or as galaxies through diagnostic diagrams having variables chosen according to the astronomer's taste and experience. In the extraction step, assuming that images are well sampled, NExt requires only the simplest a priori definition of `what an object is' (i.e. it keeps all structures composed of more than one pixel) and performs the detection via an unsupervised NN, approaching detection as a clustering problem that has been thoroughly studied in the artificial intelligence literature. The first part of the NExt procedure consists of an optimal compression of the redundant information contained in the pixels via a mapping from pixel intensities to a subspace individualized through principal component analysis. At magnitudes fainter than the completeness limit, stars are usually almost indistinguishable from galaxies, and therefore the parameters characterizing the two classes do not lie in disconnected subspaces, thus preventing the use of unsupervised methods. We therefore adopted a supervised NN (i.e. a NN that first finds the rules to classify objects from examples and then applies them to the whole data set). In practice, each object is classified depending on its membership of the regions mapping the input feature space in the training set. In order to obtain an objective and reliable classification, instead of using an arbitrarily defined set of features we use a NN to select the most significant features among the large number of measured ones, and then we use these selected features to perform the classification task. In order to optimize the performance of the system, we implemented and tested several different models of NN. The comparison of the NExt performance with that of the best detection and classification package known to the authors (SExtractor) shows that NExt is at least as effective as the best traditional packages.

  12. Methods for Monitoring the Detection of Multi-Temporal Land Use Change Through the Classification of Urban Areas

    NASA Astrophysics Data System (ADS)

    Alhaddad, B. I.; Burns, M. C.; Roca, J.

    2011-08-01

    Urban areas are complicated due to the mix of man-made and natural features. A higher level of structural information plays an important role in land cover/use classification of urban areas. Additional spatial indicators have to be extracted through structural analysis in order to understand and identify spatial patterns or the spatial organization of features, especially man-made features. It is very difficult to extract such spatial patterns using classification approaches alone, and clusters of urban patterns which are integral parts of other uses may be difficult to identify. Considerable public resources have been directed towards developing a standardized classification system and providing as much compatibility as possible to ensure the widespread use of categorized data obtained from remote sensor sources. In this paper, different methods are applied to monitor multi-temporal land use change by analysing change in urban areas, since it is hard to apply such methods directly to classification results containing very large numbers of elements, dust and scratches (Roca and Alhaddad, 2005). The paper focuses on a methodology based on the relations between urban elements and on how to join these elements into zones or clusters with common behaviours such as form, pattern and size. The main objective is to convert the urban class category into various structural densities depending on the conjunction of pixels and the shortest distance between them; Delaunay triangulation, which has been widely used in spatial analysis and spatial modelling, is employed for this purpose. To identify these different zones, a spatial density-based clustering technique was adopted: in highly urbanized zones the spatial density of pixels is high, while in sparsely built areas the density of points is much lower. Once the groups of pixels are identified, the calculation of the boundaries of the areas containing each group defines new regions indicating their contents, such as high- or low-density urban areas. Multi-temporal datasets from 1986, 1995 and 2004 are used, with the urban region centroid serving as the reference in this study, which allows us to follow urban increase and decrease over time. A kernel density function is used to calculate urban magnitude, and a Voronoi algorithm is proposed for deriving explicit boundaries between object units. To test the approach, we selected a site in a suburban area of the Barcelona municipality, Spain.
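
    A small, hedged example of the density-based grouping step is shown below: coordinates of pixels classified as urban are clustered with DBSCAN, so dense groups form candidate high-density urban zones while sparse pixels are treated as noise. DBSCAN is a stand-in for whichever density-based technique the authors used, and the distance and density thresholds are assumptions.

```python
# Density-based grouping of urban-class pixels into candidate zones.
import numpy as np
from sklearn.cluster import DBSCAN

def urban_zones(urban_mask, eps_pixels=5, min_samples=20):
    """urban_mask: 2-D boolean array of pixels classified as urban."""
    coords = np.column_stack(np.nonzero(urban_mask))           # (row, col) of urban pixels
    labels = DBSCAN(eps=eps_pixels, min_samples=min_samples).fit_predict(coords)
    zones = np.full(urban_mask.shape, -1, dtype=int)
    zones[coords[:, 0], coords[:, 1]] = labels                 # -1 = noise / sparse urban
    return zones
```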

  13. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... generally conforms to the “light” or “light amber” classification, that color classification may be used to... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be...

  14. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... generally conforms to the “light” or “light amber” classification, that color classification may be used to... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be...

  15. 7 CFR 51.1436 - Color classifications.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Color classifications. 51.1436 Section 51.1436... Classifications § 51.1436 Color classifications. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a lot generally...

  16. 7 CFR 51.1436 - Color classifications.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Color classifications. 51.1436 Section 51.1436... Classifications § 51.1436 Color classifications. (a) The skin color of pecan kernels may be described in terms of the color classifications provided in this section. When the color of kernels in a lot generally...

  17. 21 CFR 878.4730 - Surgical skin degreaser or adhesive tape solvent.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Surgical skin degreaser or adhesive tape solvent... Surgical skin degreaser or adhesive tape solvent. (a) Identification. A surgical skin degreaser or an... dissolve surface skin oil or adhesive tape. (b) Classification. Class I (general controls). The device is...

  18. Operational Tree Species Mapping in a Diverse Tropical Forest with Airborne Imaging Spectroscopy.

    PubMed

    Baldeck, Claire A; Asner, Gregory P; Martin, Robin E; Anderson, Christopher B; Knapp, David E; Kellner, James R; Wright, S Joseph

    2015-01-01

    Remote identification and mapping of canopy tree species can contribute valuable information towards our understanding of ecosystem biodiversity and function over large spatial scales. However, the extreme challenges posed by highly diverse, closed-canopy tropical forests have prevented automated remote species mapping of non-flowering tree crowns in these ecosystems. We set out to identify individuals of three focal canopy tree species amongst a diverse background of tree and liana species on Barro Colorado Island, Panama, using airborne imaging spectroscopy data. First, we compared two leading single-class classification methods--binary support vector machine (SVM) and biased SVM--for their performance in identifying pixels of a single focal species. From this comparison we determined that biased SVM was more precise and created a multi-species classification model by combining the three biased SVM models. This model was applied to the imagery to identify pixels belonging to the three focal species and the prediction results were then processed to create a map of focal species crown objects. Crown-level cross-validation of the training data indicated that the multi-species classification model had pixel-level producer's accuracies of 94-97% for the three focal species, and field validation of the predicted crown objects indicated that these had user's accuracies of 94-100%. Our results demonstrate the ability of high spatial and spectral resolution remote sensing to accurately detect non-flowering crowns of focal species within a diverse tropical forest. We attribute the success of our model to recent classification and mapping techniques adapted to species detection in diverse closed-canopy forests, which can pave the way for remote species mapping in a wider variety of ecosystems.

  19. Operational Tree Species Mapping in a Diverse Tropical Forest with Airborne Imaging Spectroscopy

    PubMed Central

    Baldeck, Claire A.; Asner, Gregory P.; Martin, Robin E.; Anderson, Christopher B.; Knapp, David E.; Kellner, James R.; Wright, S. Joseph

    2015-01-01

    Remote identification and mapping of canopy tree species can contribute valuable information towards our understanding of ecosystem biodiversity and function over large spatial scales. However, the extreme challenges posed by highly diverse, closed-canopy tropical forests have prevented automated remote species mapping of non-flowering tree crowns in these ecosystems. We set out to identify individuals of three focal canopy tree species amongst a diverse background of tree and liana species on Barro Colorado Island, Panama, using airborne imaging spectroscopy data. First, we compared two leading single-class classification methods—binary support vector machine (SVM) and biased SVM—for their performance in identifying pixels of a single focal species. From this comparison we determined that biased SVM was more precise and created a multi-species classification model by combining the three biased SVM models. This model was applied to the imagery to identify pixels belonging to the three focal species and the prediction results were then processed to create a map of focal species crown objects. Crown-level cross-validation of the training data indicated that the multi-species classification model had pixel-level producer’s accuracies of 94–97% for the three focal species, and field validation of the predicted crown objects indicated that these had user’s accuracies of 94–100%. Our results demonstrate the ability of high spatial and spectral resolution remote sensing to accurately detect non-flowering crowns of focal species within a diverse tropical forest. We attribute the success of our model to recent classification and mapping techniques adapted to species detection in diverse closed-canopy forests, which can pave the way for remote species mapping in a wider variety of ecosystems. PMID:26153693

  20. Proportion estimation and classification of mixed pixels in multispectral data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crouse, K.R.

    1979-01-01

    Remote sensing applications to crop productivity estimations are discussed with detailed instructions for developing classifier skills in multispectral data analysis for corn, soybeans, oats, and alfalfa crops. (PCS)

  1. Design of the low area monotonic trim DAC in 40 nm CMOS technology for pixel readout chips

    NASA Astrophysics Data System (ADS)

    Drozd, A.; Szczygiel, R.; Maj, P.; Satlawa, T.; Grybos, P.

    2014-12-01

    The recent research in hybrid pixel detectors working in single photon counting mode focuses on nanometer-scale or 3D technologies, which allow making pixels smaller and implementing more complex solutions in each pixel. Usually, a single pixel in the readout electronics for X-ray detection comprises a charge amplifier, a shaper and a discriminator, which allow events occurring at the detector to be classified as true or false hits by comparing the amplitude of the obtained signal with a threshold voltage, minimizing the influence of noise. However, making the pixel size smaller often causes problems with pixel-to-pixel uniformity, and additional effects like charge sharing become more visible. To improve channel-to-channel uniformity or to implement an algorithm minimizing the charge sharing effect, small-area trimming DACs working independently in each pixel are necessary. However, meeting the requirement of small area often results in poor linearity and even non-monotonicity. In this paper we present a novel low-area, thermometer-coded 6-bit DAC implemented in 40 nm CMOS technology. Monte Carlo simulations were performed on the described design, proving that the designed DAC is inherently monotonic under all conditions. The presented DAC was implemented in a prototype readout chip with 432 pixels working in single photon counting mode, with two trimming DACs in each pixel. Each DAC occupies an area of 8 μm × 18.5 μm. Measurements and chip tests were performed to obtain reliable statistical results.

  2. Prostate segmentation by sparse representation based classification

    PubMed Central

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-01-01

    Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. (2) The L1 regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result. (3) Residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard classification to soft classification. (4) Iterative SRC is proposed by using context information to iteratively refine the classification results. Results: The proposed method has been comprehensively evaluated on a dataset consisting of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with the traditional SRC based on the proposed four extensions. The experimental results show that our extended SRC can obtain not only more accurate classification results but also smoother and clearer prostate boundary than the traditional SRC. Besides, the comparison with other five state-of-the-art prostate segmentation methods indicates that our method can achieve better performance than other methods under comparison. Conclusions: The authors have proposed a novel prostate segmentation method based on the sparse representation based classification, which can achieve considerably accurate segmentation results in CT prostate segmentation. PMID:23039673
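
    The core sparse-representation decision rule can be sketched as follows: a pixel's feature vector is coded against a per-class dictionary with an elastic net (echoing extension (2) above), and the class with the smallest reconstruction residual wins. The dictionary learning, residue-based regression and iterative context refinement steps are omitted, and the alpha/l1_ratio values are assumptions.

```python
# Hedged SRC-style decision rule with elastic-net coding per class dictionary.
import numpy as np
from sklearn.linear_model import ElasticNet

def src_classify(x, dictionaries, alpha=0.01, l1_ratio=0.5):
    """x: feature vector of length d; dictionaries: {class: d x n_atoms array}."""
    residuals = {}
    for cls, D in dictionaries.items():
        coder = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                           fit_intercept=False, max_iter=5000).fit(D, x)
        residuals[cls] = np.linalg.norm(x - D @ coder.coef_)   # reconstruction error
    # smallest residual -> predicted class; residuals can also act as soft scores
    return min(residuals, key=residuals.get), residuals
```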

  3. Prostate segmentation by sparse representation based classification.

    PubMed

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-10-01

    The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. (2) The L1 regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result. (3) Residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard classification to soft classification. (4) Iterative SRC is proposed by using context information to iteratively refine the classification results. The proposed method has been comprehensively evaluated on a dataset consisting of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with the traditional SRC based on the proposed four extensions. The experimental results show that our extended SRC can obtain not only more accurate classification results but also smoother and clearer prostate boundary than the traditional SRC. Besides, the comparison with other five state-of-the-art prostate segmentation methods indicates that our method can achieve better performance than other methods under comparison. The authors have proposed a novel prostate segmentation method based on the sparse representation based classification, which can achieve considerably accurate segmentation results in CT prostate segmentation.

  4. Balanced VS Imbalanced Training Data: Classifying Rapideye Data with Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Ustuner, M.; Sanli, F. B.; Abdikan, S.

    2016-06-01

    The accuracy of supervised image classification is highly dependent upon several factors such as the design of the training set (sample selection, composition, purity and size), the resolution of the input imagery and landscape heterogeneity. The design of the training set is still a challenging issue since the sensitivity of the classifier algorithm at the learning stage differs even for the same dataset. In this paper, the classification of RapidEye imagery with balanced and imbalanced training data for mapping crop types was addressed. Classification with imbalanced training data may result in low accuracy in some scenarios. Support Vector Machines (SVM), Maximum Likelihood (ML) and Artificial Neural Network (ANN) classifications were implemented here to classify the data. For evaluating the influence of balanced and imbalanced training data on image classification algorithms, three different training datasets were created. Two balanced datasets, which have 70 and 100 pixels for each class of interest, and one imbalanced dataset, in which each class has a different number of pixels, were used in the classification stage. Results demonstrate that the ML and ANN classifications are affected by imbalanced training data, resulting in a reduction in accuracy (from 90.94% to 85.94% for ML and from 91.56% to 88.44% for ANN), while SVM is not significantly affected (from 94.38% to 94.69%) and even improves slightly. Our results highlight that SVM is a very robust, consistent and effective classifier, as it performs very well under both balanced and imbalanced training data situations. Furthermore, the training stage should be precisely and carefully designed for the needs of the adopted classifier.
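
    As a rough, hedged illustration of the balanced-versus-imbalanced comparison described above, the sketch below trains an SVM and an ANN on balanced and imbalanced subsets of a synthetic three-class dataset; the RapidEye pixels themselves are not available here, and the class sizes are invented for illustration.

    ```python
    # Synthetic stand-in for the balanced vs. imbalanced training-set experiment.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=3000, n_features=5, n_informative=4,
                               n_redundant=0, n_classes=3,
                               n_clusters_per_class=1, random_state=0)
    X_train, X_test, y_train, y_test = X[:2000], X[2000:], y[:2000], y[2000:]

    def subsample(X, y, per_class):
        """Take the first n training pixels of each class."""
        idx = np.concatenate([np.where(y == c)[0][:n] for c, n in per_class.items()])
        return X[idx], y[idx]

    balanced   = subsample(X_train, y_train, {0: 100, 1: 100, 2: 100})
    imbalanced = subsample(X_train, y_train, {0: 150, 1: 60, 2: 30})

    for name, clf in [("SVM", SVC(kernel="rbf", gamma="scale")),
                      ("ANN", MLPClassifier(max_iter=2000, random_state=0))]:
        for label, (Xs, ys) in [("balanced", balanced), ("imbalanced", imbalanced)]:
            clf.fit(Xs, ys)
            print(f"{name} {label:10s} accuracy = "
                  f"{accuracy_score(y_test, clf.predict(X_test)):.3f}")
    ```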

  5. Computer-Aided Diagnosis of Micro-Malignant Melanoma Lesions Applying Support Vector Machines.

    PubMed

    Jaworek-Korjakowska, Joanna

    2016-01-01

    Background. One of the most fatal skin disorders is malignant melanoma, the deadliest form of skin cancer. The aim of modern dermatology is the early detection of skin cancer, which usually results in a reduced mortality rate and less extensive treatment. This paper presents a study on the classification of melanoma in its early stage of development using SVMs as a useful technique for data classification. Method. In this paper an automatic algorithm for the classification of melanomas in their early stage, with a diameter under 5 mm, is presented. The system contains the following steps: image enhancement, lesion segmentation, feature calculation and selection, and a classification stage using SVMs. Results. The algorithm has been tested on 200 images including 70 melanomas and 130 benign lesions. The SVM classifier achieved a sensitivity of 90% and a specificity of 96%. The results indicate that the proposed approach captured most of the malignant cases and could provide reliable information for effective skin mole examination. Conclusions. Micro-melanomas, due to their small size and early stage of development, create enormous difficulties during diagnosis even for experts. The use of advanced equipment and sophisticated computer systems can help in the early diagnosis of skin lesions.
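
    The reported sensitivity and specificity follow directly from the confusion-matrix counts; the short example below reproduces those figures from fabricated predictions (70 melanomas, 130 benign lesions) and is only an arithmetic illustration, not the study's classifier output.

    ```python
    # Sensitivity/specificity arithmetic on a fabricated 200-lesion split.
    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array([1] * 70 + [0] * 130)   # 70 melanomas, 130 benign lesions
    y_pred = y_true.copy()
    y_pred[:7] = 0        # 7 missed melanomas -> sensitivity 63/70 = 0.90
    y_pred[70:75] = 1     # 5 false alarms     -> specificity 125/130 ~ 0.96

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print("sensitivity =", tp / (tp + fn))
    print("specificity =", tn / (tn + fp))
    ```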

  6. Image-classification-based global dimming algorithm for LED backlights in LCDs

    NASA Astrophysics Data System (ADS)

    Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv

    2015-07-01

    Backlight dimming can help LCDs reduce power consumption and improve CR. With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. The paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of input images. The parameters for the backlight dimming level and pixel compensation are adapted to the image classification. The simulation results show that the classification-based dimming algorithm provides an 86.13% improvement in power reduction compared with dimming without classification, with almost the same display quality. A prototype has been developed, and no distortions are perceived when playing videos. The practical average power reduction of the prototype TV is 18.72%, compared with a common TV without dimming.
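
    A minimal sketch of how such an image-classification-based global dimming step could look; the luminance/contrast thresholds and dimming levels below are invented for illustration and are not the paper's tuned parameters.

    ```python
    # Toy global dimming: classify the frame, pick a backlight level, compensate pixels.
    import numpy as np

    def dim_backlight(img):
        """img: grey-level frame as floats in [0, 1]."""
        luminance = img.mean()
        contrast = (img.max() - img.min()) / (img.max() + img.min() + 1e-8)
        if luminance > 0.6:       # bright scene: little headroom for dimming
            level = 1.0
        elif contrast > 0.8:      # dark but high-contrast scene: mild dimming
            level = 0.8
        else:                     # dark, low-contrast scene: strong dimming
            level = 0.6
        compensated = np.clip(img / level, 0.0, 1.0)   # pixel compensation
        return level, compensated

    level, frame = dim_backlight(np.random.default_rng(1).random((480, 640)))
    print("backlight level:", level)
    ```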

  7. Rapid Classification of Landsat TM Imagery for Phase 1 Stratification Using the Automated NDVI Threshold Supervised Classification (ANTSC) Methodology

    Treesearch

    William H. Cooke; Dennis M. Jacobs

    2005-01-01

    FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information....

  8. An Iterative Inference Procedure Applying Conditional Random Fields for Simultaneous Classification of Land Cover and Land Use

    NASA Astrophysics Data System (ADS)

    Albert, L.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.

  9. Classification with spatio-temporal interpixel class dependency contexts

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David A.

    1992-01-01

    A contextual classifier which can utilize both spatial and temporal interpixel dependency contexts is investigated. After spatial and temporal neighbors are defined, a general form of maximum a posteriori spatiotemporal contextual classifier is derived. This contextual classifier is then simplified under several assumptions. Joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by a Gibbs random field. The classification is performed in a recursive manner to allow a computationally efficient contextual classification. Experimental results with bitemporal TM data show a significant improvement of classification accuracy over noncontextual pixelwise classifiers. This spatiotemporal contextual classifier should find use in many applications of remote sensing, especially when classification accuracy is important.
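
    A hedged sketch of the decision rule that such a maximum a posteriori spatiotemporal contextual classifier builds on; the notation and the factorization of the spatial and temporal priors are assumptions made here for illustration, not taken from the paper.

    ```latex
    % x_i: observation at pixel i; \omega_i: its class label;
    % N_s(i), N_t(i): spatial and temporal neighbourhoods of pixel i;
    % the spatial prior is a Gibbs random field with energy U and partition function Z.
    \[
      \hat{\omega}_i \;=\; \arg\max_{\omega_i}\;
        p\!\left(x_i \mid \omega_i\right)\,
        p\!\left(\omega_i \mid \omega_{N_t(i)}\right)\,
        \frac{1}{Z}\exp\!\left(-U\!\left(\omega_i,\, \omega_{N_s(i)}\right)\right)
    \]
    ```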

  10. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.

    2018-06-01

    The contextual-based convolutional neural network (CNN) with deep architecture and pixel-based multilayer perceptron (MLP) with shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. In consequence, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN based on deep spatial feature representation and from the MLP based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters such as the uncertainty in object boundary partition and loss of useful fine spatial resolution detail were compensated. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively address the complicated problem of VFSR image classification.
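
    A hedged sketch of a confidence-driven fusion rule of the kind described above: where the CNN's top-class probability is high its label is kept, otherwise the pixel falls back to the spectral MLP prediction. The 0.9 threshold and the probability maps are illustrative assumptions, not the published fusion rules.

    ```python
    # Toy decision-level fusion of per-pixel CNN and MLP class probabilities.
    import numpy as np

    def fuse(cnn_proba, mlp_proba, confidence=0.9):
        """Per pixel: CNN label where the CNN is confident, MLP label otherwise."""
        cnn_label = cnn_proba.argmax(axis=-1)
        mlp_label = mlp_proba.argmax(axis=-1)
        keep_cnn = cnn_proba.max(axis=-1) >= confidence
        return np.where(keep_cnn, cnn_label, mlp_label)

    rng = np.random.default_rng(0)
    cnn = rng.dirichlet(np.ones(6), size=(4, 4))   # 4 x 4 pixels, 6 classes
    mlp = rng.dirichlet(np.ones(6), size=(4, 4))
    print(fuse(cnn, mlp))
    ```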

  11. Automatic parquet block sorting using real-time spectral classification

    NASA Astrophysics Data System (ADS)

    Astrom, Anders; Astrand, Erik; Johansson, Magnus

    1999-03-01

    This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information on an image sensor, which is a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features on the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 line/s. This opens up the possibility to maintain high production speed and still measure with good resolution.

  12. International regulatory requirements for skin sensitization testing.

    PubMed

    Daniel, Amber B; Strickland, Judy; Allen, David; Casati, Silvia; Zuang, Valérie; Barroso, João; Whelan, Maurice; Régimbald-Krnel, M J; Kojima, Hajime; Nishikawa, Akiyoshi; Park, Hye-Kyung; Lee, Jong Kwon; Kim, Tae Sung; Delgado, Isabella; Rios, Ludmila; Yang, Ying; Wang, Gangli; Kleinstreuer, Nicole

    2018-06-01

    Skin sensitization test data are required or considered by chemical regulation authorities around the world. These data are used to develop product hazard labeling for the protection of consumers or workers and to assess risks from exposure to skin-sensitizing chemicals. To identify opportunities for regulatory uses of non-animal replacements for skin sensitization tests, the needs and uses for skin sensitization test data must first be clarified. Thus, we reviewed skin sensitization testing requirements for seven countries or regions that are represented in the International Cooperation on Alternative Test Methods (ICATM). We noted the type of skin sensitization data required for each chemical sector and whether these data were used in a hazard classification, potency classification, or risk assessment context; the preferred tests; and whether alternative non-animal tests were acceptable. An understanding of national and regional regulatory requirements for skin sensitization testing will inform the development of ICATM's international strategy for the acceptance and implementation of non-animal alternatives to assess the health hazards and risks associated with potential skin sensitizers. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Computer program documentation for the patch subsampling processor

    NASA Technical Reports Server (NTRS)

    Nieves, M. J.; Obrien, S. O.; Oney, J. K. (Principal Investigator)

    1981-01-01

    The programs presented are intended to provide a way to extract a sample from a full-frame scene and summarize it in a useful way. The sample in each case was chosen to fill a 512-by-512 pixel (sample-by-line) image since this is the largest image that can be displayed on the Integrated Multivariant Data Analysis and Classification System. This sample size provides one megabyte of data for manipulation and storage and contains about 3% of the full-frame data. A patch image processor computes means for 256 32-by-32 pixel squares which constitute the 512-by-512 pixel image. Thus, 256 measurements are available for 8 vegetation indexes over a 100-mile square.
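
    The block-mean computation described above is a small piece of arithmetic: a 512-by-512 sample holds exactly 256 non-overlapping 32-by-32 pixel squares (a 16-by-16 grid of blocks). A minimal sketch with random stand-in data:

    ```python
    # 512 x 512 sample -> 256 means of 32 x 32 pixel squares.
    import numpy as np

    sample = np.random.default_rng(0).random((512, 512))
    block_means = sample.reshape(16, 32, 16, 32).mean(axis=(1, 3))   # shape (16, 16)
    assert block_means.size == 256
    print(block_means.shape)
    ```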

  14. Adaptive technique for matching the spectral response in skin lesions' images

    NASA Astrophysics Data System (ADS)

    Pavlova, P.; Borisova, E.; Pavlova, E.; Avramov, L.

    2015-03-01

    The suggested technique is a subsequent stage of data extraction from diffuse reflectance spectra and images of diseased tissue, with the final aim of skin cancer diagnostics. Our previous work allows us to extract patterns for some types of skin cancer as a ratio between spectra obtained from healthy and diseased tissue in the 380 - 780 nm region. The authenticity of the patterns depends on the tested point within the lesion area, and the resulting diagnosis can therefore only be assigned with some probability. In this work, two adaptations are implemented to localize the pixels of the lesion image where the reflectance spectrum corresponds to a pattern. The first adapts the standard to the individual patient, and the second translates the white-point basis of the spectrum to the relative white point of the image. Since the reflectance spectra and the image pixels refer to different white points, a correction of the compared colours is needed. The latter is done using a standard method for chromatic adaptation. The technique follows the steps below: calculation of the colorimetric XYZ parameters for the initial white point, fixed by the reflectance spectrum from healthy tissue; calculation of the XYZ parameters for the destination white point on the basis of an image of non-diseased tissue; transformation of the XYZ parameters of the test spectrum by the obtained matrix; and determination of the RGB values of the transformed XYZ parameters according to sRGB. Finally, the pixels of the lesion image whose colour corresponds to the test spectrum and a particular diagnostic pattern are marked with a specific colour.
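
    The chromatic-adaptation step referred to above can be sketched with the standard Bradford (von Kries-type) transform; the white points and the sample colour below are placeholders for the spectrum-derived and image-derived white points, not measured values from the study.

    ```python
    # Bradford chromatic adaptation between two white points (illustrative values).
    import numpy as np

    BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                         [-0.7502,  1.7135,  0.0367],
                         [ 0.0389, -0.0685,  1.0296]])

    def adapt(xyz, white_src, white_dst):
        """Map an XYZ colour measured under white_src to its appearance under white_dst."""
        cone_src = BRADFORD @ white_src
        cone_dst = BRADFORD @ white_dst
        scale = np.diag(cone_dst / cone_src)
        return np.linalg.inv(BRADFORD) @ scale @ BRADFORD @ xyz

    white_spectrum = np.array([0.9505, 1.0, 1.0891])   # e.g. a D65-like white from the spectra
    white_image    = np.array([0.9642, 1.0, 0.8249])   # e.g. a D50-like white from the image
    print(adapt(np.array([0.35, 0.28, 0.20]), white_spectrum, white_image))
    ```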

  15. Threshold selection for classification of MR brain images by clustering method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moldovanu, Simona; Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi; Obreja, Cristian

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known method for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis cases. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (or the area of white objects in the binary image) has been determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2w images. Each of these thresholds clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
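
    A hedged sketch of the procedure: binarize each image at a range of thresholds, record the white-pixel counts, and group the resulting profiles with dendrogram-based (hierarchical) clustering. The random images stand in for the MR scans, which are not available here.

    ```python
    # White-pixel counts per threshold, clustered hierarchically (stand-in data).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    images = [rng.integers(0, 256, size=(64, 64)) for _ in range(8)]   # stand-in slices

    thresholds = range(10, 100, 10)
    white_counts = np.array([[(img > t).sum() for t in thresholds] for img in images])

    Z = linkage(white_counts, method="ward")          # dendrogram structure
    print(fcluster(Z, t=2, criterion="maxclust"))     # two groups of images
    ```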

  16. Multi-target detection and positioning in crowds using multiple camera surveillance

    NASA Astrophysics Data System (ADS)

    Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng

    2018-04-01

    In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinates system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, the correspondences between each pixel in the region of interest are found under multiple constraints and the targets are positioned by pixel clustering. The algorithm can provide appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem where a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem and the three-dimensional position of the targets can also be obtained by pixel clustering. Finally, we verified the algorithm with multiple cameras in experiments, which showed that the algorithm has high accuracy and robustness.

  17. Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes

    NASA Astrophysics Data System (ADS)

    Kim, Ki Wan; Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Lee, Eui Chul; Park, Kang Ryoung

    2015-03-01

    The classification of eye openness and closure has been researched in various fields, e.g., driver drowsiness detection, physiological status analysis, and eye fatigue measurement. For a classification with high accuracy, accurate segmentation of the eye region is required. Most previous research used the segmentation method by image binarization on the basis that the eyeball is darker than skin, but the performance of this approach is frequently affected by thick eyelashes or shadows around the eye. Thus, we propose a fuzzy-based method for classifying eye openness and closure. First, the proposed method uses I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. Second, the eye region is binarized using the fuzzy logic system based on I and K inputs, which is less affected by eyelashes and shadows around the eye. The combined image of I and K pixels is obtained through the fuzzy logic system. Third, in order to reflect the effect by all the inference values on calculating the output score of the fuzzy system, we use the revised weighted average method, where all the rectangular regions by all the inference values are considered for calculating the output score. Fourth, the classification of eye openness or closure is successfully made by the proposed fuzzy-based method with eye images of low resolution which are captured in the environment of people watching TV at a distance. By using the fuzzy logic system, our method does not require the additional procedure of training irrespective of the chosen database. Experimental results with two databases of eye images show that our method is superior to previous approaches.

  18. A semi-automatic method for quantification and classification of erythrocytes infected with malaria parasites in microscopic images.

    PubMed

    Díaz, Gloria; González, Fabio A; Romero, Eduardo

    2009-04-01

    Visual quantification of parasitemia in thin blood films is a very tedious, subjective and time-consuming task. This study presents an original method for quantification and classification of erythrocytes in stained thin blood films infected with Plasmodium falciparum. The proposed approach is composed of three main phases: a preprocessing step, which corrects luminance differences. A segmentation step that uses the normalized RGB color space for classifying pixels either as erythrocyte or background followed by an Inclusion-Tree representation that structures the pixel information into objects, from which erythrocytes are found. Finally, a two step classification process identifies infected erythrocytes and differentiates the infection stage, using a trained bank of classifiers. Additionally, user intervention is allowed when the approach cannot make a proper decision. Four hundred fifty malaria images were used for training and evaluating the method. Automatic identification of infected erythrocytes showed a specificity of 99.7% and a sensitivity of 94%. The infection stage was determined with an average sensitivity of 78.8% and average specificity of 91.2%.
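
    The normalized RGB representation used in the segmentation step above discounts luminance differences by dividing each channel by the channel sum; a minimal sketch (with invented pixel values):

    ```python
    # Normalized RGB (chromaticity) coordinates for per-pixel classification.
    import numpy as np

    def normalized_rgb(img):
        """H x W x 3 image -> r, g, b = R/(R+G+B), G/(R+G+B), B/(R+G+B)."""
        rgb = img.astype(float)
        s = rgb.sum(axis=-1, keepdims=True) + 1e-8
        return rgb / s

    pixels = np.array([[[180, 60, 70],        # stained erythrocyte-like pixel
                        [210, 200, 195]]],    # background-like pixel
                      dtype=np.uint8)
    print(normalized_rgb(pixels))
    ```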

  19. Remote sensing of submerged aquatic vegetation in lower Chesapeake Bay - A comparison of Landsat MSS to TM imagery

    NASA Technical Reports Server (NTRS)

    Ackleson, S. G.; Klemas, V.

    1987-01-01

    Landsat MSS and TM imagery, obtained simultaneously over Guinea Marsh, VA, was analyzed and compared for its ability to detect submerged aquatic vegetation (SAV). An unsupervised clustering algorithm was applied to each image, where the input classification parameters are defined as functions of apparent sensor noise. Class confidence and accuracy were computed for all water areas by comparing the classified images, pixel-by-pixel, to rasterized SAV distributions derived from color aerial photography. To illustrate the effect of water depth on classification error, areas of depth greater than 1.9 m were masked, and class confidence and accuracy recalculated. A single-scattering radiative-transfer model is used to illustrate how percent canopy cover and water depth affect the volume reflectance from a water column containing SAV. For a submerged canopy that is morphologically and optically similar to Zostera marina inhabiting Lower Chesapeake Bay, dense canopies may be isolated by masking optically deep water. For less dense canopies, the effect of increasing water depth is to increase the apparent percent crown cover, which may result in classification error.

  20. Bolivian satellite technology program on ERTS natural resources

    NASA Technical Reports Server (NTRS)

    Brockmann, H. C. (Principal Investigator); Bartoluccic C., L.; Hoffer, R. M.; Levandowski, D. W.; Ugarte, I.; Valenzuela, R. R.; Urena E., M.; Oros, R.

    1977-01-01

    The author has identified the following significant results. Application of digital classification for mapping land use permitted the separation of units at more specific levels in less time. A correct classification of data in the computer has a positive effect on the accuracy of the final products. Land use unit comparison with types of soils as represented by the colors of the coded map showed a class relation. Soil types in relation to land cover and land use demonstrated that vegetation was a positive factor in soils classification. Groupings of image resolution elements (pixels) permit studies of land use at different levels, thereby forming parameters for the classification of soils.

  1. LANDSAT landcover information applied to regional planning decisions. [Prince Edward County, Virginia

    NASA Technical Reports Server (NTRS)

    Dixon, C. M.

    1981-01-01

    Land cover information derived from LANDSAT is being utilized by the Piedmont Planning District Commission located in the State of Virginia. Progress to date is reported on a level one land cover classification map being produced with nine categories. The nine categories of classification are defined. The computer compatible tape selection is presented. Two unsupervised classifications were done, with 50 and 70 classes respectively. Twenty-eight spectral classes were developed using the supervised technique, employing actual ground truth training sites. The accuracy of the unsupervised classifications is estimated through comparison with local county statistics and with an actual pixel count of LANDSAT information compared to ground truth.

  2. Estimation of the Botanical Composition of Clover-Grass Leys from RGB Images Using Data Simulation and Fully Convolutional Neural Networks

    PubMed Central

    Steen, Kim Arild; Green, Ole; Karstoft, Henrik

    2017-01-01

    Optimal fertilization of clover-grass fields relies on knowledge of the clover and grass fractions. This study shows how knowledge can be obtained by analyzing images collected in fields automatically. A fully convolutional neural network was trained to create a pixel-wise classification of clover, grass, and weeds in red, green, and blue (RGB) images of clover-grass mixtures. The estimated clover fractions of the dry matter from the images were found to be highly correlated with the real clover fractions of the dry matter, making this a cheap and non-destructive way of monitoring clover-grass fields. The network was trained solely on simulated top-down images of clover-grass fields. This enables the network to distinguish clover, grass, and weed pixels in real images. The use of simulated images for training reduces the manual labor to a few hours, as compared to more than 3000 h when all the real images are annotated for training. The network was tested on images with varied clover/grass ratios and achieved an overall pixel classification accuracy of 83.4%, while estimating the dry matter clover fraction with a standard deviation of 7.8%. PMID:29258215

  3. An intelligent support system for automatic detection of cerebral vascular accidents from brain CT images.

    PubMed

    Hajimani, Elmira; Ruano, M G; Ruano, A E

    2017-07-01

    This paper presents a Radial Basis Functions Neural Network (RBFNN) based detection system, for automatic identification of Cerebral Vascular Accidents (CVA) through analysis of Computed Tomographic (CT) images. For the design of a neural network classifier, a Multi Objective Genetic Algorithm (MOGA) framework is used to determine the architecture of the classifier, its corresponding parameters and input features by maximizing the classification precision, while ensuring generalization. This approach considers a large number of input features, comprising first and second order pixel intensity statistics, as well as symmetry/asymmetry information with respect to the ideal mid-sagittal line. Values of specificity of 98% and sensitivity of 98% were obtained, at pixel level, by an ensemble of non-dominated models generated by MOGA, in a set of 150 CT slices (1,867,602 pixels), marked by a neuroradiologist. This approach also compares favorably at a lesion level with three other published solutions, in terms of specificity (86% compared with 84%), degree of coincidence of marked lesions (89% compared with 77%) and classification accuracy rate (96% compared with 88%). Copyright © 2017. Published by Elsevier B.V.

  4. Accuracy assessments and areal estimates using two-phase stratified random sampling, cluster plots, and the multivariate composite estimator

    Treesearch

    Raymond L. Czaplewski

    2000-01-01

    Consider the following example of an accuracy assessment. Landsat data are used to build a thematic map of land cover for a multicounty region. The map classifier (e.g., a supervised classification algorithm) assigns each pixel into one category of land cover. The classification system includes 12 different types of forest and land cover: black spruce, balsam fir,...

  5. Accuracy assessment of biomass and forested area classification from modis, landstat-tm satellite imagery and forest inventory plot data

    Treesearch

    Dumitru Salajanu; Dennis M. Jacobs

    2007-01-01

    The objective of this study was to determine how well forest/non-forest and biomass classifications obtained from Landsat-TM and MODIS satellite data modeled with FIA plots, compare to each other and with forested area and biomass estimates from the national inventory data, as well as whether there is an increase in overall accuracy when pixel size (spatial resolution...

  6. Clinical study of noninvasive in vivo melanoma and nonmelanoma skin cancers using multimodal spectral diagnosis

    PubMed Central

    Lim, Liang; Nichols, Brandon; Migden, Michael R.; Rajaram, Narasimhan; Reichenberg, Jason S.; Markey, Mia K.; Ross, Merrick I.; Tunnell, James W.

    2014-01-01

    Abstract. The goal of this study was to determine the diagnostic capability of a multimodal spectral diagnosis (SD) for in vivo noninvasive disease diagnosis of melanoma and nonmelanoma skin cancers. We acquired reflectance, fluorescence, and Raman spectra from 137 lesions in 76 patients using custom-built optical fiber-based clinical systems. Biopsies of lesions were classified using standard histopathology as malignant melanoma (MM), nonmelanoma pigmented lesion (PL), basal cell carcinoma (BCC), actinic keratosis (AK), and squamous cell carcinoma (SCC). Spectral data were analyzed using principal component analysis. Using multiple diagnostically relevant principal components, we built leave-one-out logistic regression classifiers. Classification results were compared with histopathology of the lesion. Sensitivity/specificity for classifying MM versus PL (12 versus 17 lesions) was 100%/100%, for SCC and BCC versus AK (57 versus 14 lesions) was 95%/71%, and for AK and SCC and BCC versus normal skin (71 versus 71 lesions) was 90%/85%. The best classification for nonmelanoma skin cancers required multiple modalities; however, the best melanoma classification occurred with Raman spectroscopy alone. The high diagnostic accuracy for classifying both melanoma and nonmelanoma skin cancer lesions demonstrates the potential for SD as a clinical diagnostic device. PMID:25375350

  7. Clinical study of noninvasive in vivo melanoma and nonmelanoma skin cancers using multimodal spectral diagnosis

    NASA Astrophysics Data System (ADS)

    Lim, Liang; Nichols, Brandon; Migden, Michael R.; Rajaram, Narasimhan; Reichenberg, Jason S.; Markey, Mia K.; Ross, Merrick I.; Tunnell, James W.

    2014-11-01

    The goal of this study was to determine the diagnostic capability of a multimodal spectral diagnosis (SD) for in vivo noninvasive disease diagnosis of melanoma and nonmelanoma skin cancers. We acquired reflectance, fluorescence, and Raman spectra from 137 lesions in 76 patients using custom-built optical fiber-based clinical systems. Biopsies of lesions were classified using standard histopathology as malignant melanoma (MM), nonmelanoma pigmented lesion (PL), basal cell carcinoma (BCC), actinic keratosis (AK), and squamous cell carcinoma (SCC). Spectral data were analyzed using principal component analysis. Using multiple diagnostically relevant principal components, we built leave-one-out logistic regression classifiers. Classification results were compared with histopathology of the lesion. Sensitivity/specificity for classifying MM versus PL (12 versus 17 lesions) was 100%/100%, for SCC and BCC versus AK (57 versus 14 lesions) was 95%/71%, and for AK and SCC and BCC versus normal skin (71 versus 71 lesions) was 90%/85%. The best classification for nonmelanoma skin cancers required multiple modalities; however, the best melanoma classification occurred with Raman spectroscopy alone. The high diagnostic accuracy for classifying both melanoma and nonmelanoma skin cancer lesions demonstrates the potential for SD as a clinical diagnostic device.

  8. Obsessive-compulsive skin disorders: a novel classification based on degree of insight.

    PubMed

    Zhu, Tian Hao; Nakamura, Mio; Farahnik, Benjamin; Abrouk, Michael; Reichenberg, Jason; Bhutani, Tina; Koo, John

    2017-06-01

    Individuals with obsessive-compulsive features frequently visit dermatologists for complaints of the skin, hair or nails, and often progress towards a chronic relapsing course due to the challenge associated with accurate diagnosis and management of their psychiatric symptoms. The current DSM-5 formally recognizes body dysmorphic disorder, trichotillomania, neurotic excoriation and body focused repetitive behavior disorder as psychodermatological disorders belonging to the category of Obsessive-Compulsive and Related Disorders. However there is evidence that other relevant skin diseases such as delusions of parasitosis, dermatitis artefacta, contamination dermatitis, AIDS phobia, trichotemnomania and even lichen simplex chronicus possess prominent obsessive-compulsive characteristics that do not necessarily fit the full diagnostic criteria of the DSM-5. Therefore, to increase dermatologists' awareness of this unique group of skin disorders with OCD features, we propose a novel classification system called Obsessive-Compulsive Insight Continuum. Under this new classification system, obsessive-compulsive skin manifestations are categorized along a continuum based on degree of insight, from minimal insight with delusional obsessions to good insight with minimal obsessions. Understanding the level of insight is thus an important first step for clinicians who routinely interact with these patients.

  9. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network.

    PubMed

    Li, Yuexiang; Shen, Linlin

    2018-02-11

    Skin lesions are a severe disease globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful to increase the accuracy and efficiency of pathologists. In this paper, we proposed two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating the distance heat-map. A straight-forward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracies of our frameworks, i.e., 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3 were achieved.

  10. Classification of Urban Aerial Data Based on Pixel Labelling with Deep Convolutional Neural Networks and Logistic Regression

    NASA Astrophysics Data System (ADS)

    Yao, W.; Poleswki, P.; Krzystek, P.

    2016-06-01

    The recent success of deep convolutional neural networks (CNN) on a large number of applications can be attributed to large amounts of available training data and increasing computing power. In this paper, a semantic pixel labelling scheme for urban areas using a multi-resolution CNN and hand-crafted spatial-spectral features of airborne remotely sensed data is presented. Both CNN and hand-crafted features are applied to image/DSM patches to produce per-pixel class probabilities with an L1-norm regularized logistic regression classifier. Evidence theory infers a degree of belief for pixel labelling from the different sources to smooth regions, handling the conflicts present in both classifiers while reducing the uncertainty. The aerial data used in this study were provided by ISPRS as benchmark datasets for 2D semantic labelling tasks in urban areas, and consist of two data sources, LiDAR and a color infrared camera. The test sites are parts of a city in Germany which is assumed to consist of typical object classes including impervious surfaces, trees, buildings, low vegetation, vehicles and clutter. The evaluation is based on the computation of pixel-based confusion matrices by random sampling. The performance of the strategy with respect to scene characteristics and method combination strategies is analyzed and discussed. The competitive classification accuracy can not only be explained by the nature of the input data sources (e.g., the above-ground height of the nDSM highlights the vertical dimension of houses, trees and even cars, while the near-infrared spectrum indicates vegetation), but is also attributed to the decision-level fusion of the CNN's texture-based approach with multichannel spatial-spectral hand-crafted features based on evidence combination theory.

  11. Epidermal photonic devices for quantitative imaging of temperature and thermal transport characteristics of the skin

    NASA Astrophysics Data System (ADS)

    Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Chad Webb, R.; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A.

    2014-09-01

    Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or ‘epidermal’, photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively.

  12. Epidermal photonic devices for quantitative imaging of temperature and thermal transport characteristics of the skin.

    PubMed

    Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Webb, R Chad; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A

    2014-09-19

    Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or 'epidermal', photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively.

  13. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
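
    The resolution experiment above can be mimicked by computing NDVI from red and near-infrared bands and degrading the pixel size by block averaging, after which any texture or fractal measure can be tracked across resolutions. The bands below are random stand-ins for the imagery, so this is only a sketch of the mechanics.

    ```python
    # NDVI plus block averaging to simulate coarser pixel sizes (e.g. 10 to 80 m).
    import numpy as np

    rng = np.random.default_rng(0)
    red, nir = rng.random((2, 160, 160))
    ndvi = (nir - red) / (nir + red + 1e-8)

    def coarsen(img, factor):
        """Average non-overlapping factor x factor blocks."""
        h, w = img.shape
        img = img[:h - h % factor, :w - w % factor]
        return img.reshape(img.shape[0] // factor, factor,
                           img.shape[1] // factor, factor).mean(axis=(1, 3))

    for factor in (1, 2, 4, 8):
        print(factor, coarsen(ndvi, factor).shape)
    ```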

  14. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.

  15. Land use mapping from CBERS-2 images with open source tools by applying different classification algorithms

    NASA Astrophysics Data System (ADS)

    Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.

    2016-02-01

    Land cover classification is often based on clear differences between classes but great homogeneity within each of them. This cover information is obtained through field work or by means of processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative to perform this task. However, in some developing countries and particularly in Casacoima municipality in Venezuela, there is a lack of geographic information systems due to the lack of updated information and the high cost of software license acquisition. This research proposes a low cost methodology to develop thematic mapping of local land use and types of coverage in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised classification was applied both per pixel and per region, using different classification algorithms and comparing them with one another. Per-pixel classification was based on the Maxver (maximum likelihood) and Euclidean distance (minimum distance) algorithms, while per-region classification was based on the Bhattacharya algorithm. Satisfactory results were obtained from the per-region classification, where an overall reliability of 83.93% and a kappa index of 0.81 were observed. The Maxver algorithm showed a reliability value of 73.36% and a kappa index of 0.69, while Euclidean distance obtained values of 67.17% and 0.61 for reliability and kappa index, respectively. It was demonstrated that the proposed methodology is very useful in cartographic processing and updating, which in turn serves as a support for developing management plans and land management. Hence, open source tools proved to be an economically viable alternative not only for forestry organizations, but for the general public, allowing them to develop projects in economically depressed and/or environmentally threatened areas.
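
    A minimal per-pixel minimum-distance (Euclidean) classifier of the kind compared above; the class means and pixel vectors are synthetic stand-ins for the CBERS-2 training statistics.

    ```python
    # Minimum-distance (Euclidean) per-pixel classification with toy class means.
    import numpy as np

    class_means = {"forest": np.array([0.05, 0.30, 0.45]),
                   "water":  np.array([0.02, 0.05, 0.01]),
                   "urban":  np.array([0.25, 0.28, 0.30])}

    def min_distance_classify(pixels, class_means):
        names = list(class_means)
        centers = np.stack([class_means[n] for n in names])        # (classes, bands)
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=-1)
        return [names[i] for i in d.argmin(axis=1)]

    pixels = np.array([[0.04, 0.28, 0.42],
                       [0.24, 0.27, 0.31]])
    print(min_distance_classify(pixels, class_means))
    ```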

  16. Two-tier tissue decomposition for histopathological image representation and classification.

    PubMed

    Gultekin, Tunc; Koyuncu, Can Fahrettin; Sokmensuer, Cenk; Gunduz-Demir, Cigdem

    2015-01-01

    In digital pathology, devising effective image representations is crucial to design robust automated diagnosis systems. To this end, many studies have proposed to develop object-based representations, instead of directly using image pixels, since a histopathological image may contain a considerable amount of noise typically at the pixel-level. These previous studies mostly employ color information to define their objects, which approximately represent histological tissue components in an image, and then use the spatial distribution of these objects for image representation and classification. Thus, object definition has a direct effect on the way of representing the image, which in turn affects classification accuracies. In this paper, our aim is to design a classification system for histopathological images. Towards this end, we present a new model for effective representation of these images that will be used by the classification system. The contributions of this model are twofold. First, it introduces a new two-tier tissue decomposition method for defining a set of multityped objects in an image. Different than the previous studies, these objects are defined combining texture, shape, and size information and they may correspond to individual histological tissue components as well as local tissue subregions of different characteristics. As its second contribution, it defines a new metric, which we call dominant blob scale, to characterize the shape and size of an object with a single scalar value. Our experiments on colon tissue images reveal that this new object definition and characterization provides distinguishing representation of normal and cancerous histopathological images, which is effective to obtain more accurate classification results compared to its counterparts.

  17. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, which hampers the widespread utility of such technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s). The approach is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936

  18. As-built design specification for PARCLS

    NASA Technical Reports Server (NTRS)

    Tompkins, M. A. (Principal Investigator)

    1981-01-01

    The PARCLS program, part of the CLASFYG package, reads a parameter file created by the CLASFYG program and a pure pixel ground truth file in order to create a classification file of three separate crop categories in universal format.

  19. Robust efficient estimation of heart rate pulse from video.

    PubMed

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-04-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the heart rate. Various experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed with normal illumination show the algorithm is comparable with pulse oximeter devices both in accuracy and sensitivity.
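
    A hedged sketch of the log-space pixel-quotient idea: take the mean skin-pixel value per frame, form its log ratio against a reference frame, and read the heart rate off the dominant spectral peak in a plausible band. The synthetic sinusoidal trace below replaces real video frames, so this only illustrates the signal path.

    ```python
    # Pulse estimation from a log-space pixel quotient (synthetic skin signal).
    import numpy as np

    fps, seconds, hr_hz = 30, 20, 1.2                 # 1.2 Hz = 72 beats per minute
    t = np.arange(fps * seconds) / fps
    mean_skin = 100 * (1 + 0.01 * np.sin(2 * np.pi * hr_hz * t))   # per-frame mean skin value

    signal = np.log(mean_skin) - np.log(mean_skin[0])   # pixel quotient in log space
    signal -= signal.mean()
    freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs > 0.7) & (freqs < 3.0)                 # plausible heart-rate band
    print("estimated heart rate:", 60 * freqs[band][spectrum[band].argmax()], "bpm")
    ```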

  20. Robust efficient estimation of heart rate pulse from video

    PubMed Central

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-01-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the heart rate. Various experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed with normal illumination show the algorithm is comparable with pulse oximeter devices both in accuracy and sensitivity. PMID:24761294

  1. Mixing geometric and radiometric features for change classification

    NASA Astrophysics Data System (ADS)

    Fournier, Alexandre; Descombes, Xavier; Zerubia, Josiane

    2008-02-01

    Most basic change detection algorithms use a pixel-based approach. Whereas such an approach is quite well suited to monitoring large-area changes (such as urban growth monitoring) in low resolution images, an object-based approach seems more relevant when the change detection is specifically aimed toward targets (such as small buildings and vehicles). In this paper, we present an approach that mixes radiometric and geometric features to qualify the changed zones. The goal is to establish links (appearance, disappearance, substitution ...) between the detected changes and the underlying objects. We proceed by first clustering the change map (containing each pixel's bitemporal radiometric values) into different classes using the entropy-kmeans algorithm. Assuming that most man-made objects have a polygonal shape, a polygonal approximation algorithm is then used to characterize the resulting zone shapes, allowing us to refine the primary rough classification by integrating the polygon orientations into the state space. Tests are currently conducted on Quickbird data.

  2. A Subpixel Classification of Multispectral Satellite Imagery for Interpetation of Tundra-Taiga Ecotone Vegetation (Case Study on Tuliok River Valley, Khibiny, Russia)

    NASA Astrophysics Data System (ADS)

    Mikheeva, A. I.; Tutubalina, O. V.; Zimin, M. V.; Golubeva, E. I.

    2017-12-01

    The tundra-taiga ecotone plays a significant role in northern ecosystems. Due to global climatic changes, the vegetation of the ecotone is the key object of many remote-sensing studies. The interpretation of vegetation and non-vegetation objects of the tundra-taiga ecotone on satellite imagery of moderate resolution is complicated by the difficulty of extracting these objects from the spectral and spatial mixtures within a pixel. This article describes a method for the subpixel classification of a Terra ASTER satellite image for vegetation mapping of the tundra-taiga ecotone in the Tuliok River valley, Khibiny Mountains, Russia. It was demonstrated that this method allows the position of the boundaries of ecotone objects and their abundance to be determined on the basis of quantitative criteria, which provides a more accurate characterization of ecotone vegetation compared to the per-pixel approach to automatic imagery interpretation.

  3. Three-dimensional object recognition using similar triangles and decision trees

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
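
    A similarity-invariant triangle feature of the kind TRIDEC builds from pixel triples can be sketched as follows: for every triple of active pixels, sort the three side lengths and keep the two ratios to the longest side, which are unchanged by translation, scaling and in-plane rotation. The binning into a small histogram is an assumption made here for illustration.

    ```python
    # Histogram of similarity-invariant side-ratio pairs over all pixel triples.
    import numpy as np
    from itertools import combinations

    def triangle_features(points, bins=8):
        hist = np.zeros((bins, bins))
        for a, b, c in combinations(points, 3):
            sides = np.sort([np.linalg.norm(a - b),
                             np.linalg.norm(b - c),
                             np.linalg.norm(c - a)])
            if sides[2] == 0:          # degenerate triple
                continue
            i = min(int(bins * sides[1] / sides[2]), bins - 1)   # mid / long
            j = min(int(bins * sides[0] / sides[2]), bins - 1)   # short / long
            hist[i, j] += 1
        return hist.ravel()

    pts = np.array([[3, 4], [10, 4], [6, 12], [20, 20], [25, 18]], dtype=float)
    print(triangle_features(pts).nonzero()[0])
    ```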

  4. User oriented ERTS-1 images. [vegetation identification in Canada through image enhancement

    NASA Technical Reports Server (NTRS)

    Shlien, S.; Goodenough, D.

    1974-01-01

    Photographic reproductions of ERTS-1 images are capable of displaying only a portion of the total information available from the multispectral scanner. Methods are being developed to generate ERTS-1 images oriented towards special users such as agriculturists, foresters, and hydrologists by applying image enhancement techniques and interactive statistical classification schemes. Spatial boundaries and linear features can be emphasized and delineated using simple filters. Linear and nonlinear transformations can be applied to the spectral data to emphasize certain ground information. An automatic classification scheme was developed to identify particular ground cover classes such as fallow, grain, rape seed or various vegetation covers. The scheme applies the maximum likelihood decision rule to the spectral information and classifies the ERTS-1 image on a pixel by pixel basis. Preliminary results indicate that the classifier has limited success in distinguishing crops, but is well adapted for identifying different types of vegetation.

  5. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  6. Cloud field classification based on textural features

    NASA Technical Reports Server (NTRS)

    Sengupta, Sailes Kumar

    1989-01-01

    An essential component in global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and textural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used. One is based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution. These are then used as features in subsequent classification. The MaxMin textural features, on the other hand, are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrences of the grey level pair (I,J) that are consecutive and thresholded local extremes separated by a given pixel distance d. Textural measures are then computed based on this matrix in much the same manner as is done in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near IR visible channel. The classification algorithm used is the well-known Stepwise Discriminant Analysis. The overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features and at any given spatial resolution, give approximately the same classification accuracy. A neural network based classifier with a feed-forward architecture and a back-propagation training algorithm is used to increase the classification accuracy, using these two classes of features. Preliminary results based on the GLDV textural features alone look promising.
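
    The GLDV measure described above lends itself to a compact illustration. The following is a hedged sketch (not the paper's code; the displacement, grey-level range, and chosen statistics are illustrative assumptions) of computing a grey level difference histogram at a horizontal displacement d and deriving a few texture statistics from it.

        # Sketch of GLDV texture features: histogram of absolute grey-level differences
        # at horizontal displacement d, plus mean, contrast, entropy, and angular
        # second moment of that distribution.
        import numpy as np

        def gldv_features(image, d=1, levels=256):
            """Return (mean, contrast, entropy, angular second moment) of the GLDV."""
            img = np.asarray(image, dtype=np.int32)
            diff = np.abs(img[:, :-d] - img[:, d:]).ravel()       # horizontal differences
            hist = np.bincount(diff, minlength=levels).astype(float)
            p = hist / hist.sum()                                  # difference distribution
            k = np.arange(levels)
            mean = float((k * p).sum())
            contrast = float((k ** 2 * p).sum())
            nz = p[p > 0]
            entropy = float(-(nz * np.log2(nz)).sum())
            asm = float((p ** 2).sum())
            return mean, contrast, entropy, asm

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            smooth = rng.integers(100, 110, size=(64, 64))   # low-texture patch
            noisy = rng.integers(0, 256, size=(64, 64))      # high-texture patch
            print("smooth:", gldv_features(smooth, d=1))
            print("noisy :", gldv_features(noisy, d=1))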

  7. Spectral-spatial hyperspectral image classification using super-pixel-based spatial pyramid representation

    NASA Astrophysics Data System (ADS)

    Fan, Jiayuan; Tan, Hui Li; Toomik, Maria; Lu, Shijian

    2016-10-01

    Spatial pyramid matching has demonstrated its power for the image recognition task by pooling features from spatially increasingly fine sub-regions. Motivated by the concept of feature pooling at multiple pyramid levels, we propose a novel spectral-spatial hyperspectral image classification approach using superpixel-based spatial pyramid representation. This technique first generates multiple superpixel maps by gradually decreasing the number of superpixels, so that the spatial regions associated with labelled samples grow in size. For each superpixel map, a sparse representation of the pixels within each spatial region is then computed through local max pooling. Finally, features learned from training samples are aggregated and used to train a support vector machine (SVM) classifier. The proposed spectral-spatial hyperspectral image classification technique has been evaluated on two public hyperspectral datasets, including the Indian Pines image containing 16 different agricultural scene categories with a 20m resolution acquired by AVIRIS and the University of Pavia image containing 9 land-use categories with a 1.3m spatial resolution acquired by the ROSIS-03 sensor. Experimental results show significantly improved performance compared with state-of-the-art works. The major contributions of this proposed technique include (1) a new spectral-spatial classification approach to generate feature representations for hyperspectral images, (2) a complementary yet effective feature pooling approach, i.e. the superpixel-based spatial pyramid representation used to study spatial correlation, and (3) evaluation on two public hyperspectral image datasets with superior image classification performance.

  8. Automatic classification of endoscopic images for premalignant conditions of the esophagus

    NASA Astrophysics Data System (ADS)

    Boschetto, Davide; Gambaretto, Gloria; Grisan, Enrico

    2016-03-01

    Barrett's esophagus (BE) is a precancerous complication of gastroesophageal reflux disease in which normal stratified squamous epithelium lining the esophagus is replaced by intestinal metaplastic columnar epithelium. Repeated endoscopies and multiple biopsies are often necessary to establish the presence of intestinal metaplasia. Narrow Band Imaging (NBI) is an imaging technique commonly used with endoscopies that enhances the contrast of vascular pattern on the mucosa. We present a computer-based method for the automatic normal/metaplastic classification of endoscopic NBI images. Superpixel segmentation is used to identify and cluster pixels belonging to uniform regions. From each uniform clustered region of pixels, eight features maximizing differences among normal and metaplastic epithelium are extracted for the classification step. For each superpixel, the three mean intensities of each color channel are firstly selected as features. Three added features are the mean intensities for each superpixel after separately applying to the red-channel image three different morphological filters (top-hat filtering, entropy filtering and range filtering). The last two features require the computation of the Grey-Level Co-Occurrence Matrix (GLCM), and are reflective of the contrast and the homogeneity of each superpixel. The classification step is performed using an ensemble of 50 classification trees, with a 10-fold cross-validation scheme by training the classifier at each step on a random 70% of the images and testing on the remaining 30% of the dataset. Sensitivity and Specificity are respectively of 79.2% and 87.3%, with an overall accuracy of 83.9%.
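
    The last two features above can be illustrated compactly. Below is a hedged sketch (not the authors' code; the quantization level, pixel offset, and use of a rectangular patch instead of a true superpixel are illustrative assumptions) of computing GLCM contrast and homogeneity with NumPy.

        # Sketch of GLCM contrast and homogeneity for a grey-level patch.
        import numpy as np

        def glcm(patch, levels=32, dx=1, dy=0):
            """Symmetric, normalized grey-level co-occurrence matrix for one offset."""
            q = (np.asarray(patch, dtype=float) / 256.0 * levels).astype(int).clip(0, levels - 1)
            m = np.zeros((levels, levels), dtype=float)
            rows, cols = q.shape
            for r in range(rows - dy):
                for c in range(cols - dx):
                    i, j = q[r, c], q[r + dy, c + dx]
                    m[i, j] += 1
                    m[j, i] += 1  # make the matrix symmetric
            return m / m.sum()

        def contrast_homogeneity(patch):
            p = glcm(patch)
            i, j = np.indices(p.shape)
            contrast = float((p * (i - j) ** 2).sum())
            homogeneity = float((p / (1.0 + np.abs(i - j))).sum())
            return contrast, homogeneity

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            print(contrast_homogeneity(rng.integers(0, 256, (40, 40))))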

  9. Mapping Sub-Antarctic Cushion Plants Using Random Forests to Combine Very High Resolution Satellite Imagery and Terrain Modelling

    PubMed Central

    Bricher, Phillippa K.; Lucieer, Arko; Shaw, Justine; Terauds, Aleks; Bergstrom, Dana M.

    2013-01-01

    Monitoring changes in the distribution and density of plant species often requires accurate and high-resolution baseline maps of those species. Detecting such change at the landscape scale is often problematic, particularly in remote areas. We examine a new technique to improve accuracy and objectivity in mapping vegetation, combining species distribution modelling and satellite image classification on a remote sub-Antarctic island. In this study, we combine spectral data from very high resolution WorldView-2 satellite imagery and terrain variables from a high resolution digital elevation model to improve mapping accuracy, in both pixel- and object-based classifications. Random forest classification was used to explore the effectiveness of these approaches on mapping the distribution of the critically endangered cushion plant Azorella macquariensis Orchard (Apiaceae) on sub-Antarctic Macquarie Island. Both pixel- and object-based classifications of the distribution of Azorella achieved very high overall validation accuracies (91.6–96.3%, κ = 0.849–0.924). Both two-class and three-class classifications were able to accurately and consistently identify the areas where Azorella was absent, indicating that these maps provide a suitable baseline for monitoring expected change in the distribution of the cushion plants. Detecting such change is critical given the threats this species is currently facing under altering environmental conditions. The method presented here has applications to monitoring a range of species, particularly in remote and isolated environments. PMID:23940805

  10. Simulating urban land cover changes at sub-pixel level in a coastal city

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaofeng; Deng, Lei; Feng, Huihui; Zhao, Yanchuang

    2014-10-01

    The simulation of urban expansion or land cover changes is a major theme in both geographic information science and landscape ecology. Yet until now, almost all previous studies have been based on grid computations at the pixel level. With the prevalence of spectral mixture analysis in urban land cover research, the simulation of urban land cover at the sub-pixel level is being put on the agenda. This study provides a new approach to land cover simulation at the sub-pixel level. Landsat TM/ETM+ images of Xiamen City, China, from January 2002 and January 2007 were used to acquire land cover data through supervised classification. The two classified land cover maps were then used to extract the transformation rules between 2002 and 2007 using logistic regression. The transformation probability of each land cover type in a given pixel was taken as its percentage within that pixel after normalization. Cellular automata (CA) based grid computation was then carried out to simulate land cover for 2007. The simulated 2007 sub-pixel land cover was verified against a validated sub-pixel land cover map obtained by spectral mixture analysis in our previous studies for the same date. Finally, the sub-pixel land cover for 2017 was simulated for urban planning and management. The results showed that our method is useful for land cover simulation at the sub-pixel level. Although the simulation accuracy is not yet satisfactory for all land cover types, the approach offers a promising starting point for CA-based urban land cover simulation at the sub-pixel level.
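
    The normalization step described above can be illustrated as follows. This is a hedged sketch under stated assumptions (synthetic driving factors, scikit-learn, one binary logistic model per land cover class; none of the variable names come from the paper): per-class transformation probabilities for a pixel are row-normalized so they can be read as sub-pixel cover fractions.

        # Sketch: logistic-regression transition probabilities normalized to sub-pixel fractions.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_pixels, n_drivers, classes = 500, 4, ["urban", "vegetation", "water"]

        X = rng.normal(size=(n_pixels, n_drivers))   # driving factors (e.g. distance to road, slope)
        models = {}
        for k, name in enumerate(classes):
            # Hypothetical binary labels: did this pixel convert to class `name`?
            y = (X[:, k % n_drivers] + rng.normal(scale=0.5, size=n_pixels) > 0).astype(int)
            models[name] = LogisticRegression().fit(X, y)

        # Per-class transformation probabilities per pixel, row-normalized so the
        # values of one pixel sum to 1 and can be treated as sub-pixel fractions.
        probs = np.column_stack([models[name].predict_proba(X)[:, 1] for name in classes])
        fractions = probs / probs.sum(axis=1, keepdims=True)
        print(fractions[:3], fractions[:3].sum(axis=1))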

  11. Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment.

    PubMed

    Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara

    2018-04-06

    The present study tested the combination of an established and a validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning and food matching and standardization based on natural language processing. The former is specific because it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching firstly describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model trained on fake-food images acquired by 124 study participants and providing fifty-five food classes was 92·18 %, while the food matching was performed with a classification accuracy of 93 %. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing the images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
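
    The two segmentation measures named above are straightforward to compute. The following is a minimal sketch (not the study's evaluation code; the label maps are toy examples) of overall pixel accuracy and per-class Intersection over Union from integer label maps.

        # Sketch: pixel accuracy and per-class IoU for semantic segmentation results.
        import numpy as np

        def pixel_accuracy(pred, gt):
            return float((pred == gt).mean())

        def per_class_iou(pred, gt, num_classes):
            ious = []
            for c in range(num_classes):
                inter = np.logical_and(pred == c, gt == c).sum()
                union = np.logical_or(pred == c, gt == c).sum()
                ious.append(inter / union if union else float("nan"))
            return ious

        if __name__ == "__main__":
            gt = np.array([[0, 0, 1], [1, 2, 2], [2, 2, 2]])
            pred = np.array([[0, 1, 1], [1, 2, 2], [2, 2, 0]])
            print(pixel_accuracy(pred, gt), per_class_iou(pred, gt, 3))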

  12. Hyperspectral image classification based on local binary patterns and PCANet

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features of a specified position are transformed into a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
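
    The feature-stacking step above can be sketched as follows. This is a hedged illustration under stated assumptions (synthetic data cube, scikit-image's local_binary_pattern, hand-picked band indices standing in for the LPE band selection, and one LBP code per pixel per band rather than the paper's exact feature design).

        # Sketch: per-band LBP texture codes stacked with spectral values per pixel.
        import numpy as np
        from skimage.feature import local_binary_pattern

        def spectral_lbp_features(cube, band_indices, P=8, R=1.0):
            """cube: (rows, cols, bands) hyperspectral array; returns (rows*cols, features)."""
            rows, cols, _ = cube.shape
            spectral = cube[:, :, band_indices].reshape(rows * cols, -1)
            textures = []
            for b in band_indices:
                # LBP expects integer grey levels, so each band is rescaled to 8 bits.
                band = np.round(cube[:, :, b] * 255).astype(np.uint8)
                lbp = local_binary_pattern(band, P, R, method="uniform")
                textures.append(lbp.reshape(rows * cols, 1))
            return np.hstack([spectral] + textures)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            cube = rng.random((32, 32, 20))
            feats = spectral_lbp_features(cube, band_indices=[2, 7, 15])
            print(feats.shape)  # (1024, 6): 3 spectral values + 3 LBP codes per pixel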

  13. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    NASA Astrophysics Data System (ADS)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
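
    The block-threshold selection step above can be illustrated in isolation. Below is a hedged sketch (the histograms are synthetic, the histogram construction and the interpolation of block thresholds are not shown, and the decision rule "intensity at or above the threshold is foreground" is an assumption) of choosing the threshold that minimizes the foreground/background classification error implied by two intensity histograms.

        # Sketch: threshold that minimizes misclassified mass between two histograms.
        import numpy as np

        def best_threshold(fg_hist, bg_hist):
            """Threshold t minimizing: foreground below t + background at/above t."""
            fg = np.asarray(fg_hist, dtype=float)
            bg = np.asarray(bg_hist, dtype=float)
            fg_below = np.concatenate(([0.0], np.cumsum(fg)[:-1]))  # foreground lost if cut at t
            bg_above = bg.sum() - np.cumsum(bg) + bg                # background kept if cut at t
            errors = fg_below + bg_above
            return int(np.argmin(errors))

        if __name__ == "__main__":
            bins = np.arange(256)
            bg_hist = np.exp(-((bins - 60) ** 2) / (2 * 15 ** 2))   # dark background mode
            fg_hist = np.exp(-((bins - 170) ** 2) / (2 * 20 ** 2))  # bright foreground mode
            print(best_threshold(fg_hist, bg_hist))  # lands between the two modes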

  14. Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes

    EPA Science Inventory

    Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and moni...

  15. Spatial-spectral blood cell classification with microscopic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ran, Qiong; Chang, Lan; Li, Wei; Xu, Xiaofeng

    2017-10-01

    Microscopic hyperspectral images provide a new way for blood cell examination. The hyperspectral imagery can greatly facilitate the classification of different blood cells. In this paper, the microscopic hyperspectral images are acquired by connecting the microscope and the hyperspectral imager, and then tested for blood cell classification. For combined use of the spectral and spatial information provided by hyperspectral images, a spatial-spectral classification method is improved from the classical extreme learning machine (ELM) by integrating spatial context into the image classification task with a Markov random field (MRF) model. Comparisons are made among the ELM, ELM-MRF, support vector machine (SVM) and SVM-MRF methods. Results show that the spatial-spectral classification methods (ELM-MRF, SVM-MRF) perform better than the pixel-based methods (ELM, SVM), and the proposed ELM-MRF has higher precision and shows more accurate localization of cells.

  16. Application of Polynomial Neural Networks to Classification of Acoustic Warfare Signals

    DTIC Science & Technology

    1993-04-01

    on Neural Networks, Vol. II, June, 1987. [66] Shynk, J.J., "Adaptive IIR filtering," IEEE ASSP Magazine, Vol. 6, No. 2, Apr. 1989. [67] Specht ... This is the size of the yellow capture window which will be displayed on the screen. The best setting for pixel-rows is two greater than exemplar... exemplar size of 4 to be captured by the PNN. The pixel-rows setting is 6, which allows all four rows of the retina data to fit inside the yellow capture window.

  17. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network

    PubMed Central

    2018-01-01

    Skin lesions are a severe disease globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful to increase the accuracy and efficiency of pathologists. In this paper, we proposed two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating the distance heat-map. A straight-forward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracies of our frameworks, i.e., 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3 were achieved. PMID:29439500

  18. Analysis and classification of normal and pathological skin tissue spectra using neural networks

    NASA Astrophysics Data System (ADS)

    Bruch, Reinhard F.; Afanasyeva, Natalia I.; Gummuluri, Satyashree

    2000-07-01

    An innovative spectroscopic diagnostic method has been developed for investigation of different regions of normal human skin tissue, as well as cancerous and precancerous conditions in vivo, ex vivo and in vitro. This new method is a combination of fiber-optical evanescent wave Fourier Transform infrared (FEW-FTIR) spectroscopy and fiber optic techniques using low-loss, highly flexible and nontoxic fiber optical sensors. The FEW-FTIR technique is nondestructive and very sensitive to changes in vibrational spectra in the IR region, without heating or staining and thus without altering the skin tissue. A special software package was developed for the treatment of the spectra. This package includes a database, programs for data preparation and presentation, and neural networks for classification of disease states. An unsupervised competitive learning neural network is implemented for skin cancer diagnosis. In this study, we have investigated and classified skin tissue in the range of 1400 to 1800 cm-1 using these programs. The results of our surface analysis of skin tissue are discussed in terms of molecular structural similarities and differences as well as in terms of different skin states represented by eleven different skin spectra classes.

  19. The Color of Health: Skin Color, Ethnoracial Classification, and Discrimination in the Health of Latin Americans

    PubMed Central

    Perreira, Krista M.; Telles, Edward E.

    2014-01-01

    Latin America is one of the most ethnoracially heterogeneous regions of the world. Despite this, health disparities research in Latin America tends to focus on gender, class and regional health differences while downplaying ethnoracial differences. Few scholars have conducted studies of ethnoracial identification and health disparities in Latin America. Research that examines multiple measures of ethnoracial identification is rarer still. Official data on race/ethnicity in Latin America are based on self-identification which can differ from interviewer-ascribed or phenotypic classification based on skin color. We use data from Brazil, Colombia, Mexico, and Peru to examine associations of interviewer-ascribed skin color, interviewer-ascribed race/ethnicity, and self-reported race/ethnicity with self-rated health among Latin American adults (ages 18-65). We also examine associations of observer-ascribed skin color with three additional correlates of health – skin color discrimination, class discrimination, and socio-economic status. We find a significant gradient in self-rated health by skin color. Those with darker skin colors report poorer health. Darker skin color influences self-rated health primarily by increasing exposure to class discrimination and low socio-economic status. PMID:24957692

  20. The color of health: skin color, ethnoracial classification, and discrimination in the health of Latin Americans.

    PubMed

    Perreira, Krista M; Telles, Edward E

    2014-09-01

    Latin America is one of the most ethnoracially heterogeneous regions of the world. Despite this, health disparities research in Latin America tends to focus on gender, class and regional health differences while downplaying ethnoracial differences. Few scholars have conducted studies of ethnoracial identification and health disparities in Latin America. Research that examines multiple measures of ethnoracial identification is rarer still. Official data on race/ethnicity in Latin America are based on self-identification which can differ from interviewer-ascribed or phenotypic classification based on skin color. We use data from Brazil, Colombia, Mexico, and Peru to examine associations of interviewer-ascribed skin color, interviewer-ascribed race/ethnicity, and self-reported race/ethnicity with self-rated health among Latin American adults (ages 18-65). We also examine associations of observer-ascribed skin color with three additional correlates of health - skin color discrimination, class discrimination, and socio-economic status. We find a significant gradient in self-rated health by skin color. Those with darker skin colors report poorer health. Darker skin color influences self-rated health primarily by increasing exposure to class discrimination and low socio-economic status. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Detection of diluted contaminants on chicken carcasses using a two-dimensional scatter plot based on a two-dimensional hyperspectral correlation spectrum.

    PubMed

    Wu, Wei; Chen, Gui-Yun; Wu, Ming-Qing; Yu, Zhen-Wei; Chen, Kun-Jie

    2017-03-20

    A two-dimensional (2D) scatter plot method based on the 2D hyperspectral correlation spectrum is proposed to detect diluted blood, bile, and feces from the cecum and duodenum on chicken carcasses. First, from the collected hyperspectral data, a set of uncontaminated regions of interest (ROIs) and four sets of contaminated ROIs were selected, whose average spectra were treated as the original spectrum and influenced spectra, respectively. Then, the difference spectra were obtained and used to conduct correlation analysis, from which the 2D hyperspectral correlation spectrum was constructed using the analogy method of 2D IR correlation spectroscopy. Two maximum auto-peaks and a pair of cross peaks appeared at 656 and 474 nm. Therefore, 656 and 474 nm were selected as the characteristic bands because they were most sensitive to the spectral change induced by the contaminants. The 2D scatter plots of the contaminants, clean skin, and background in the 474- and 656-nm space were used to distinguish the contaminants from the clean skin and background. The threshold values of the 474- and 656-nm bands were determined by receiver operating characteristic (ROC) analysis. According to the ROC results, a pixel whose relative reflectance at 656 nm was greater than 0.5 and relative reflectance at 474 nm was lower than 0.3 was judged as a contaminated pixel. A region with more than 50 pixels identified was marked in the detection graph. This detection method achieved a recognition rate of up to 95.03% at the region level and 31.84% at the pixel level. The false-positive rate was only 0.82% at the pixel level. The results of this study confirm that the 2D scatter plot method based on the 2D hyperspectral correlation spectrum is an effective method for detecting diluted contaminants on chicken carcasses.
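
    The pixel-level decision rule reported above is simple enough to express directly. The following is a minimal sketch (synthetic reflectance arrays; the connected-region grouping with the 50-pixel criterion is not shown) of flagging a pixel as contaminated when its relative reflectance at 656 nm exceeds 0.5 and its relative reflectance at 474 nm is below 0.3.

        # Sketch of the two-band threshold rule for contaminant pixels.
        import numpy as np

        def contaminated_mask(refl_656, refl_474, t_656=0.5, t_474=0.3):
            return (np.asarray(refl_656) > t_656) & (np.asarray(refl_474) < t_474)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            r656 = rng.random((100, 100))
            r474 = rng.random((100, 100))
            mask = contaminated_mask(r656, r474)
            print("flagged pixels:", int(mask.sum()))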

  2. Multiple Scale Landscape Pattern Index Interpretation for the Persistent Monitoring of Land-Cover and Land-Use

    NASA Astrophysics Data System (ADS)

    Spivey, Alvin J.

    Mapping land-cover land-use change (LCLUC) over regional and continental scales, and long time scales (years and decades), can be accomplished using thematically identified classification maps of a landscape---a LCLU class map. Observations of a landscape's LCLU class map pattern can indicate the most relevant process, like hydrologic or ecologic function, causing landscape-scale environmental change. Quantified as Landscape Pattern Metrics (LPM), emergent landscape patterns act as Landscape Indicators (LI) when physically interpreted. The common mathematical approach to quantifying observed landscape-scale pattern is to have LPM measure how connected a class is within the landscape, through nonlinear local kernel operations of edges and gradients in class maps. Commonly applied kernel-based LPM that consistently reveal causal processes are Dominance, Contagion, and Fractal Dimension. These kernel-based LPM can be difficult to interpret. The emphasis on an image pixel's edge by gradient operations and dependence on an image pixel's existence according to classification accuracy limit the interpretation of LPM. For example, the Dominance and Contagion kernel-based LPM very similarly measure how connected a landscape is. Because of this, their reported edge measurements of connected pattern correlate strongly, making their results ambiguous. Additionally, each of these kernel-based LPM is unscalable when comparing class maps from separate imaging system sensor scenarios that change the image pixel's edge position (e.g., changes in landscape extent, pixel size, or orientation), and can only interpret landscape pattern as accurately as the LCLU map classification will allow. This dissertation discusses the reliability of common LPM in light of imaging system effects such as algorithm classification likelihoods, LCLU classification accuracy due to random image sensor noise, and image scale. A description of an approach to generating well-behaved LPM through a Fourier system analysis of the entire class map, or any subset of the class map (e.g., the watershed), is the focus of this work. The Fourier approach provides four improvements for LPM. First, the approach reduces any correlation between metrics by developing them within an independent (i.e., orthogonal) Fourier vector space, one that includes relevant, physically representative parameters (i.e., between-class Euclidean distance). Second, by accounting for LCLU classification accuracy, the LPM measurement precision and measurement accuracy are reported. Third, the mathematics of this approach makes it possible to compare image data captured at separate pixel resolutions or even from separate landscape scenes. Fourth, Fourier-interpreted landscape pattern measurement can measure the entire landscape shape, individual landscape cover change, or exchanges between class map subsets by operating on the entire class map, a subset of the class map, or separate subsets of class maps, respectively. These LCLUC LPM are examined using the 1991-1992 and 2000-2001 records of National Land Cover Database Landsat data products. Those LPM results are used in a predictive fecal coliform model at the South Carolina watershed level in the context of past (validation study) change. Finally, the ability of the proposed LPM to serve as ecologically relevant environmental indicators is tested by correlating the metrics with other, well-known LI that consistently reveal causal processes in the literature.

  3. 7 CFR 51.1175 - Classification of defects.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the surface. Creasing Materially weakens the skin, or extends over more than one-third of the surface Seriously weakens the skin, or extends over more than one-half of the surface Very seriously weakens the skin, or is distributed over practically the entire surface. Dryness or mushy condition Affecting all...

  4. 21 CFR 880.5090 - Liquid bandage.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... powder and liquid combination used to cover an opening in the skin or as a dressing for burns. The device is also used as a topical skin protectant. (b) Classification. Class I (general controls). When used only as a skin protectant, the device is exempt from the premarket notification procedures in subpart E...

  5. 21 CFR 880.5090 - Liquid bandage.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... powder and liquid combination used to cover an opening in the skin or as a dressing for burns. The device is also used as a topical skin protectant. (b) Classification. Class I (general controls). When used only as a skin protectant, the device is exempt from the premarket notification procedures in subpart E...

  6. 21 CFR 880.5090 - Liquid bandage.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... powder and liquid combination used to cover an opening in the skin or as a dressing for burns. The device is also used as a topical skin protectant. (b) Classification. Class I (general controls). When used only as a skin protectant, the device is exempt from the premarket notification procedures in subpart E...

  7. 21 CFR 880.5090 - Liquid bandage.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... powder and liquid combination used to cover an opening in the skin or as a dressing for burns. The device is also used as a topical skin protectant. (b) Classification. Class I (general controls). When used only as a skin protectant, the device is exempt from the premarket notification procedures in subpart E...

  8. 21 CFR 880.5090 - Liquid bandage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... powder and liquid combination used to cover an opening in the skin or as a dressing for burns. The device is also used as a topical skin protectant. (b) Classification. Class I (general controls). When used only as a skin protectant, the device is exempt from the premarket notification procedures in subpart E...

  9. Dynamic infrared imaging for skin cancer screening

    NASA Astrophysics Data System (ADS)

    Godoy, Sebastián E.; Ramirez, David A.; Myers, Stephen A.; von Winckel, Greg; Krishna, Sanchita; Berwick, Marianne; Padilla, R. Steven; Sen, Pradeep; Krishna, Sanjay

    2015-05-01

    Dynamic thermal imaging (DTI) with infrared cameras is a non-invasive technique with the ability to detect the most common types of skin cancer. We discuss and propose a standardized analysis method for DTI of actual patient data, which achieves high levels of sensitivity and specificity by judiciously selecting pixels with the same initial temperature. This process compensates for the intrinsic limitations of the cooling unit and is the key enabling tool in the DTI data analysis. We have extensively tested the methodology on human subjects using thermal infrared image sequences from a pilot study conducted jointly with the University of New Mexico Dermatology Clinic in Albuquerque, New Mexico (ClinicalTrials ID number NCT02154451). All individuals were adult subjects who were scheduled for biopsy or adult volunteers with a clinically diagnosed benign condition. The sample size was 102 subjects for the present study. Statistically significant results were obtained that allowed us to distinguish between benign and malignant skin conditions. The sensitivity and specificity were 95% (95% confidence interval: [87.8%, 100.0%]) and 83% (95% confidence interval: [73.4%, 92.5%]), respectively, with an area under the curve of 95%. Our results lead us to conclude that the DTI approach, in conjunction with the judicious selection of pixels, has the potential to provide a fast, accurate, non-contact, and non-invasive way to screen for common types of skin cancer. As such, it has the potential to significantly reduce the number of biopsies performed on suspicious lesions.

  10. Fuzzy C-means classification for corrosion evolution of steel images

    NASA Astrophysics Data System (ADS)

    Trujillo, Maite; Sadki, Mustapha

    2004-05-01

    An unavoidable problem of metal structures is their exposure to rust degradation during their operational life. Thus, the surfaces need to be assessed in order to avoid potential catastrophes. There is considerable interest in the use of patch repair strategies which minimize the project costs. However, to operate such strategies with confidence in the long useful life of the repair, it is essential that the condition of the existing coatings and the steel substrate can be accurately quantified and classified. This paper describes the application of fuzzy set theory to steel surface classification according to rust exposure time. We propose a semi-automatic technique to obtain image clustering using the Fuzzy C-means (FCM) algorithm, and we analyze two kinds of data to study the classification performance. Firstly, we investigate the use of raw image pixels, without any pre-processing, and of neighborhood pixels. Secondly, we apply Gaussian noise to the images with different standard deviations to study the FCM method's tolerance to Gaussian noise. The noisy images simulate the possible perturbations of the images due to the weather or rust deposits on the steel surfaces during typical on-site acquisition procedures.
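
    The FCM algorithm referred to above can be summarized in a few lines. This is a hedged sketch (not the authors' implementation; the fuzziness exponent, cluster count, and synthetic intensity data are illustrative assumptions): memberships and cluster centres are updated alternately until the memberships stop changing.

        # Sketch of Fuzzy C-means clustering applied to pixel intensities.
        import numpy as np

        def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
            """x: (n_samples, n_features). Returns (memberships, centres)."""
            rng = np.random.default_rng(seed)
            u = rng.random((len(x), n_clusters))
            u /= u.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                um = u ** m
                centres = (um.T @ x) / um.sum(axis=0)[:, None]
                dist = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2) + 1e-12
                new_u = 1.0 / (dist ** (2.0 / (m - 1.0)))
                new_u /= new_u.sum(axis=1, keepdims=True)
                if np.abs(new_u - u).max() < tol:
                    u = new_u
                    break
                u = new_u
            return u, centres

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            pixels = np.concatenate([rng.normal(40, 5, 300), rng.normal(150, 10, 300)])[:, None]
            u, c = fuzzy_c_means(pixels, n_clusters=2)
            print("cluster centres:", c.ravel())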

  11. High-resolution land cover classification using low resolution global data

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.

    2013-05-01

    A fusion approach is described that combines texture features from high-resolution panchromatic imagery with land cover statistics derived from co-registered low-resolution global databases to obtain high-resolution land cover maps. The method does not require training data or any human intervention. We use an MxN Gabor filter bank consisting of M=16 oriented bandpass filters (0-180°) at N resolutions (3-24 meters/pixel). The size range of these spatial filters is consistent with the typical scale of manmade objects and patterns of cultural activity in imagery. Clustering reduces the complexity of the data by combining pixels that have similar texture into clusters (regions). Texture classification assigns a vector of class likelihoods to each cluster based on its textural properties. Classification is unsupervised and accomplished using a bank of texture anomaly detectors. Class likelihoods are modulated by land cover statistics derived from lower resolution global data over the scene. Preliminary results from a number of Quickbird scenes show our approach is able to classify general land cover features such as roads, built up area, forests, open areas, and bodies of water over a wide range of scenes.
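
    The oriented filter bank described above can be sketched with scikit-image. This is a hedged illustration: the orientations and frequencies below are stand-ins for the paper's 16 orientations and multi-resolution scales, and the texture feature per filter is simply the mean filter-response magnitude, not the paper's clustering or anomaly-detection stages.

        # Sketch: Gabor filter bank responses as simple texture features.
        import numpy as np
        from skimage.filters import gabor

        def gabor_texture_features(image, n_orientations=8, frequencies=(0.1, 0.2, 0.4)):
            feats = []
            for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
                for freq in frequencies:
                    real, imag = gabor(image, frequency=freq, theta=theta)
                    feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
            return np.array(feats)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Synthetic "panchromatic" patch with horizontal stripes plus noise.
            rows = np.sin(np.arange(64) * 0.8)[:, None] * np.ones((1, 64))
            patch = rows + 0.1 * rng.normal(size=(64, 64))
            print(gabor_texture_features(patch).round(3))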

  12. Objective color measurements: clinimetric performance of three devices on normal skin and scar tissue.

    PubMed

    van der Wal, Martijn; Bloemen, Monica; Verhaegen, Pauline; Tuinebreijer, Wim; de Vet, Henrica; van Zuijlen, Paul; Middelkoop, Esther

    2013-01-01

    Color measurements are an essential part of scar evaluation. Thus, vascularization (erythema) and pigmentation (melanin) are common outcome parameters in scar research. The aim of this study was to investigate the clinimetric properties and clinical feasibility of the Mexameter, Colorimeter, and the DSM II ColorMeter for objective measurements on skin and scars. Fifty scars with a mean age of 6 years (2 months to 53 years) were included. Reliability was tested using the single-measure interobserver intraclass correlation coefficient. Validity was determined by measuring the Pearson correlation with the Fitzpatrick skin type classification (for skin) and the Patient and Observer Scar Assessment Scale (for scar tissue). All three instruments provided reliable readings (intraclass correlation coefficient ≥ 0.83; confidence interval: 0.71-0.90) on normal skin and scar tissue. Parameters with the highest correlations with the Fitzpatrick classification were melanin (Mexameter), 0.72; ITA (Colorimeter), -0.74; and melanin (DSM II), 0.70. On scars, the highest correlations with the Patient and Observer Scar Assessment Scale vascularization scores were the following: erythema (Mexameter), 0.59; LAB2 (Colorimeter), 0.69; and erythema (DSM II), 0.66. For hyperpigmentation, the highest correlations were melanin (Mexameter), 0.75; ITA (Colorimeter), -0.80; and melanin (DSM II), 0.83. This study shows that all three instruments can provide reliable color data on skin and scars with a single measurement. The authors also demonstrated that they can assist in objective skin type classification. For scar assessment, the most valid parameters in each instrument were identified.

  13. The computer treatment of remotely sensed data: An introduction to techniques which have geologic applications. [image enhancement and thematic classification in Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Paradella, W. R.; Vitorello, I.

    1982-01-01

    Several aspects of computer-assisted analysis techniques for image enhancement and thematic classification, by which LANDSAT MSS imagery may be treated quantitatively, are explained. In geological applications, computer processing of digital data arguably allows the fullest use of LANDSAT data, by displaying enhanced and corrected data for visual analysis and by evaluating the spectral information of each pixel and assigning it to a given class.

  14. Parallel processing implementations of a contextual classifier for multispectral remote sensing data

    NASA Technical Reports Server (NTRS)

    Siegel, H. J.; Swain, P. H.; Smith, B. W.

    1980-01-01

    Contextual classifiers are being developed as a method to exploit the spatial/spectral context of a pixel to achieve accurate classification. Classification algorithms such as the contextual classifier typically require large amounts of computation time. One way to reduce the execution time of these tasks is through the use of parallelism. The applicability of the CDC flexible processor system and of a proposed multimicroprocessor system (PASM) for implementing contextual classifiers is examined.

  15. Evaluation of linear discriminant analysis for automated Raman histological mapping of esophageal high-grade dysplasia

    NASA Astrophysics Data System (ADS)

    Hutchings, Joanne; Kendall, Catherine; Shepherd, Neil; Barr, Hugh; Stone, Nicholas

    2010-11-01

    Rapid Raman mapping has the potential to be used for automated histopathology diagnosis, providing an adjunct technique to histology diagnosis. The aim of this work is to evaluate the feasibility of automated and objective pathology classification of Raman maps using linear discriminant analysis. Raman maps of esophageal tissue sections are acquired. Principal component (PC)-fed linear discriminant analysis (LDA) is carried out using subsets of the Raman map data (6483 spectra). An overall (validated) training classification model performance of 97.7% (sensitivity 95.0 to 100% and specificity 98.6 to 100%) is obtained. The remainder of the map spectra (131,672 spectra) are projected onto the classification model, resulting in Raman images that demonstrate good correlation with contiguous hematoxylin and eosin (HE) sections. Initial results suggest that LDA has the potential to automate pathology diagnosis of esophageal Raman images, but since the classification of test spectra is forced into existing training groups, further work is required to optimize the training model. A small pixel size is advantageous for developing the training datasets using mapping data, despite lengthy mapping times, due to the additional morphological information gained, and could facilitate differentiation of further tissue groups, such as the basal cells/lamina propria, in the future, but larger pixel sizes (and faster mapping) may be more feasible for clinical application.
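
    The "PC-fed LDA" scheme above is a standard pipeline and can be sketched with scikit-learn. This is a hedged illustration under stated assumptions (synthetic spectra, an arbitrary number of principal components, and generic cross-validation rather than the paper's exact validation protocol).

        # Sketch: PCA scores fed into linear discriminant analysis for spectral classification.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        n_per_class, n_wavenumbers = 120, 300
        normal = rng.normal(0.0, 1.0, (n_per_class, n_wavenumbers))
        dysplasia = rng.normal(0.3, 1.0, (n_per_class, n_wavenumbers))  # shifted mean spectrum
        X = np.vstack([normal, dysplasia])
        y = np.array([0] * n_per_class + [1] * n_per_class)

        model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
        scores = cross_val_score(model, X, y, cv=5)
        print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))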

  16. Towards the Optimal Pixel Size of dem for Automatic Mapping of Landslide Areas

    NASA Astrophysics Data System (ADS)

    Pawłuszek, K.; Borkowski, A.; Tarolli, P.

    2017-05-01

    Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies have demonstrated that choosing the finest DEM resolution is not always the best solution; various DEM resolutions can be applicable for diverse landslide applications. Thus, this study aims to assess the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely a feed-forward neural network (FFNN) and maximum likelihood classification (ML), was applied in this study. Additionally, this allowed us to determine the impact of the classification method used on the selection of DEM resolution. Landslide-affected areas were mapped based on four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated by applying a landslide inventory map and computing a confusion matrix. The results of this study suggest that the finest DEM scale is not always the best fit; however, working at 1 m DEM resolution at the micro-topography scale can show different results. The best performance was found at 5 m DEM resolution for FFNN and at 1 m DEM resolution for ML classification.

  17. Application of classification methods for mapping Mercury's surface composition: analysis on Rudaki's Area

    NASA Astrophysics Data System (ADS)

    Zambon, F.; De Sanctis, M. C.; Capaccioni, F.; Filacchione, G.; Carli, C.; Ammanito, E.; Friggeri, A.

    2011-10-01

    During the first two MESSENGER flybys (14 January 2008 and 6 October 2008), the Mercury Dual Imaging System (MDIS) extended the coverage of Mercury's surface obtained by Mariner 10, and images of about 90% of the surface are now available [1]. MDIS is equipped with a Narrow Angle Camera (NAC) and a Wide Angle Camera (WAC). The NAC uses an off-axis reflective design with a 1.5° field of view (FOV) centered at 747 nm. The WAC has a refractive design with a 10.5° FOV and 12-position filters that cover a 395-1040 nm spectral range [2]. The color images can be used to infer information on surface composition, and classification methods are an interesting technique for multispectral image analysis that can be applied to the study of planetary surfaces. Classification methods are based on clustering algorithms and can be divided into two categories: unsupervised and supervised. Unsupervised classifiers do not require analyst feedback; the algorithm automatically organizes pixel values into classes. In the supervised method, instead, the analyst must choose the "training areas" that define the pixel values of a given class [3]. Here we describe the classification into different compositional units of the region near the Rudaki crater on Mercury.

  18. Object-oriented feature extraction approach for mapping supraglacial debris in Schirmacher Oasis using very high-resolution satellite data

    NASA Astrophysics Data System (ADS)

    Jawak, Shridhar D.; Jadhav, Ajay; Luis, Alvarinho J.

    2016-05-01

    Supraglacial debris was mapped in the Schirmacher Oasis, east Antarctica, by using WorldView-2 (WV-2) high-resolution optical remote sensing data consisting of 8-band calibrated, Gram-Schmidt (GS)-sharpened and atmospherically corrected WV-2 imagery. This study is a preliminary attempt to develop an object-oriented rule set to extract supraglacial debris for the Antarctic region using 8-spectral-band imagery. Supraglacial debris was manually digitized from the satellite imagery to generate the ground reference data. Several trials were performed using a few existing traditional pixel-based classification techniques and color-texture based object-oriented classification methods to extract supraglacial debris over a small domain of the study area. Multi-level segmentation and attributes such as scale, shape, size, and compactness, along with spectral information from the data, were used for developing the rule set. A quantitative analysis of error was carried out against the manually digitized reference data to test the practicability of our approach relative to the traditional pixel-based methods. Our results indicate that the OBIA-based approach (overall accuracy: 93%) for extracting supraglacial debris performed better than all the traditional pixel-based methods (overall accuracy: 80-85%). The present attempt provides a comprehensively improved method for semiautomatic feature extraction in the supraglacial environment and a new direction in cryospheric research.

  19. Urban Density Indices Using Mean Shift-Based Upsampled Elevation Data

    NASA Astrophysics Data System (ADS)

    Charou, E.; Gyftakis, S.; Bratsolis, E.; Tsenoglou, T.; Papadopoulou, Th. D.; Vassilas, N.

    2015-04-01

    Urban density is an important factor for several fields, e.g. urban design, planning and land management. Modern remote sensors deliver ample information for the estimation of specific urban land classification classes (2D indicators), and the height of urban land classification objects (3D indicators), within an Area of Interest (AOI). In this research, two of these indicators, Building Coverage Ratio (BCR) and Floor Area Ratio (FAR), are numerically and automatically derived from high-resolution airborne RGB orthophotos and LiDAR data. In the pre-processing step, the low-resolution elevation data are fused with the high-resolution optical data through a mean-shift based discontinuity preserving smoothing algorithm. The outcome is an improved normalized digital surface model (nDSM): upsampled elevation data with considerable improvement regarding region filling and the "straightness" of elevation discontinuities. In a following step, a Multilayer Feedforward Neural Network (MFNN) is used to classify all pixels of the AOI into building or non-building categories. For the total surface of the block and the buildings we consider the number of their pixels and the surface of the unit pixel. Comparisons of the automatically derived BCR and FAR indicators with manually derived ones show the applicability and effectiveness of the proposed methodology.
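
    The two indicators above reduce to simple pixel counting once a building mask and nDSM are available. The following is a hedged sketch (the pixel size, the 3 m floor height, and the idea of estimating floor counts from nDSM heights are illustrative assumptions, not the paper's exact procedure): BCR is the building footprint area over the block area, and FAR additionally weights each building pixel by its estimated number of floors.

        # Sketch: BCR and FAR from a per-pixel building mask and an nDSM.
        import numpy as np

        def bcr_far(building_mask, ndsm, pixel_size_m=0.5, floor_height_m=3.0):
            pixel_area = pixel_size_m ** 2
            block_area = building_mask.size * pixel_area
            footprint_area = building_mask.sum() * pixel_area
            floors = np.maximum(1, np.round(ndsm / floor_height_m))  # at least one floor
            floor_area = (floors * building_mask).sum() * pixel_area
            return footprint_area / block_area, floor_area / block_area

        if __name__ == "__main__":
            mask = np.zeros((200, 200), dtype=int)
            mask[50:120, 40:100] = 1                                 # one rectangular building
            ndsm = np.zeros((200, 200))
            ndsm[50:120, 40:100] = 9.0                               # roughly three floors
            print("BCR, FAR:", bcr_far(mask, ndsm))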

  20. Semantic segmentation of mFISH images using convolutional networks.

    PubMed

    Pardo, Esteban; Morgado, José Mário T; Malpica, Norberto

    2018-04-30

    Multicolor in situ hybridization (mFISH) is a karyotyping technique used to detect major chromosomal alterations using fluorescent probes and imaging techniques. Manual interpretation of mFISH images is a time-consuming step that can be automated using machine learning; in previous works, pixel- or patch-wise classification was employed, overlooking spatial information which can help identify chromosomes. In this work, we propose a fully convolutional semantic segmentation network for the interpretation of mFISH images, which uses both spatial and spectral information to classify each pixel in an end-to-end fashion. The semantic segmentation network developed was tested on samples extracted from a public dataset using cross validation. Despite having no labeling information of the image it was tested on, our algorithm yielded an average correct classification ratio (CCR) of 87.41%. Previously, this level of accuracy was only achieved with state-of-the-art algorithms when classifying pixels from the same image in which the classifier had been trained. These results provide evidence that fully convolutional semantic segmentation networks may be employed in the computer-aided diagnosis of genetic diseases with improved performance over the current image analysis methods. © 2018 International Society for Advancement of Cytometry.

  1. Automated Detection of Synapses in Serial Section Transmission Electron Microscopy Image Stacks

    PubMed Central

    Kreshuk, Anna; Koethe, Ullrich; Pax, Elizabeth; Bock, Davi D.; Hamprecht, Fred A.

    2014-01-01

    We describe a method for fully automated detection of chemical synapses in serial electron microscopy images with highly anisotropic axial and lateral resolution, such as images taken on transmission electron microscopes. Our pipeline starts from classification of the pixels based on 3D pixel features, which is followed by segmentation with an Ising model MRF and another classification step, based on object-level features. Classifiers are learned on sparse user labels; a fully annotated data subvolume is not required for training. The algorithm was validated on a set of 238 synapses in 20 serial 7197×7351 pixel images (4.5×4.5×45 nm resolution) of mouse visual cortex, manually labeled by three independent human annotators and additionally re-verified by an expert neuroscientist. The error rate of the algorithm (12% false negative, 7% false positive detections) is better than state-of-the-art, even though, unlike the state-of-the-art method, our algorithm does not require a prior segmentation of the image volume into cells. The software is based on the ilastik learning and segmentation toolkit and the vigra image processing library and is freely available on our website, along with the test data and gold standard annotations (http://www.ilastik.org/synapse-detection/sstem). PMID:24516550

  2. Non-Euclidean phasor analysis for quantification of oxidative stress in ex vivo human skin exposed to sun filters using fluorescence lifetime imaging microscopy

    NASA Astrophysics Data System (ADS)

    Osseiran, Sam; Roider, Elisabeth M.; Wang, Hequn; Suita, Yusuke; Murphy, Michael; Fisher, David E.; Evans, Conor L.

    2017-12-01

    Chemical sun filters are commonly used as active ingredients in sunscreens due to their efficient absorption of ultraviolet (UV) radiation. Yet, it is known that these compounds can photochemically react with UV light and generate reactive oxygen species and oxidative stress in vitro, though this has yet to be validated in vivo. One label-free approach to probe oxidative stress is to measure and compare the relative endogenous fluorescence generated by cellular coenzymes nicotinamide adenine dinucleotides and flavin adenine dinucleotides. However, chemical sun filters are fluorescent, with emissive properties that contaminate endogenous fluorescent signals. To accurately distinguish the source of fluorescence in ex vivo skin samples treated with chemical sun filters, fluorescence lifetime imaging microscopy data were processed on a pixel-by-pixel basis using a non-Euclidean separation algorithm based on Mahalanobis distance and validated on simulated data. Applying this method, ex vivo samples exhibited a small oxidative shift when exposed to sun filters alone, though this shift was much smaller than that imparted by UV irradiation. Given the need for investigative tools to further study the clinical impact of chemical sun filters in patients, the reported methodology may be applied to visualize chemical sun filters and measure oxidative stress in patients' skin.
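
    The Mahalanobis-distance separation mentioned above can be illustrated on synthetic data. This is a hedged sketch (the two-dimensional "phasor-like" points, reference classes, and function names are all illustrative, not the paper's algorithm): each pixel is assigned to whichever reference distribution it is closest to in Mahalanobis terms, which accounts for the covariance of each class.

        # Sketch: assigning points to reference distributions by Mahalanobis distance.
        import numpy as np

        def mahalanobis_sq(x, mean, cov):
            inv_cov = np.linalg.inv(cov)
            d = x - mean
            return np.einsum("ij,jk,ik->i", d, inv_cov, d)

        def assign_by_mahalanobis(pixels, refs):
            """refs: list of (mean, cov); returns index of the closest reference per pixel."""
            dists = np.column_stack([mahalanobis_sq(pixels, m, c) for m, c in refs])
            return dists.argmin(axis=1)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            endo = rng.multivariate_normal([0.4, 0.3], [[0.01, 0.0], [0.0, 0.02]], 300)
            filt = rng.multivariate_normal([0.7, 0.5], [[0.02, 0.005], [0.005, 0.01]], 300)
            refs = [(endo.mean(0), np.cov(endo.T)), (filt.mean(0), np.cov(filt.T))]
            labels = assign_by_mahalanobis(np.vstack([endo, filt]), refs)
            print("fraction assigned to class 1:", labels.mean())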

  3. Robust skin color-based moving object detection for video surveillance

    NASA Astrophysics Data System (ADS)

    Kaliraj, Kalirajan; Manimaran, Sudha

    2016-07-01

    Robust skin color-based moving object detection for video surveillance is proposed. The objective of the proposed algorithm is to detect and track the target under complex situations. The proposed framework comprises four stages: preprocessing, skin color-based feature detection, feature classification, and target localization and tracking. In the preprocessing stage, the input image frame is smoothed using an averaging filter and transformed into the YCrCb color space. In skin color detection, skin color regions are detected using Otsu's method of global thresholding. In the feature classification stage, histograms of both skin and non-skin regions are constructed, and the features are classified into foreground and background based on a Bayesian skin color classifier. The foreground skin regions are localized by a connected component labeling process. Finally, the localized foreground skin regions are confirmed as a target by verifying the region properties, and non-target regions are rejected using the Euler method. The target is then tracked by enclosing a bounding box around the target region in all video frames. The experiments were conducted on various publicly available data sets, and the performance was evaluated against baseline methods. The results show that the proposed algorithm works well under slowly varying illumination, target rotations, scaling, and fast, abrupt motion changes.
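
    The color-space step above can be illustrated directly. This is a hedged sketch of skin-pixel detection in YCrCb: RGB values are converted with a BT.601-style approximation and thresholded on the chroma planes. The Cr/Cb bounds below are commonly quoted illustrative values, not the thresholds derived in the paper (which uses Otsu's method and a Bayesian classifier).

        # Sketch: RGB-to-YCrCb conversion and chroma thresholding for skin pixels.
        import numpy as np

        def rgb_to_ycrcb(rgb):
            rgb = np.asarray(rgb, dtype=float)
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            y = 0.299 * r + 0.587 * g + 0.114 * b
            cr = (r - y) * 0.713 + 128.0
            cb = (b - y) * 0.564 + 128.0
            return np.stack([y, cr, cb], axis=-1)

        def skin_mask(rgb, cr_range=(133, 173), cb_range=(77, 127)):
            ycrcb = rgb_to_ycrcb(rgb)
            cr, cb = ycrcb[..., 1], ycrcb[..., 2]
            return (cr >= cr_range[0]) & (cr <= cr_range[1]) & (cb >= cb_range[0]) & (cb <= cb_range[1])

        if __name__ == "__main__":
            frame = np.zeros((2, 2, 3))
            frame[0, 0] = [220, 170, 140]   # skin-like colour
            frame[1, 1] = [30, 90, 200]     # non-skin colour
            print(skin_mask(frame))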

  4. Using Trained Pixel Classifiers to Select Images of Interest

    NASA Technical Reports Server (NTRS)

    Mazzoni, D.; Wagstaff, K.; Castano, R.

    2004-01-01

    We present a machine-learning-based approach to ranking images based on learned priorities. Unlike previous methods for image evaluation, which typically assess the value of each image based on the presence of predetermined specific features, this method involves using two levels of machine-learning classifiers: one level is used to classify each pixel as belonging to one of a group of rather generic classes, and another level is used to rank the images based on these pixel classifications, given some example rankings from a scientist as a guide. Initial results indicate that the technique works well, producing new rankings that match the scientist's rankings significantly better than would be expected by chance. The method is demonstrated for a set of images collected by a Mars field-test rover.

  5. Experimental study of digital image processing techniques for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.

    1976-01-01

    The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.

  6. Multiple Sparse Representations Classification

    PubMed Central

    Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik

    2015-01-01

    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provides an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and sparsity level. PMID:26177106

  7. Classification of Urban Feature from Unmanned Aerial Vehicle Images Using Gasvm Integration and Multi-Scale Segmentation

    NASA Astrophysics Data System (ADS)

    Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.

    2015-12-01

    The use of UAVs in photogrammetry to obtain image coverage and achieve the main objectives of photogrammetric mapping has grown rapidly. Images taken over the REGGIOLO region in the province of Reggio Emilia, Italy, by a UAV carrying a non-metric Canon Ixus camera at an average flying height of 139.42 meters were used to classify urban features. Using the SURE software and the image coverage of the study area, a dense point cloud, a DSM and an orthophoto with a spatial resolution of 10 cm were produced. The DTM of the area was generated with an adaptive TIN filtering algorithm. The nDSM was obtained as the difference between the DSM and the DTM and added as a separate feature to the image stack. For feature extraction, co-occurrence matrix features (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation) were computed for each RGB band of the orthophoto. The classes used for urban classification were buildings, trees and tall vegetation, grass and short vegetation, paved roads, and impervious surfaces; the impervious-surface class includes features such as pavement, cement, cars, and roofs. Pixel-based classification and selection of the optimal classification features were performed with a GASVM (genetic algorithm-SVM) approach. To achieve higher classification accuracy, spectral, textural and shape information of the orthophoto was combined; a multi-scale segmentation method was used and each segment was assigned to its class. The results of the proposed urban-feature classification suggest that this method is suitable for classifying urban features from UAV images. The overall accuracy and kappa coefficient of the method proposed in this study were 47/93% and 84/91%, respectively.
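
    The texture features named above (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, correlation) can all be derived from a gray-level co-occurrence matrix. The sketch below computes them with plain NumPy for a single band and a single offset; the quantization level and offset are assumptions, not the study's settings.

    ```python
    import numpy as np

    def glcm_features(band, levels=8, offset=(0, 1)):
        """Haralick-style features from a gray-level co-occurrence matrix (one offset)."""
        q = np.floor(band.astype(float) / band.max() * (levels - 1)).astype(int)
        dy, dx = offset
        a = q[:q.shape[0] - dy, :q.shape[1] - dx]
        b = q[dy:, dx:]
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (a.ravel(), b.ravel()), 1)
        p = glcm / glcm.sum()                                   # joint probability

        i, j = np.indices(p.shape)
        mean_i = (i * p).sum()
        mean_j = (j * p).sum()
        var_i = ((i - mean_i) ** 2 * p).sum()
        var_j = ((j - mean_j) ** 2 * p).sum()
        eps = 1e-12
        return {
            'mean': mean_i,
            'variance': var_i,
            'homogeneity': (p / (1.0 + (i - j) ** 2)).sum(),
            'contrast': ((i - j) ** 2 * p).sum(),
            'dissimilarity': (np.abs(i - j) * p).sum(),
            'entropy': -(p * np.log(p + eps)).sum(),
            'second_moment': (p ** 2).sum(),
            'correlation': (((i - mean_i) * (j - mean_j) * p).sum()
                            / np.sqrt(var_i * var_j + eps)),
        }

    band = np.random.default_rng(0).integers(0, 256, size=(64, 64))
    print(glcm_features(band))
    ```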

  8. Subpixel target detection and enhancement in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Tiwari, K. C.; Arora, M.; Singh, D.

    2011-06-01

    Hyperspectral data due to its higher information content afforded by higher spectral resolution is increasingly being used for various remote sensing applications including information extraction at subpixel level. There is however usually a lack of matching fine spatial resolution data particularly for target detection applications. Thus, there always exists a tradeoff between the spectral and spatial resolutions due to considerations of type of application, its cost and other associated analytical and computational complexities. Typically whenever an object, either manmade, natural or any ground cover class (called target, endmembers, components or class) gets spectrally resolved but not spatially, mixed pixels in the image result. Thus, numerous manmade and/or natural disparate substances may occur inside such mixed pixels giving rise to mixed pixel classification or subpixel target detection problems. Various spectral unmixing models such as Linear Mixture Modeling (LMM) are in vogue to recover components of a mixed pixel. Spectral unmixing outputs both the endmember spectrum and their corresponding abundance fractions inside the pixel. It, however, does not provide spatial distribution of these abundance fractions within a pixel. This limits the applicability of hyperspectral data for subpixel target detection. In this paper, a new inverse Euclidean distance based super-resolution mapping method has been presented that achieves subpixel target detection in hyperspectral images by adjusting spatial distribution of abundance fraction within a pixel. Results obtained at different resolutions indicate that super-resolution mapping may effectively aid subpixel target detection.
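
    A minimal sketch of linear mixture model unmixing as summarized above, assuming non-negative least squares for the abundance estimate followed by sum-to-one normalization; the endmember spectra here are synthetic. As the abstract notes, these abundances say nothing about where the fractions sit inside the pixel, which is what the proposed super-resolution mapping step addresses.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n_bands = 50
    # Hypothetical endmember spectra (columns), e.g. water, vegetation, soil.
    E = np.abs(rng.normal(size=(n_bands, 3)))

    # Simulate a mixed pixel: 10% water, 60% vegetation, 30% soil, plus noise.
    true_abund = np.array([0.1, 0.6, 0.3])
    pixel = E @ true_abund + 0.01 * rng.normal(size=n_bands)

    # Non-negative least squares solves pixel ~= E @ a with a >= 0;
    # renormalising enforces the (approximate) sum-to-one constraint.
    a, _ = nnls(E, pixel)
    abundances = a / a.sum()
    print(np.round(abundances, 3))
    ```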

  9. Lack of correlation between minimal erythema dose and skin phototype in a Colombian scholar population.

    PubMed

    Sanclemente, Gloria; Zapata, José-F; García, José-J; Gaviria, Angela; Gómez, Luis-F; Barrera, Marcela

    2008-11-01

    Sun exposure and skin phototype are the most relevant risk factors for skin cancer. Colombia has high levels of ultraviolet radiation throughout the year; therefore, both high UV indices and outdoor workers' daily activities are important risk factors for the development of cutaneous cancer in our country. To date no study has evaluated the usefulness of Fitzpatrick's skin phototype classification in Colombians and its correlation with the minimal erythema dose (MED) and constitutional skin color. Such information is gaining importance in other nations because many countries' populations are becoming more ethnically diverse. The aim was to determine the skin phototype, accumulated sun exposure, sun protection behavior, MED and phenotype in a Colombian school population. Last-year high school students from western Antioquia were invited to participate by phone and letter through their respective school directors. A self-administered questionnaire was handed to each student. A representative sample of this population was selected for a medical examination by a dermatologist in order to validate the results of the self-questionnaire. The constitutional skin color was determined with a Minolta CR 300 chromameter. The MED was defined as the minimal dose of UVB able to induce erythema 24 h later. Eight schools in the area agreed to participate in the study, and a total of 911 students (58% girls and 42% boys) filled out the self-questionnaire. Sun exposure in the majority of individuals was between moderate and very high. Ninety percent of students did not use any sun protection device or cream. Only 50% concordance between self-assessed and medically assessed skin phototype was found, and the highest concordance corresponded to skin phototype II (82%). There was a marked difference in the skin photosensitivity of Colombians compared with reports in Caucasians. We observed marked overlapping in MEDs and L* values in phototypes II and III. Fitzpatrick's classification was not useful in Hispanic populations such as ours; therefore, a new skin-phototype classification system is required. In our population the constitutional color was a good predictor of the MED but did not correlate with skin phototype. The self-assessed questionnaire method was not useful for determining skin cancer risk in our population. The majority of this population has light skin phototypes and is highly exposed to solar UV radiation without proper protection.

  10. Automatic Building Detection based on Supervised Classification using High Resolution Google Earth Images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, S.; Ghaffarian, S.

    2014-08-01

    This paper presents a novel approach to detecting buildings by automating the training-area collection stage for supervised classification. The method is based on the fact that a 3D building structure should cast a shadow under suitable imaging conditions. Therefore, the methodology begins with detecting and masking out shadow areas using the luminance component of the LAB color space, which indicates the lightness of the image, and a novel double thresholding technique. Next, the training areas for supervised classification are selected by automatically determining a buffer zone on each building whose shadow is detected, using the shadow shape and the sun illumination direction. Thereafter, by calculating the statistics of each buffer zone collected from the building areas, the Improved Parallelepiped Supervised Classification is executed to detect the buildings. Standard deviation thresholding was applied to the Parallelepiped classification method to improve its accuracy. Finally, simple morphological operations were conducted to remove noise and increase the accuracy of the results. The experiments were performed on a set of high-resolution Google Earth images. The performance of the proposed approach was assessed by comparing its results with reference data using well-known quality measurements (Precision, Recall and F1-score) to evaluate the pixel-based and object-based performance of the proposed approach. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method have 88.4% and 85.3% overall pixel-based and object-based precision performances, respectively.
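
    One plausible reading of the shadow-masking step (LAB luminance plus double thresholding) is sketched below: pixels below a strict lightness threshold act as seeds that are grown into a relaxed-threshold region by morphological reconstruction. The threshold values and the reconstruction-based growing are assumptions, not the paper's exact procedure.

    ```python
    import numpy as np
    from skimage import color, morphology

    def shadow_mask(rgb, strict=20.0, relaxed=35.0):
        """Double-threshold the LAB lightness channel: seeds below the strict threshold
        are grown into the relaxed-threshold region (threshold values are hypothetical)."""
        L = color.rgb2lab(rgb)[:, :, 0]                 # lightness in [0, 100]
        seeds = L < strict                              # confidently dark pixels
        candidates = L < relaxed                        # possibly shadowed pixels
        grown = morphology.reconstruction(seeds.astype(float),
                                          candidates.astype(float),
                                          method='dilation')
        return grown > 0

    rng = np.random.default_rng(0)
    rgb = rng.random((64, 64, 3))                       # stand-in for an image tile
    mask = shadow_mask(rgb)
    print(mask.mean())                                  # fraction of pixels flagged as shadow
    ```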

  11. The additional benefit of the ML Flow test to classify leprosy patients.

    PubMed

    Bührer-Sékula, Samira; Illarramendi, Ximena; Teles, Rose B; Penna, Maria Lucia F; Nery, José Augusto C; Sales, Anna Maria; Oskam, Linda; Sampaio, Elizabeth P; Sarno, Euzenir N

    2009-08-01

    The use of the skin lesion counting classification leads to both under- and over-diagnosis of leprosy in many instances. Thus, there is a need to complement this classification with another simple and robust test for use in the field. Data from 202 untreated leprosy patients diagnosed at FIOCRUZ, Rio de Janeiro, Brazil, were analyzed. There were 90 patients classified as PB and 112 classified as MB according to the reference standard. The BI was positive in 111 (55%) patients and the ML Flow test in 116 (57.4%) patients. The ML Flow test was positive in 95 (86%) of the patients with a positive BI. The lesion counting classification was confirmed by both the BI and ML Flow tests in 65% of the 92 patients with 5 or fewer lesions, and in 76% of the 110 patients with 6 or more lesions. The combination of skin lesion counting and the ML Flow test results yielded a sensitivity of 85% and a specificity of 87% for MB classification, and correctly classified 86% of the patients when compared to the standard reference. A considerable proportion of the patients (43.5%) with test results discordant from the standard classification were in reaction. The use of any classification system has limitations, especially those that oversimplify a complex disease such as leprosy. In the absence of an experienced dermatologist and slit skin smear, the ML Flow test could be used to improve treatment decisions in field conditions.

  12. Ultrahigh Detective Heterogeneous Photosensor Arrays with In-Pixel Signal Boosting Capability for Large-Area and Skin-Compatible Electronics.

    PubMed

    Kim, Jaehyun; Kim, Jaekyun; Jo, Sangho; Kang, Jingu; Jo, Jeong-Wan; Lee, Myungwon; Moon, Juhyuk; Yang, Lin; Kim, Myung-Gil; Kim, Yong-Hoon; Park, Sung Kyu

    2016-04-01

    An ultra-thin, large-area, skin-compatible heterogeneous organic/metal-oxide photosensor array is demonstrated that is capable of sensing and boosting signals with high detectivity and signal-to-noise ratio. For the realization of ultra-flexible and highly sensitive heterogeneous photosensor arrays on a polyimide substrate, with organic sensor arrays and metal-oxide boosting circuitry, solution-processing and room-temperature alternating photochemical conversion routes are applied. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Comparison of Random Forest and Support Vector Machine classifiers using UAV remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Piragnolo, Marco; Masiero, Andrea; Pirotti, Francesco

    2017-04-01

    In recent years, surveying with unmanned aerial vehicles (UAVs) has received a great deal of attention due to decreasing costs and greater precision and flexibility of use. UAVs have been applied to geomorphological investigations, forestry, precision agriculture, cultural heritage assessment and archaeological purposes, and they can also be used for land use and land cover classification (LULC). In the literature, there are two main types of approaches for classification of remote sensing imagery: pixel-based and object-based. On one hand, the pixel-based approach mostly uses training areas to define classes and their respective spectral signatures. On the other hand, object-based classification considers pixels, scale, spatial information and texture information to create homogeneous objects. Machine learning methods have been applied successfully for classification, and their use is increasing due to the availability of faster computing; these methods learn a model from previously computed training data. Two machine learning methods which have given good results in previous investigations are Random Forest (RF) and Support Vector Machine (SVM). The goal of this work is to compare the RF and SVM methods for classifying LULC using images collected with a fixed-wing UAV. The classification processing chain uses packages in R, an open-source scripting language for data analysis, which provides all the necessary algorithms. The imagery was acquired and processed in November 2015 with cameras providing red, blue, green and near-infrared reflectance over a test area on the Agripolis campus, in Italy. Images were processed and ortho-rectified with Agisoft Photoscan. The ortho-rectified image is the full data set, and the test set is derived from partial sub-setting of the full data set. Different tests have been carried out, using from 2% to 20% of the total. Ten training sets and ten validation sets are obtained from each test set. The control dataset consists of an independent visual classification done by an expert over the whole area. The classes are (i) broadleaf, (ii) building, (iii) grass, (iv) headland access path, (v) road, (vi) sowed land, (vii) vegetable. RF and SVM are applied to the test set. The performance of the methods is evaluated using the following three accuracy metrics: Kappa index, classification accuracy and classification error. All three are calculated in three different ways: with k-fold cross validation, using the validation test set and using the full test set. The analysis indicates that SVM obtains better scores using k-fold cross validation or the validation test set, whereas RF achieves a better result using the full test set. It also seems that SVM performs better with smaller training sets, whereas RF performs better as training sets get larger.
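
    The study's comparison was carried out in R; the sketch below reproduces the same idea in Python with scikit-learn, computing cross-validated accuracy, error and kappa for RF and SVM on synthetic per-pixel features. The hyperparameters and the synthetic data are assumptions, not the study's settings.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    # Synthetic stand-in for per-pixel features (R, G, B, NIR plus texture) and LULC labels.
    X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                               n_classes=4, n_clusters_per_class=1, random_state=0)

    for name, clf in [('RF', RandomForestClassifier(n_estimators=200, random_state=0)),
                      ('SVM', SVC(kernel='rbf', C=10, gamma='scale'))]:
        pred = cross_val_predict(clf, X, y, cv=10)          # 10-fold cross-validation
        acc = accuracy_score(y, pred)
        print(name,
              'accuracy=%.3f' % acc,
              'kappa=%.3f' % cohen_kappa_score(y, pred),
              'error=%.3f' % (1 - acc))
    ```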

  14. Recognition of skin melanoma through dermoscopic image analysis

    NASA Astrophysics Data System (ADS)

    Gómez, Catalina; Herrera, Diana Sofia

    2017-11-01

    Melanoma skin cancer diagnosis can be challenging due to the similarities of the early stage symptoms with regular moles. Standardized visual parameters can be determined and characterized to suspect a melanoma cancer type. The automation of this diagnosis could have an impact in the medical field by providing a tool to support the specialists with high accuracy. The objective of this study is to develop an algorithm trained to distinguish a highly probable melanoma from a non-dangerous mole by the segmentation and classification of dermoscopic mole images. We evaluate our approach on the dataset provided by the International Skin Imaging Collaboration used in the International Challenge Skin Lesion Analysis Towards Melanoma Detection. For the segmentation task, we apply a preprocessing algorithm and use Otsu's thresholding in the best performing color space; the average Jaccard Index in the test dataset is 70.05%. For the subsequent classification stage, we use joint histograms in the YCbCr color space, a RBF Gaussian SVM trained with five features concerning circularity and irregularity of the segmented lesion, and the Gray Level Co-occurrence matrix features for texture analysis. These features are combined to obtain an Average Classification Accuracy of 63.3% in the test dataset.
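
    A minimal sketch of the segmentation and shape-feature stage, assuming Otsu thresholding on a gray-level image and a few region properties standing in for the circularity/irregularity features; the paper additionally uses YCbCr joint histograms and GLCM texture features, which are omitted here.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def lesion_shape_features(gray):
        """Segment the darkest blob with Otsu's threshold and describe its shape."""
        mask = gray < threshold_otsu(gray)               # lesions are darker than skin
        regions = regionprops(label(mask))
        lesion = max(regions, key=lambda r: r.area)      # keep the largest component
        circularity = 4 * np.pi * lesion.area / lesion.perimeter ** 2
        return circularity, lesion.solidity, lesion.eccentricity

    rng = np.random.default_rng(0)
    gray = rng.random((128, 128))
    gray[40:90, 30:100] *= 0.3                           # synthetic dark "lesion"
    print(lesion_shape_features(gray))
    ```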

  15. Automatic Classification of Specific Melanocytic Lesions Using Artificial Intelligence

    PubMed Central

    Jaworek-Korjakowska, Joanna; Kłeczek, Paweł

    2016-01-01

    Background. Given its propensity to metastasize, and lack of effective therapies for most patients with advanced disease, early detection of melanoma is a clinical imperative. Different computer-aided diagnosis (CAD) systems have been proposed to increase the specificity and sensitivity of melanoma detection. Although such computer programs are developed for different diagnostic algorithms, to the best of our knowledge, a system to classify different melanocytic lesions has not been proposed yet. Method. In this research we present a new approach to the classification of melanocytic lesions. This work is focused not only on categorization of skin lesions as benign or malignant but also on specifying the exact type of a skin lesion including melanoma, Clark nevus, Spitz/Reed nevus, and blue nevus. The proposed automatic algorithm contains the following steps: image enhancement, lesion segmentation, feature extraction, and selection as well as classification. Results. The algorithm has been tested on 300 dermoscopic images and achieved an accuracy of 92%, indicating that the proposed approach classified most of the melanocytic lesions correctly. Conclusions. The proposed system can not only help to precisely diagnose the type of skin mole but also decrease the number of biopsies and reduce the morbidity related to skin lesion excision. PMID:26885520

  16. Automatic Classification of Specific Melanocytic Lesions Using Artificial Intelligence.

    PubMed

    Jaworek-Korjakowska, Joanna; Kłeczek, Paweł

    2016-01-01

    Given its propensity to metastasize, and lack of effective therapies for most patients with advanced disease, early detection of melanoma is a clinical imperative. Different computer-aided diagnosis (CAD) systems have been proposed to increase the specificity and sensitivity of melanoma detection. Although such computer programs are developed for different diagnostic algorithms, to the best of our knowledge, a system to classify different melanocytic lesions has not been proposed yet. In this research we present a new approach to the classification of melanocytic lesions. This work is focused not only on categorization of skin lesions as benign or malignant but also on specifying the exact type of a skin lesion including melanoma, Clark nevus, Spitz/Reed nevus, and blue nevus. The proposed automatic algorithm contains the following steps: image enhancement, lesion segmentation, feature extraction, and selection as well as classification. The algorithm has been tested on 300 dermoscopic images and achieved an accuracy of 92%, indicating that the proposed approach classified most of the melanocytic lesions correctly. The proposed system can not only help to precisely diagnose the type of skin mole but also decrease the number of biopsies and reduce the morbidity related to skin lesion excision.

  17. Various new applications of fiber optic infrared Fourier transform spectroscopy for dermatology

    NASA Astrophysics Data System (ADS)

    Bruch, Reinhard F.; Afanasyeva, Natalia I.; Sukuta, Sydney; Brooks, Angelique L.; Makhine, Volodymyr; Kolyakov, Sergei F.

    1999-02-01

    Fiberoptical evanescent wave Fourier transform infrared (FEW-FTIR) spectroscopy has been applied in the middle infrared (MIR) wavelength range (3 to 20 micrometers) to the in vivo diagnostics of normal skin tissue and acupuncture points, as well as precancerous and cancerous conditions. The FEW-FTIR technique, using nontoxic unclad fibers, is suitable for noninvasive, sensitive investigations of skin tissue for various dermatological studies of skin cancer, aging, laser treatment, cosmetics, skin allergies, etc. This method is direct, nondestructive, and fast (seconds). Our optical fibers are nonhygroscopic, flexible, and characterized by extremely low losses. In this study, we have noninvasively investigated more than 300 cases of normal skin, acupuncture points, and precancerous and cancerous tissue in the range of 1400 to 1800 cm-1. The results of our analysis of skin and other tissue are discussed in terms of structural and mathematical similarities and differences at the molecular level. In addition, we have also performed cluster analysis, using principal component scores, to confirm pathological classifications and to discriminate between genders. We found good agreement with prior pathological classifications for normal skin tissue and melanoma tumors, and normal females were distinctly separated from males.

  18. A robust sebum, oil, and particulate pollution model for assessing cleansing efficacy of human skin.

    PubMed

    Peterson, G; Rapaka, S; Koski, N; Kearney, M; Ortblad, K; Tadlock, L

    2017-06-01

    With increasing concerns over the rise of atmospheric particulate pollution globally and its impact on systemic health and skin ageing, we have developed a pollution model to mimic particulate matter trapped in sebum and oils, creating a robust (difficult to remove) surrogate for dirty, polluted skin. The aim was to evaluate the cleansing efficacy/protective effect of a sonic brush vs. manual cleansing against particulate pollution (trapped in grease/oil typical of human sebum). The pollution model (Sebollution; sebum pollution model; SPM) consists of atmospheric particulate matter/pollution combined with grease/oils typical of human sebum. Twenty subjects between the ages of 18 and 65 were enrolled in a single-centre cleansing study comparing the sonic cleansing brush (normal speed) with manual cleansing. An equal amount of SPM was applied to the centre of each cheek (left and right). The method of cleansing (sonic vs. manual) was randomized to the side of the face (left or right) for each subject. Each side was cleansed for five seconds, either using the sonic cleansing device with a sensitive brush head or manually, with equal amounts of water and a gel cleanser. Photographs (VISIA-CR, Canfield Imaging, NJ, USA) were taken at baseline (before application of the SPM), after application of SPM (pre-cleansing), and following cleansing. Image analysis (ImageJ, NIH, Bethesda, MD, USA) was used to quantify colour intensity (amount of particulate pollutants on the skin) using a scale of 0 to 255 (0 = all black pixels; 255 = all white pixels). Differences between the baseline and post-cleansing values (pixels) are reported as the amount of SPM remaining following each method of cleansing. Using this robust cleansing protocol to assess removal of pollutants (SPM; atmospheric particulate matter trapped in grease/oil), the sonic brush removed significantly more SPM than manual cleansing (P < 0.001). While extreme in colour, this pollution method easily allows assessment of efficacy through image analysis. © 2016 The Authors. International Journal of Cosmetic Science published by John Wiley & Sons Ltd on behalf of Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  19. Segmentation of white blood cells and comparison of cell morphology by linear and naïve Bayes classifiers.

    PubMed

    Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai

    2015-06-30

    Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. An automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, consists of normal peripheral blood slides imaged under a light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format of size 960 × 1,280 pixels at 15 pixels per 1 μm resolution. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com. They are in JPG format of size 360 × 363 pixels. The resolution is estimated to be 10 pixels per 1 μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The main concept of the segmentation algorithm employed uses the white blood cells' morphological properties and the calibrated size of a real cell relative to image resolution. The segmentation process combined thresholding, morphological operation and ellipse curve fitting. Consequently, several features were extracted from the segmented nucleus and cytoplasm regions. Prominent features were then chosen by a greedy search algorithm called sequential forward selection. Finally, with a set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. This system was tested on normal peripheral blood smear slide images from two datasets. Two sets of comparison were performed: segmentation and classification. The automatically segmented results were compared to the ones obtained manually by a haematologist. It was found that the proposed method is consistent and coherent in both datasets, with Dice similarity of 98.9% and 91.6% for average segmented nucleus and cell regions, respectively. Furthermore, the overall correct classification rate is about 98% and 94% for linear and naïve Bayes models, respectively. The proposed system, based on normal white blood cell morphology and its characteristics, was applied to two different datasets. The results of the calibrated segmentation process on both datasets are fast, robust, efficient and coherent. Meanwhile, the classification of normal white blood cells into five types shows high sensitivity in both linear and naïve Bayes models, with slightly better results in the linear classifier.
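
    The greedy feature-selection step ("sequential forward selection") can be sketched as a wrapper that repeatedly adds the single feature giving the largest gain in cross-validated accuracy; the snippet below uses a Gaussian naïve Bayes classifier and a stand-in dataset, not the study's cell features.

    ```python
    import numpy as np
    from sklearn.datasets import load_wine
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    # Greedy sequential forward selection: repeatedly add the single feature that
    # most improves cross-validated accuracy of the classifier.
    X, y = load_wine(return_X_y=True)        # stand-in for nucleus/cytoplasm features
    clf = GaussianNB()

    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining:
        scores = {f: cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean()
                  for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:     # stop when no feature improves the score
            break
        best_score = scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)

    print('selected features:', selected, 'CV accuracy: %.3f' % best_score)
    ```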

  20. Spatial assessment of intertidal seagrass meadows using optical imaging systems and a lightweight drone

    NASA Astrophysics Data System (ADS)

    Duffy, James P.; Pratt, Laura; Anderson, Karen; Land, Peter E.; Shutler, Jamie D.

    2018-01-01

    Seagrass ecosystems are highly sensitive to environmental change. They are also in global decline and under threat from a variety of anthropogenic factors. There is now an urgency to establish robust monitoring methodologies so that changes in seagrass abundance and distribution in these sensitive coastal environments can be understood. Typical monitoring approaches have included remote sensing from satellites and airborne platforms, ground based ecological surveys and snorkel/scuba surveys. These techniques can suffer from temporal and spatial inconsistency, or are very localised making it hard to assess seagrass meadows in a structured manner. Here we present a novel technique using a lightweight (sub 7 kg) drone and consumer grade cameras to produce very high spatial resolution (∼4 mm per pixel) mosaics of two intertidal sites in Wales, UK. We present a full data collection methodology followed by a selection of classification techniques to produce coverage estimates at each site. We trialled three classification approaches of varying complexity to investigate and illustrate the differing performance and capabilities of each. Our results show that unsupervised classifications perform better than object-based methods in classifying seagrass cover. We also found that the more sparsely vegetated of the two meadows studied was more accurately classified - it had lower root mean squared deviation (RMSD) between observed and classified coverage (9-9.5%) compared to a more densely vegetated meadow (RMSD 16-22%). Furthermore, we examine the potential to detect other biotic features, finding that lugworm mounds can be detected visually at coarser resolutions such as 43 mm per pixel, whereas smaller features such as cockle shells within seagrass require finer grained data (<17 mm per pixel).

  1. An embedded face-classification system for infrared images on an FPGA

    NASA Astrophysics Data System (ADS)

    Soto, Javier E.; Figueroa, Miguel

    2014-10-01

    We present a face-classification architecture for long-wave infrared (IR) images implemented on a Field Programmable Gate Array (FPGA). The circuit is fast, compact and low power, can recognize faces in real time and be embedded in a larger image-processing and computer vision system operating locally on an IR camera. The algorithm uses Local Binary Patterns (LBP) to perform feature extraction on each IR image. First, each pixel in the image is represented as an LBP pattern that encodes the similarity between the pixel and its neighbors. Uniform LBP codes are then used to reduce the number of patterns to 59 while preserving more than 90% of the information contained in the original LBP representation. Then, the image is divided into 64 non-overlapping regions, and each region is represented as a 59-bin histogram of patterns. Finally, the algorithm concatenates all 64 regions to create a 3,776-bin spatially enhanced histogram. We reduce the dimensionality of this histogram using Linear Discriminant Analysis (LDA), which improves clustering and enables us to store an entire database of 53 subjects on-chip. During classification, the circuit applies LBP and LDA to each incoming IR image in real time, and compares the resulting feature vector to each pattern stored in the local database using the Manhattan distance. We implemented the circuit on a Xilinx Artix-7 XC7A100T FPGA and tested it with the UCHThermalFace database, which consists of 28 images (81 x 150 pixels) of each of 53 subjects in indoor and outdoor conditions. The circuit achieves a 98.6% hit ratio, trained with 16 images and tested with 12 images of each subject in the database. Using a 100 MHz clock, the circuit classifies 8,230 images per second, and consumes only 309 mW.
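
    A software sketch of the pipeline described above (uniform LBP codes, 59-bin regional histograms concatenated into a 3,776-bin descriptor, an LDA projection, and Manhattan-distance matching); the FPGA implementation obviously differs, the image size, subject counts and data below are placeholders, and the 'nri_uniform' LBP variant is assumed to correspond to the 59-code representation mentioned in the abstract.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def lbp_spatial_histogram(img, grid=(8, 8), bins=59):
        """Uniform LBP codes pooled into per-region histograms, then concatenated."""
        codes = local_binary_pattern(img, P=8, R=1, method='nri_uniform')  # 59 codes
        h, w = img.shape
        feats = []
        for gy in range(grid[0]):
            for gx in range(grid[1]):
                block = codes[gy * h // grid[0]:(gy + 1) * h // grid[0],
                              gx * w // grid[1]:(gx + 1) * w // grid[1]]
                hist, _ = np.histogram(block, bins=bins, range=(0, bins))
                feats.append(hist)
        return np.concatenate(feats)                     # 8 * 8 * 59 = 3,776 bins

    rng = np.random.default_rng(0)
    train_imgs = rng.integers(0, 256, size=(40, 80, 144), dtype=np.uint8)  # stand-in IR crops
    train_ids = np.repeat(np.arange(10), 4)              # 10 subjects, 4 images each
    X = np.array([lbp_spatial_histogram(im) for im in train_imgs])

    lda = LinearDiscriminantAnalysis(n_components=9).fit(X, train_ids)
    gallery = lda.transform(X)

    # Use one of the training images as a stand-in probe and match by Manhattan distance.
    probe = lda.transform(lbp_spatial_histogram(train_imgs[0])[None, :])
    nearest = np.abs(gallery - probe).sum(axis=1).argmin()
    print('matched subject:', train_ids[nearest])
    ```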

  2. SUVI Thematic Maps: A new tool for space weather forecasting

    NASA Astrophysics Data System (ADS)

    Hughes, J. M.; Seaton, D. B.; Darnel, J.

    2017-12-01

    The new Solar Ultraviolet Imager (SUVI) instruments aboard NOAA's GOES-R series satellites collect continuous, high-quality imagery of the Sun in six wavelengths. SUVI imagers produce at least one image every 10 seconds, or 8,640 images per day, considerably more data than observers can digest in real time. Over the projected 20-year lifetime of the four GOES-R series spacecraft, SUVI will provide critical imagery for space weather forecasters and produce an extensive but unwieldy archive. In order to condense the database into a dynamic and searchable form we have developed solar thematic maps, maps of the Sun with key features, such as coronal holes, flares, bright regions, quiet corona, and filaments, identified. Thematic maps will be used in NOAA's Space Weather Prediction Center to improve forecaster response time to solar events and generate several derivative products. Likewise, scientists use thematic maps to find observations of interest more easily. Using an expert-trained, naive Bayesian classifier to label each pixel, we create thematic maps in real-time. We created software to collect expert classifications of solar features based on SUVI images. Using this software, we compiled a database of expert classifications, from which we could characterize the distribution of pixels associated with each theme. Given new images, the classifier assigns each pixel the most appropriate label according to the trained distribution. Here we describe the software to collect expert training and the successes and limitations of the classifier. The algorithm excellently identifies coronal holes but fails to consistently detect filaments and prominences. We compare the Bayesian classifier to an artificial neural network, one of our attempts to overcome the aforementioned limitations. These results are very promising and encourage future research into an ensemble classification approach.
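
    A minimal sketch of the per-pixel naïve Bayes labelling idea: train on expert-labelled pixels, then label every pixel of a new multi-channel image to form a thematic map. The channel count, theme names and data are placeholders, not the SUVI classifier's actual configuration.

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    n_train, n_channels = 5000, 6
    themes = ['quiet_corona', 'coronal_hole', 'bright_region', 'flare']

    # Expert-labelled training pixels: one feature vector per pixel (stand-in data).
    X_train = rng.random((n_train, n_channels))
    y_train = rng.integers(0, len(themes), n_train)

    clf = GaussianNB().fit(X_train, y_train)

    # Label every pixel of a new multi-channel image to form a thematic map.
    h, w = 128, 128
    image = rng.random((h, w, n_channels))
    thematic_map = clf.predict(image.reshape(-1, n_channels)).reshape(h, w)
    print(np.bincount(thematic_map.ravel(), minlength=len(themes)))
    ```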

  3. Classification of Hyperspectral or Trichromatic Measurements of Ocean Color Data into Spectral Classes.

    PubMed

    Prasad, Dilip K; Agarwal, Krishna

    2016-03-22

    We propose a method for classifying radiometric oceanic color data measured by hyperspectral satellite sensors into known spectral classes, irrespective of the downwelling irradiance of the particular day, i.e., the illumination conditions. The focus is not on retrieving the inherent optical properties but to classify the pixels according to the known spectral classes of the reflectances from the ocean. The method compensates for the unknown downwelling irradiance by white balancing the radiometric data at the ocean pixels using the radiometric data of bright pixels (typically from clouds). The white-balanced data is compared with the entries in a pre-calibrated lookup table in which each entry represents the spectral properties of one class. The proposed approach is tested on two datasets of in situ measurements and 26 different daylight illumination spectra for medium resolution imaging spectrometer (MERIS), moderate-resolution imaging spectroradiometer (MODIS), sea-viewing wide field-of-view sensor (SeaWiFS), coastal zone color scanner (CZCS), ocean and land colour instrument (OLCI), and visible infrared imaging radiometer suite (VIIRS) sensors. Results are also shown for CIMEL's SeaPRISM sun photometer sensor used on-board field trips. Accuracy of more than 92% is observed on the validation dataset and more than 86% is observed on the other dataset for all satellite sensors. The potential of applying the algorithms to non-satellite and non-multi-spectral sensors mountable on airborne systems is demonstrated by showing classification results for two consumer cameras. Classification on actual MERIS data is also shown. Additional results comparing the spectra of remote sensing reflectance with level 2 MERIS data and chlorophyll concentration estimates of the data are included.
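
    One simple reading of the white-balancing and lookup-table matching described above is sketched below: divide each pixel spectrum by the mean bright-pixel (cloud) spectrum, normalize, and assign the nearest lookup-table class. The nearest-spectrum rule and all data are assumptions; the paper's actual matching criterion may differ.

    ```python
    import numpy as np

    def classify_ocean_pixels(radiance, bright_mask, lookup_table):
        """White-balance ocean pixels by the mean bright-pixel spectrum, then assign
        each pixel to the nearest normalised lookup-table class spectrum."""
        white = radiance[bright_mask].mean(axis=0)              # proxy for illumination
        balanced = radiance / white                             # remove illumination shape
        balanced /= np.linalg.norm(balanced, axis=-1, keepdims=True)
        lut = lookup_table / np.linalg.norm(lookup_table, axis=-1, keepdims=True)
        # Distance of every pixel spectrum to every class entry, then nearest class.
        d = np.linalg.norm(balanced[:, :, None, :] - lut[None, None, :, :], axis=-1)
        return d.argmin(axis=-1)

    rng = np.random.default_rng(0)
    h, w, bands, n_classes = 60, 60, 12, 5
    radiance = rng.random((h, w, bands)) + 0.1
    bright = radiance.sum(axis=-1) > np.percentile(radiance.sum(axis=-1), 99)
    lut = rng.random((n_classes, bands)) + 0.1
    print(np.bincount(classify_ocean_pixels(radiance, bright, lut).ravel()))
    ```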

  4. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  5. A Review of the Quantification and Classification of Pigmented Skin Lesions: From Dedicated to Hand-Held Devices.

    PubMed

    Filho, Mercedes; Ma, Zhen; Tavares, João Manuel R S

    2015-11-01

    In recent years, the incidence of skin cancer cases has risen, worldwide, mainly due to the prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through an improvement in the instrument and detection technology, and the development of algorithms to process the information. Moreover, because there has been an increased need to store medical data, for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the increase of data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has also fueled the need to create real-time processing algorithms that may provide a likelihood for the development of malignancy. This last possibility allows even non-specialists to monitor and follow-up suspected skin cancer cases. In this review, we present the major steps in the pre-processing, processing and post-processing of skin lesion images, with a particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated and real-time algorithms for the detection of skin cancer from images acquired via common mobile devices.

  6. Optical characterization of murine model's in-vivo skin using Mueller matrix polarimetric imaging

    NASA Astrophysics Data System (ADS)

    Mora-Núñez, Azael; Martinez-Ponce, Geminiano; Garcia-Torales, Guillermo

    2015-12-01

    Mueller matrix polarimetric imaging (MMPI) provides a complete characterization of an anisotropic optical medium. Subsequent single value decomposition allows image interpretation in terms of basic optical anisotropies, such as depolarization, diattenuation, and retardance. In this work, healthy in-vivo skin at different anatomical locations of a biological model (Rattus norvegicus) was imaged by the MMPI technique using 532 nm coherent illumination. The body parts under study were back, abdomen, tail, and calvaria. Because skin components are randomly distributed and skin thickness depends on location, the polarization measurements represent an average over each detection element (pixel) and depend on the number of free optical paths, respectively. Optical anisotropies over the imaged skin indicate, mainly, the presence of components related to the physiology of the explored region. In addition, an MMPI-based comparison between a tumor on the back of one test subject and proximal healthy skin was made. The results show that the single values of optical anisotropies can be helpful in distinguishing different areas of in-vivo skin and also lesions.

  7. Mapping of land cover in northern California with simulated hyperspectral satellite imagery

    NASA Astrophysics Data System (ADS)

    Clark, Matthew L.; Kilham, Nina E.

    2016-09-01

    Land-cover maps are important science products needed for natural resource and ecosystem service management, biodiversity conservation planning, and assessing human-induced and natural drivers of land change. Analysis of hyperspectral, or imaging spectrometer, imagery has shown an impressive capacity to map a wide range of natural and anthropogenic land cover. Applications have been mostly with single-date imagery from relatively small spatial extents. Future hyperspectral satellites will provide imagery at greater spatial and temporal scales, and there is a need to assess techniques for mapping land cover with these data. Here we used simulated multi-temporal HyspIRI satellite imagery over a 30,000 km2 area in the San Francisco Bay Area, California to assess its capabilities for mapping classes defined by the international Land Cover Classification System (LCCS). We employed a mapping methodology and analysis framework that is applicable to regional and global scales. We used the Random Forests classifier with three sets of predictor variables (reflectance, MNF, hyperspectral metrics), two temporal resolutions (summer, spring-summer-fall), two sample scales (pixel, polygon) and two levels of classification complexity (12, 20 classes). Hyperspectral metrics provided a 16.4-21.8% and 3.1-6.7% increase in overall accuracy relative to MNF and reflectance bands, respectively, depending on pixel or polygon scales of analysis. Multi-temporal metrics improved overall accuracy by 0.9-3.1% over summer metrics, yet increases were only significant at the pixel scale of analysis. Overall accuracy at pixel scales was 72.2% (Kappa 0.70) with three seasons of metrics. Anthropogenic and homogenous natural vegetation classes had relatively high confidence and producer and user accuracies were over 70%; in comparison, woodland and forest classes had considerable confusion. We next focused on plant functional types with relatively pure spectra by removing open-canopy shrublands, woodlands and mixed forests from the classification. This 12-class map had significantly improved accuracy of 85.1% (Kappa 0.83) and most classes had over 70% producer and user accuracies. Finally, we summarized important metrics from the multi-temporal Random Forests to infer the underlying chemical and structural properties that best discriminated our land-cover classes across seasons.

  8. Evaluation of different shadow detection and restoration methods and their impact on vegetation indices using UAV high-resolution imageries over vineyards

    NASA Astrophysics Data System (ADS)

    Aboutalebi, M.; Torres-Rua, A. F.; McKee, M.; Kustas, W. P.; Nieto, H.

    2017-12-01

    Shadows are an unavoidable component of high-resolution imagery. Although shadows can be a useful source of information about terrestrial features, they are a hindrance for image processing and lead to misclassification errors and increased uncertainty in defining surface reflectance properties. In precision agriculture activities, shadows may affect the performance of vegetation indices at pixel and plant scales. Thus, it becomes necessary to evaluate existing shadow detection and restoration methods, especially for applications that make direct use of pixel information to estimate vegetation biomass, leaf area index (LAI), plant water use and stress, chlorophyll content, just to name a few. In this study, four sets of high-resolution imagery captured by the Utah State University - AggieAir Unmanned Aerial Vehicle (UAV) system flown in 2014, 2015, and 2016 over a commercial vineyard located in California for the USDA-Agricultural Research Service Grape Remote sensing Atmospheric Profile and Evapotranspiration Experiment (GRAPEX) Program are used for shadow detection and restoration. Four different methods for shadow detection are compared: (1) unsupervised classification, (2) supervised classification, (3) index-based method, and (4) physically-based method. Also, two different shadow restoration methods are evaluated: (1) linear correlation correction, and (2) gamma correction. The models' performance is evaluated over two vegetation indices: normalized difference vegetation index (NDVI) and LAI for both sunlit and shadowed pixels. Histograms and analysis of variance (ANOVA) are used as performance indicators. Results indicated that the performance of the supervised classification and the index-based method is better than that of the other methods. In addition, there is a statistical difference between the average of NDVI and LAI on the sunlit and shadowed pixels. Among the shadow restoration methods, gamma correction visually works better than the linear correlation correction. Moreover, the statistical difference between sunlit and shadowed NDVI and LAI decreases after the application of the gamma restoration method. Potential effects of shadows on modeling surface energy balance and evapotranspiration using very high resolution UAV imagery over the GRAPEX vineyard will be discussed.
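
    A minimal sketch of a gamma-correction shadow restoration, assuming the gamma is chosen so that the mean of the shadowed pixels matches the mean of the sunlit pixels in a reflectance band; this matching rule is an assumption, not necessarily the GRAPEX study's procedure.

    ```python
    import numpy as np

    def gamma_restore(band, shadow_mask, eps=1e-6):
        """Brighten shadowed pixels with a gamma chosen so that their mean matches the
        mean of the sunlit pixels (a simple matching rule assumed here)."""
        band = np.clip(band.astype(float), eps, 1.0)             # reflectance in (0, 1]
        sunlit_mean = band[~shadow_mask].mean()
        shadow_mean = band[shadow_mask].mean()
        gamma = np.log(sunlit_mean) / np.log(shadow_mean)         # shadow_mean**gamma = sunlit_mean
        restored = band.copy()
        restored[shadow_mask] = band[shadow_mask] ** gamma
        return restored

    rng = np.random.default_rng(0)
    band = rng.uniform(0.3, 0.8, size=(100, 100))
    mask = np.zeros_like(band, dtype=bool)
    mask[:, :40] = True
    band[mask] *= 0.4                                             # simulate shadowing
    restored = gamma_restore(band, mask)
    print(band[mask].mean(), restored[mask].mean(), band[~mask].mean())
    ```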

  9. A PIXEL COMPOSITION-BASED REFERENCE DATA SET FOR THEMATIC ACCURACY ASSESSMENT

    EPA Science Inventory

    Developing reference data sets for accuracy assessment of land-cover classifications derived from coarse spatial resolution sensors such as MODIS can be difficult due to the large resolution differences between the image data and available reference data sources. Ideally, the spa...

  10. MKID digital readout tuning with deep learning

    NASA Astrophysics Data System (ADS)

    Dodkins, R.; Mahashabde, S.; O'Brien, K.; Thatte, N.; Fruitwala, N.; Walter, A. B.; Meeker, S. R.; Szypryt, P.; Mazin, B. A.

    2018-04-01

    Microwave Kinetic Inductance Detector (MKID) devices offer inherent spectral resolution, simultaneous read out of thousands of pixels, and photon-limited sensitivity at optical wavelengths. Before taking observations, the readout power and frequency of each pixel must be individually tuned, and if the equilibrium state of the pixels changes, then the readout must be retuned. This process has previously been performed through manual inspection, and typically takes one hour per 500 resonators (20 h for a ten-kilopixel array). We present an algorithm based on a deep convolutional neural network (CNN) architecture to determine the optimal bias power for each resonator. The bias point classifications from this CNN model, and those from alternative automated methods, are compared to those from human decisions, and the accuracy of each method is assessed. On a test feed-line dataset, the CNN achieves an accuracy of 90% within 1 dB of the designated optimal value, which is equivalent in accuracy to a randomly selected human operator, and superior to the highest scoring alternative automated method by 10%. On a full ten-kilopixel array, the CNN performs the characterization in a matter of minutes - paving the way for future mega-pixel MKID arrays.

  11. Classification of high dimensional multispectral image data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1993-01-01

    A method for classifying high dimensional remote sensing data is described. The technique uses a radiometric adjustment to allow a human operator to identify and label training pixels by visually comparing the remotely sensed spectra to laboratory reflectance spectra. Training pixels for material without obvious spectral features are identified by traditional means. Features which are effective for discriminating between the classes are then derived from the original radiance data and used to classify the scene. This technique is applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data taken over Cuprite, Nevada in 1992, and the results are compared to an existing geologic map. This technique performed well even with noisy data and the fact that some of the materials in the scene lack absorption features. No adjustment for the atmosphere or other scene variables was made to the data classified. While the experimental results compare favorably with an existing geologic map, the primary purpose of this research was to demonstrate the classification method, as compared to the geology of the Cuprite scene.

  12. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.

  13. Deep learning based classification for head and neck cancer detection with hyperspectral imaging in an animal model

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Lu, Guolan; Wang, Dongsheng; Wang, Xu; Chen, Zhuo Georgia; Muller, Susan; Chen, Amy; Fei, Baowei

    2017-03-01

    Hyperspectral imaging (HSI) is an emerging imaging modality that can provide a noninvasive tool for cancer detection and image-guided surgery. HSI acquires high-resolution images at hundreds of spectral bands, providing rich data for differentiating different types of tissue. We propose a deep learning based method for the detection of head and neck cancer with hyperspectral images. Since deep learning algorithms can learn features hierarchically, the learned features are more discriminative and concise than handcrafted features. In this study, we adopt convolutional neural networks (CNNs) to learn deep features of pixels for classifying each pixel as tumor or normal tissue. We evaluated our proposed classification method on a dataset containing hyperspectral images from 12 tumor-bearing mice. Experimental results show that our method achieved an average accuracy of 91.36%. The preliminary study demonstrated that our deep learning method can be applied to hyperspectral images for detecting head and neck tumors in animal models.

  14. Classification of simulated and actual NOAA-6 AVHRR data for hydrologic land-surface feature definition. [Advanced Very High Resolution Radiometer

    NASA Technical Reports Server (NTRS)

    Ormsby, J. P.

    1982-01-01

    An examination of the possibilities of using Landsat data to simulate NOAA-6 Advanced Very High Resolution Radiometer (AVHRR) data on two channels, as well as using actual NOAA-6 imagery, for large-scale hydrological studies is presented. A running average over 18 consecutive pixels was used to approximate 1 km resolution; the data taken by the Landsat scanners were scaled up to 8-bit data and investigated for different gray levels. AVHRR data comprising five channels of 10-bit, band-interleaved information covering 10 deg latitude were analyzed and a suitable pixel grid was chosen for comparison with the Landsat data in a supervised classification format, in an unsupervised mode, and against ground truth. Landcover delineation was explored by removing snow, water, and cloud features from the cluster analysis, and resulted in less than 10% difference. Low-resolution, large-scale data were determined to be useful for characterizing some landcover features if weekly and/or monthly updates are maintained.

  15. Classification by diagnosing all absorption features (CDAF) for the most abundant minerals in airborne hyperspectral images

    NASA Astrophysics Data System (ADS)

    Mobasheri, Mohammad Reza; Ghamary-Asl, Mohsen

    2011-12-01

    Imaging through hyperspectral technology is a powerful tool that can be used to spectrally identify and spatially map materials based on their specific absorption characteristics in the electromagnetic spectrum. A robust method called Tetracorder has shown its effectiveness at material identification and mapping, using a set of algorithms within an expert system decision-making framework. In this study, using some stages of Tetracorder, a technique called classification by diagnosing all absorption features (CDAF) is introduced. This technique enables one to assign a class to the most abundant mineral in each pixel with high accuracy. The technique is based on the derivation of information from reflectance spectra of the image. This can be done through extraction of the spectral absorption features of any minerals from their respective laboratory-measured reflectance spectra, and comparing them with those extracted from the pixels in the image. The CDAF technique has been executed on the AVIRIS image where the results show an overall accuracy of better than 96%.

  16. Early Validation of Sentinel-2 L2A Processor and Products

    NASA Astrophysics Data System (ADS)

    Pflug, Bringfried; Main-Knorn, Magdalena; Bieniarz, Jakub; Debaecker, Vincent; Louis, Jerome

    2016-08-01

    Sentinel-2 is a constellation of two polar orbiting satellite units each one equipped with an optical imaging sensor MSI (Multi-Spectral Instrument). Sentinel-2A was launched on June 23, 2015 and Sentinel-2B will follow in 2017. The Level-2A (L2A) processor Sen2Cor implemented for Sentinel-2 data provides a scene classification image, aerosol optical thickness (AOT) and water vapour (WV) maps and the Bottom-Of-Atmosphere (BOA) corrected reflectance product. First validation results of Sen2Cor scene classification showed an overall accuracy of 81%. AOT at 550 nm is estimated by Sen2Cor with uncertainty of 0.035 for cloudless images and locations with dense dark vegetation (DDV) pixels present in the image. Aerosol estimation fails if the image contains no DDV-pixels. Mean difference between Sen2Cor WV and ground-truth is 0.29 cm. Uncertainty of up to 0.04 was found for the BOA reflectance product.

  17. Combined Raman and autofluorescence ex vivo diagnostics of skin cancer in near-infrared and visible regions

    NASA Astrophysics Data System (ADS)

    Bratchenko, Ivan A.; Artemyev, Dmitry N.; Myakinin, Oleg O.; Khristoforova, Yulia A.; Moryatov, Alexander A.; Kozlov, Sergey V.; Zakharov, Valery P.

    2017-02-01

    The differentiation of skin melanomas and basal cell carcinomas (BCCs) was demonstrated based on combined analysis of Raman and autofluorescence spectra stimulated by visible and NIR lasers. It was tested ex vivo on 39 melanomas and 40 BCCs. Six spectroscopic criteria utilizing information about alteration of melanin, porphyrin, flavin, lipid, and collagen content in tumors compared with healthy skin were proposed. The measured correlation between the proposed criteria makes it possible to define weakly correlated criteria groups for discriminant analysis and principal components analysis. It was shown that the accuracy of cancerous tissue classification reaches 97.3% for a combined 6-criteria multimodal algorithm, while the accuracy determined separately for each modality does not exceed 79%. The combined 6-D method is a rapid and reliable tool for malignant skin detection and classification.

  18. Thematic accuracy of the 1992 National Land-Cover Data for the eastern United States: Statistical methodology and regional results

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Smith, J.H.; Yang, L.

    2003-01-01

    The accuracy of the 1992 National Land-Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or alternate reference label determined for a sample pixel and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Results are reported for each of the four regions comprising the eastern United States for both Anderson Level I and II classifications. Overall accuracies for Levels I and II are 80% and 46% for New England, 82% and 62% for New York/New Jersey (NY/NJ), 70% and 43% for the Mid-Atlantic, and 83% and 66% for the Southeast.
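
    The agreement rule described above can be restated as a small function: a sample pixel counts as a match when the primary or alternate reference label equals a mode class of the mapped 3×3 block centred on it. The land-cover codes in the example are hypothetical.

    ```python
    import numpy as np

    def agrees(mapped_block, primary_label, alternate_label=None):
        """Match if the primary or alternate reference label equals a mode class
        of the mapped 3x3 block centred on the sample pixel."""
        vals, counts = np.unique(mapped_block, return_counts=True)
        mode_classes = set(vals[counts == counts.max()])   # keep all tied mode classes
        return primary_label in mode_classes or alternate_label in mode_classes

    block = np.array([[41, 41, 42],
                      [41, 43, 42],
                      [41, 42, 42]])         # hypothetical Anderson Level II codes
    print(agrees(block, primary_label=42))   # True: 42 ties with 41 as a mode class
    ```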

  19. Application of Skylab EREP data for land use management

    NASA Technical Reports Server (NTRS)

    Simonett, D. S. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. The 1.09-1.19 micron band proved to be very valuable for discriminating a variety of land use categories, including agriculture, forest, and urban classes. The 1.55-1.75 micron band proved very useful in combination with the 1.09-1.19 micron band. Misregistration between spectral bands, even by as little as 1/2 pixel, may degrade classification accuracy. Identification accuracy of boundary or border pixels was as much as 13% lower than the accuracy for identifying internal field pixels. The principal conclusion with respect to the S190B camera system is that the higher resolution of the S190B system in comparison to previous space photography (Gemini, Apollo), to the S190A system (Skylab), and to LANDSAT imagery significantly increases the range of additional discrimination achievable.

  20. A custom hardware classifier for bruised apple detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Cárdenas, Javier; Figueroa, Miguel; Pezoa, Jorge E.

    2015-09-01

    We present a custom digital architecture for bruised apple classification using hyperspectral images in the near infrared (NIR) spectrum. The algorithm classifies each pixel in an image into one of three classes: bruised, non-bruised, and background. We extract two 5-element feature vectors for each pixel using only 10 out of the 236 spectral bands provided by the hyperspectral camera, thereby greatly reducing both the requirements of the imager and the computational complexity of the algorithm. We then use two linear-kernel support vector machines (SVMs) to classify each pixel. Each SVM was trained with 504 windows of size 17×17 pixels taken from 14 hyperspectral images of 320×320 pixels each, for each class. The architecture then computes the percentage of bruised pixels in each apple in order to adequately classify the fruit. We implemented the architecture on a Xilinx Zynq Z-7010 field-programmable gate array (FPGA) and tested it on images from a NIR N17E push-broom camera with a frame rate of 25 fps, a band-pixel rate of 1.888 MHz, and 236 spectral bands between 900 and 1700 nanometers in laboratory conditions. Using 28-bit fixed-point arithmetic, the circuit accurately discriminates 95.2% of the pixels corresponding to an apple, 81% of the pixels corresponding to a bruised apple, and 96.4% of the background. With the default threshold settings, the highest false positive (FP) rate for a bruised apple is 18.7%. The circuit operates at the native frame rate of the camera, consumes 67 mW of dynamic power, and uses less than 10% of the logic resources on the FPGA.

  1. A systematic review of automated melanoma detection in dermatoscopic images and its ground truth data

    NASA Astrophysics Data System (ADS)

    Ali, Abder-Rahman A.; Deserno, Thomas M.

    2012-02-01

    Malignant melanoma is the third most frequent type of skin cancer and one of the most malignant tumors, accounting for 79% of skin cancer deaths. Melanoma is highly curable if diagnosed early and treated properly, as the survival rate varies between 15% and 65% from early to terminal stages, respectively. So far, melanoma diagnosis depends subjectively on the dermatologist's expertise. Computer-aided diagnosis (CAD) systems based on epiluminescence light microscopy can provide an objective second opinion on pigmented skin lesions (PSL). This work systematically analyzes the evidence of the effectiveness of automated melanoma detection in images from a dermatoscopic device. Automated CAD applications were analyzed to estimate their diagnostic outcome. Searching online databases for publication dates between 1985 and 2011, a total of 182 studies on dermatoscopic CAD were found. With respect to the systematic selection criteria, 9 studies were included, published between 2002 and 2011. Those studies formed databases of 14,421 dermatoscopic images including both malignant "melanoma" and benign "nevus", with 8,110 images being available, ranging in resolution from 150 x 150 to 1568 x 1045 pixels. The maximum and minimum sensitivity are 100.0% and 80.0%, and the maximum and minimum specificity are 98.14% and 61.6%, respectively. The area under the receiver operating characteristic curve (AUC), pooled sensitivity, pooled specificity, and pooled diagnostic odds ratio (DOR) are 0.87, 0.90, 0.81, and 15.89, respectively. Thus, although automated melanoma detection showed good accuracy in terms of sensitivity, specificity, and AUC, diagnostic performance in terms of DOR was found to be poor. This might be due to the lack of dermatoscopic image resources (ground truth) that are needed for comprehensive assessment of diagnostic performance. In future work, we aim to test this hypothesis by joining dermatoscopic images into a unified database that serves as a standard reference for dermatology-related research in PSL classification.
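    For reference, the diagnostic odds ratio for a single study can be computed directly from its 2×2 confusion table, as in the small sketch below; pooled meta-analytic estimates such as those quoted above are obtained differently, across studies, and the counts used here are hypothetical.

      # Illustration only: diagnostic odds ratio (DOR) from one study's 2x2 table.
      # Counts are hypothetical, not taken from the review above.
      def diagnostic_odds_ratio(tp, fp, fn, tn):
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          dor = (tp * tn) / (fp * fn)  # equivalently (sens/(1-sens)) / ((1-spec)/spec)
          return sensitivity, specificity, dor

      print(diagnostic_odds_ratio(tp=90, fp=19, fn=10, tn=81))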

  2. Prediction of Skin Sensitization Potency Using Machine Learning Approaches

    EPA Science Inventory

    Replacing animal tests currently used for regulatory hazard classification of skin sensitizers is one of ICCVAM’s top priorities. Accordingly, U.S. federal agency scientists are developing and evaluating computational approaches to classify substances as sensitizers or nons...

  3. Comparison of Random Forest, k-Nearest Neighbor, and Support Vector Machine Classifiers for Land Cover Classification Using Sentinel-2 Imagery

    PubMed Central

    Thanh Noi, Phan; Kappas, Martin

    2017-01-01

    In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km² within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets. PMID:29271909
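    The experimental design of comparing the three classifiers across increasing training sample sizes per class can be sketched as follows; the synthetic features, class count and sample-size steps are placeholders, not the Sentinel-2 data of the study.

      # Sketch: compare RF, kNN and SVM overall accuracy as the training
      # sample size per class grows (synthetic data, hypothetical sizes).
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.metrics import accuracy_score

      X, y = make_classification(n_samples=12000, n_features=10, n_informative=8,
                                 n_classes=6, random_state=0)
      X_train_all, X_test = X[:9000], X[9000:]
      y_train_all, y_test = y[:9000], y[9000:]

      classifiers = {
          "RF": RandomForestClassifier(n_estimators=200, random_state=0),
          "kNN": KNeighborsClassifier(n_neighbors=5),
          "SVM": SVC(kernel="rbf", C=10, gamma="scale"),
      }
      for n in (50, 250, 750, 1250):     # training pixels per class (hypothetical)
          idx = np.concatenate([np.where(y_train_all == c)[0][:n] for c in range(6)])
          for name, clf in classifiers.items():
              clf.fit(X_train_all[idx], y_train_all[idx])
              oa = accuracy_score(y_test, clf.predict(X_test))
              print(f"n/class={n:4d}  {name}: OA={oa:.3f}")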

  4. Comparison of Random Forest, k-Nearest Neighbor, and Support Vector Machine Classifiers for Land Cover Classification Using Sentinel-2 Imagery.

    PubMed

    Thanh Noi, Phan; Kappas, Martin

    2017-12-22

    In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km² within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets.

  5. a Region-Based Multi-Scale Approach for Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only the spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucially important for increasing the classification accuracy, and it depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.

  6. Classification of Active Microwave and Passive Optical Data Based on Bayesian Theory and Mrf

    NASA Astrophysics Data System (ADS)

    Yu, F.; Li, H. T.; Han, Y. S.; Gu, H. Y.

    2012-08-01

    A classifier based on Bayesian theory and Markov random field (MRF) is presented to classify active microwave and passive optical remote sensing data, which have demonstrated their respective advantages in the inversion of surface soil moisture content. In the method, the VV and VH polarizations of ASAR and all seven TM bands are taken as the input of the classifier to obtain the class label of each pixel of the images. The model is validated to assess the necessity of integrating TM and ASAR; the overall classification accuracy is 89.4%, an increase of 11.5% over classification with TM alone, illustrating that the synthesis of active microwave and passive optical remote sensing data is efficient and promising for classification.
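    The general idea of fusing per-pixel likelihoods from two sensors under an MRF smoothness prior can be sketched with a simple iterated-conditional-modes (ICM) loop; this is only an illustration of the principle, not the paper's model, and all inputs and the smoothness weight beta are synthetic.

      # Sketch: combine per-pixel class log-likelihoods from two sensors and
      # smooth the label map with a Potts-type MRF prior via ICM.
      import numpy as np

      rng = np.random.default_rng(0)
      H, W, K = 64, 64, 4
      loglik_optical = rng.normal(size=(H, W, K))   # stand-in for TM-band likelihoods
      loglik_radar = rng.normal(size=(H, W, K))     # stand-in for ASAR VV/VH likelihoods
      unary = loglik_optical + loglik_radar         # independent-sensor fusion

      labels = unary.argmax(axis=2)                 # initial maximum-likelihood labels
      beta = 1.5                                    # hypothetical smoothness weight
      for _ in range(5):                            # ICM sweeps
          for i in range(H):
              for j in range(W):
                  neigh = [labels[x, y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                           if 0 <= x < H and 0 <= y < W]
                  score = unary[i, j] + beta * np.array([sum(n == k for n in neigh)
                                                         for k in range(K)])
                  labels[i, j] = int(score.argmax())
      print("label histogram:", np.bincount(labels.ravel(), minlength=K))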

  7. Multimodal segmentation of optic disc and cup from stereo fundus and SD-OCT images

    NASA Astrophysics Data System (ADS)

    Miri, Mohammad Saleh; Lee, Kyungmoo; Niemeijer, Meindert; Abràmoff, Michael D.; Kwon, Young H.; Garvin, Mona K.

    2013-03-01

    Glaucoma is one of the major causes of blindness worldwide. One important structural parameter for the diagnosis and management of glaucoma is the cup-to-disc ratio (CDR), which tends to become larger as glaucoma progresses. While approaches exist for segmenting the optic disc and cup within fundus photographs, and more recently, within spectral-domain optical coherence tomography (SD-OCT) volumes, no approaches have been reported for the simultaneous segmentation of these structures within both modalities combined. In this work, a multimodal pixel-classification approach for the segmentation of the optic disc and cup within fundus photographs and SD-OCT volumes is presented. In particular, after segmentation of other important structures (such as the retinal layers and retinal blood vessels) and fundus-to-SD-OCT image registration, features are extracted from both modalities and a k-nearest-neighbor classification approach is used to classify each pixel as cup, rim, or background. The approach is evaluated on 70 multimodal image pairs from 35 subjects in a leave-10%-out fashion (by subject). A significant improvement in classification accuracy is obtained using the multimodal approach over that obtained from the corresponding unimodal approach (97.8% versus 95.2%; p < 0.05; paired t-test).
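    A subject-wise "leave-10%-out" evaluation of a k-nearest-neighbour pixel classifier can be sketched as follows; the per-pixel features and subject grouping are synthetic stand-ins, not the multimodal features of the paper.

      # Sketch: kNN pixel classification (cup / rim / background) evaluated with
      # subject-grouped 10% hold-outs. Data are synthetic.
      import numpy as np
      from sklearn.model_selection import GroupShuffleSplit
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(7000, 6))          # per-pixel feature vectors
      y = rng.integers(0, 3, size=7000)       # 0=background, 1=rim, 2=cup
      subjects = rng.integers(0, 35, size=7000)

      splitter = GroupShuffleSplit(n_splits=10, test_size=0.1, random_state=0)
      accs = []
      for train_idx, test_idx in splitter.split(X, y, groups=subjects):
          knn = KNeighborsClassifier(n_neighbors=15).fit(X[train_idx], y[train_idx])
          accs.append(knn.score(X[test_idx], y[test_idx]))
      print("mean pixel accuracy:", np.mean(accs))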

  8. Multi-resolution analysis using integrated microscopic configuration with local patterns for benign-malignant mass classification

    NASA Astrophysics Data System (ADS)

    Rabidas, Rinku; Midya, Abhishek; Chakraborty, Jayasree; Sadhu, Anup; Arif, Wasim

    2018-02-01

    In this paper, a Curvelet-based local attribute, the Curvelet-Local configuration pattern (C-LCP), is introduced for the characterization of mammographic masses as benign or malignant. Among different anomalies such as microcalcification, bilateral asymmetry, architectural distortion, and masses, the reason for targeting mass lesions is their variation in shape, size, and margin, which makes diagnosis a challenging task. The multi-resolution property of the Curvelet transform, being efficient for classification, is exploited, and local information is extracted from the coefficients of each subband using the Local configuration pattern (LCP). The microscopic measures in concatenation with the local textural information provide more discriminating capability than either does individually. The measures embody the magnitude information along with the pixel-wise relationships among the neighboring pixels. The performance analysis is conducted with 200 mammograms of the DDSM database containing 100 benign and 100 malignant mass cases. The optimal set of features is acquired via a stepwise logistic regression method and the classification is carried out with Fisher linear discriminant analysis. The best area under the receiver operating characteristic curve and accuracy of 0.95 and 87.55% are achieved with the proposed method, which is further compared with some of the state-of-the-art competing methods.

  9. Unsupervised classification of scattering behavior using radar polarimetry data

    NASA Technical Reports Server (NTRS)

    Van Zyl, Jakob J.

    1989-01-01

    The use of imaging radar polarimeter data for unsupervised classification of scattering behavior is described by comparing the polarization properties of each pixel in an image to those of simple classes of scattering such as even number of reflections, odd number of reflections, and diffuse scattering. For example, when this algorithm is applied to data acquired over the San Francisco Bay area in California, it classifies scattering by the ocean as being similar to that predicted by the class of odd number of reflections, scattering by the urban area as being similar to that predicted by the class of even number of reflections, and scattering by the Golden Gate Park as being similar to that predicted by the diffuse scattering class. It also classifies the scattering by a lighthouse in the ocean and boats on the ocean surface as being similar to that predicted by the even number of reflections class, making it easy to identify these objects against the background of the surrounding ocean. The algorithm is also applied to forested areas and shows that scattering from clear-cut areas and agricultural fields is mostly similar to that predicted by the odd number of reflections class, while the scattering from tree-covered areas generally is classified as being a mixture of pixels exhibiting the characteristics of all three classes, although each pixel is identified with only a single class.

  10. Mangrove classification through the use of object oriented classification and support vector machine of lidar datasets: a case study in Naawan and Manticao, Misamis Oriental, Philippines

    NASA Astrophysics Data System (ADS)

    Jalbuena, Rey L.; Peralta, Rudolph V.; Tamondong, Ayin M.

    2016-10-01

    Mangroves are trees or shrubs that grow at the interface between the land and the sea in tropical and sub-tropical latitudes. Mangroves are essential in supporting various marine life; thus, it is important to preserve and manage these areas. There are many approaches to creating mangrove maps, one of which is through the use of Light Detection and Ranging (LiDAR). It is a remote sensing technique which uses light pulses to measure distances and to generate three-dimensional point clouds of the Earth's surface. In this study, topographic LiDAR data were used to analyze the geophysical features of the terrain and create a mangrove map. The dataset was first pre-processed using the LAStools software, which is used to process LiDAR data sets and create different layers such as DSM, DTM, nDSM, slope, LiDAR intensity, LiDAR number of first returns, and CHM. All the aforementioned layers together were used to derive the mangrove class. Then, an Object-Based Image Analysis (OBIA) was performed using eCognition. OBIA analyzes groups of pixels with similar properties, called objects, as compared to the traditional pixel-based approach, which only examines a single pixel. Multi-threshold and multiresolution segmentation were used to delineate the different classes and split the image into objects. There are four levels of classification: first is the separation of Land from Water; then the Land class is further divided into Ground and Non-ground objects; furthermore, classification of Non-vegetation, Mangroves, and Other Vegetation is done from the Non-ground objects; lastly, separation of the mangrove class is done through the use of field-verified training points, which are then fed into a Support Vector Machine (SVM) classification. Different classes were separated using different layer feature properties, such as mean, mode, standard deviation, geometrical properties, neighbor-related properties, and textural properties. Accuracy assessment was done using a different set of field validation points. This workflow was applied in the classification of mangroves in a LiDAR dataset of Naawan and Manticao, Misamis Oriental, Philippines. The process presented in this study shows that LiDAR data and its derivatives can be used in extracting and creating mangrove maps, which can be helpful in managing the coastal environment.

  11. Non-Euclidean phasor analysis for quantification of oxidative stress in ex vivo human skin exposed to sun filters using fluorescence lifetime imaging microscopy.

    PubMed

    Osseiran, Sam; Roider, Elisabeth M; Wang, Hequn; Suita, Yusuke; Murphy, Michael; Fisher, David E; Evans, Conor L

    2017-12-01

    Chemical sun filters are commonly used as active ingredients in sunscreens due to their efficient absorption of ultraviolet (UV) radiation. Yet, it is known that these compounds can photochemically react with UV light and generate reactive oxygen species and oxidative stress in vitro, though this has yet to be validated in vivo. One label-free approach to probe oxidative stress is to measure and compare the relative endogenous fluorescence generated by cellular coenzymes nicotinamide adenine dinucleotides and flavin adenine dinucleotides. However, chemical sun filters are fluorescent, with emissive properties that contaminate endogenous fluorescent signals. To accurately distinguish the source of fluorescence in ex vivo skin samples treated with chemical sun filters, fluorescence lifetime imaging microscopy data were processed on a pixel-by-pixel basis using a non-Euclidean separation algorithm based on Mahalanobis distance and validated on simulated data. Applying this method, ex vivo samples exhibited a small oxidative shift when exposed to sun filters alone, though this shift was much smaller than that imparted by UV irradiation. Given the need for investigative tools to further study the clinical impact of chemical sun filters in patients, the reported methodology may be applied to visualize chemical sun filters and measure oxidative stress in patients' skin. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
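    The core of such a non-Euclidean separation, assigning each pixel to the closer of two reference distributions by Mahalanobis distance, can be sketched briefly; the reference distributions and pixel coordinates below are synthetic, and this is an illustration of the principle rather than the published algorithm.

      # Sketch: per-pixel Mahalanobis-distance assignment to one of two
      # reference fluorescence populations. All data are synthetic.
      import numpy as np

      rng = np.random.default_rng(0)
      ref_endogenous = rng.multivariate_normal([0.4, 0.3], [[0.01, 0], [0, 0.01]], 500)
      ref_sunfilter = rng.multivariate_normal([0.7, 0.2], [[0.02, 0.005], [0.005, 0.01]], 500)

      def mahalanobis(points, ref):
          mu = ref.mean(axis=0)
          cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))
          d = points - mu
          return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

      pixels = rng.uniform(0, 1, size=(1000, 2))   # per-pixel phasor coordinates
      is_sunfilter = mahalanobis(pixels, ref_sunfilter) < mahalanobis(pixels, ref_endogenous)
      print("pixels attributed to sun-filter fluorescence:", int(is_sunfilter.sum()))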

  12. Implications of sensor design for coral reef detection: Upscaling ground hyperspectral imagery in spatial and spectral scales

    NASA Astrophysics Data System (ADS)

    Caras, Tamir; Hedley, John; Karnieli, Arnon

    2017-12-01

    Remote sensing offers a potential tool for large scale environmental surveying and monitoring. However, remote observations of coral reefs are difficult especially due to the spatial and spectral complexity of the target compared to sensor specifications as well as the environmental implications of the water medium above. The development of sensors is driven by technological advances and the desired products. Currently, spaceborne systems are technologically limited to a choice between high spectral resolution and high spatial resolution, but not both. The current study explores the dilemma of whether future sensor design for marine monitoring should prioritise improving spatial or spectral resolution. To address this question, a spatially and spectrally resampled ground-level hyperspectral image was used to test two classification elements: (1) how the tradeoff between spatial and spectral resolutions affects classification; and (2) how noise reduction by a majority filter might improve classification accuracy. The studied reef, in the Gulf of Aqaba (Eilat), Israel, is heterogeneous and complex so the local substrate patches are generally finer than currently available imagery. Therefore, the tested spatial resolution was broadly divided into four scale categories from five millimeters to one meter. Spectral resolution resampling aimed to mimic currently available and forthcoming spaceborne sensors such as (1) Environmental Mapping and Analysis Program (EnMAP) that is characterized by 25 bands of 6.5 nm width; (2) VENμS with 12 narrow bands; and (3) the WorldView series with broadband multispectral resolution. Results suggest that spatial resolution should generally be prioritized for coral reef classification because the finer spatial scale tested (pixel size < 0.1 m) may compensate for some low spectral resolution drawbacks. In this regard, it is shown that the post-classification majority filtering substantially improves the accuracy of all pixel sizes up to the point where the kernel size reaches the average unit size (pixel < 0.25 m). However, careful investigation as to the effect of band distribution and choice could improve the sensor suitability for the marine environment task. With this in mind, while the focus in this study was on the technologically limited spaceborne design, aerial sensors may presently provide an opportunity to implement the suggested setup.
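    Post-classification majority (mode) filtering of a label map, as discussed above, can be sketched in a few lines; the kernel size and the synthetic label map below are placeholders.

      # Sketch: majority filtering of a noisy per-pixel classification map.
      import numpy as np
      from scipy.ndimage import generic_filter

      def majority(values):
          return np.bincount(values.astype(int)).argmax()

      rng = np.random.default_rng(0)
      label_map = rng.integers(0, 5, size=(100, 100))       # noisy classification
      smoothed = generic_filter(label_map, majority, size=3, mode="nearest")
      print("labels changed by filtering:", int((smoothed != label_map).sum()))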

  13. Corn and soybean Landsat MSS classification performance as a function of scene characteristics

    NASA Technical Reports Server (NTRS)

    Batista, G. T.; Hixson, M. M.; Bauer, M. E.

    1982-01-01

    In order to fully utilize remote sensing to inventory crop production, it is important to identify the factors that affect the accuracy of Landsat classifications. The objective of this study was to investigate the effect of scene characteristics involving crop, soil, and weather variables on the accuracy of Landsat classifications of corn and soybeans. Segments sampling the U.S. Corn Belt were classified using a Gaussian maximum likelihood classifier on multitemporally registered data from two key acquisition periods. Field size had a strong effect on classification accuracy with small fields tending to have low accuracies even when the effect of mixed pixels was eliminated. Other scene characteristics accounting for variability in classification accuracy included proportions of corn and soybeans, crop diversity index, proportion of all field crops, soil drainage, slope, soil order, long-term average soybean yield, maximum yield, relative position of the segment in the Corn Belt, weather, and crop development stage.

  14. Site-Specific Differentiation of Fibroblasts in Normal and Scleroderma Skin

    DTIC Science & Technology

    2010-06-01

    Report fragments only (DTIC SF-298 form): Title: Site-Specific Differentiation of Fibroblasts in Normal and Scleroderma Skin; Principal Investigator: Howard Y. Chang, M.D., Ph.D.; 2010; subject terms: scleroderma, fibroblasts, gene expression. The report concerns activated fibroblasts from systemic sclerosis (SSc).

  15. Effects of Digitization and JPEG Compression on Land Cover Classification Using Astronaut-Acquired Orbital Photographs

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Webb, Edward L.; Evangelista, Arlene

    2000-01-01

    Studies that utilize astronaut-acquired orbital photographs for visual or digital classification require high-quality data to ensure accuracy. The majority of images available must be digitized from film and electronically transferred to scientific users. This study examined the effect of scanning spatial resolution (1200, 2400 pixels per inch [21.2 and 10.6 microns/pixel]), scanning density range option (Auto, Full) and compression ratio (non-lossy [TIFF], and lossy JPEG 10:1, 46:1, 83:1) on digital classification results of an orbital photograph from the NASA - Johnson Space Center archive. Qualitative results suggested that 1200 ppi was acceptable for visual interpretive uses for major land cover types. Moreover, Auto scanning density range was superior to Full density range. Quantitative assessment of the processing steps indicated that, while 2400 ppi scanning spatial resolution resulted in more classified polygons as well as a substantially greater proportion of polygons < 0.2 ha, overall agreement between 1200 ppi and 2400 ppi was quite high. JPEG compression up to approximately 46:1 also did not appear to have a major impact on quantitative classification characteristics. We conclude that both 1200 and 2400 ppi scanning resolutions are acceptable options for this level of land cover classification, as well as a compression ratio at or below approximately 46:1. Auto range density should always be used during scanning because it acquires more of the information from the film. The particular combination of scanning spatial resolution and compression level will require a case-by-case decision and will depend upon memory capabilities, analytical objectives and the spatial properties of the objects in the image.

  16. Detection of Aspens Using High Resolution Aerial Laser Scanning Data and Digital Aerial Images

    PubMed Central

    Säynäjoki, Raita; Packalén, Petteri; Maltamo, Matti; Vehmas, Mikko; Eerikäinen, Kalle

    2008-01-01

    The aim was to use high resolution Aerial Laser Scanning (ALS) data and aerial images to detect European aspen (Populus tremula L.) from among other deciduous trees. The field data consisted of 14 sample plots of 30 m × 30 m size located in the Koli National Park in North Karelia, Eastern Finland. A Canopy Height Model (CHM) was interpolated from the ALS data with a pulse density of 3.86 pulses/m², low-pass filtered using Height-Based Filtering (HBF) and binarized to create the mask needed to separate the ground pixels from the canopy pixels within individual areas. Watershed segmentation was applied to the low-pass filtered CHM in order to create preliminary canopy segments, from which the non-canopy elements were extracted to obtain the final canopy segmentation, i.e. the ground mask was analysed against the canopy mask. A manual classification of aerial images was employed to separate the canopy segments of deciduous trees from those of coniferous trees. Finally, linear discriminant analysis was applied to the correctly classified canopy segments of deciduous trees to classify them into segments belonging to aspen and those belonging to other deciduous trees. The independent variables used in the classification were obtained from the first pulse ALS point data. The accuracy of discrimination between aspen and other deciduous trees was 78.6%. The independent variables in the classification function were the proportion of vegetation hits, the standard deviation of pulse heights, the accumulated intensity at the 90th percentile and the proportion of laser points reflected at the 60th height percentile. The accuracy of classification corresponded to the validation results of earlier ALS-based studies on the classification of individual deciduous trees into tree species. PMID:27873799
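    The final discrimination step, linear discriminant analysis on segment-level ALS predictors, can be sketched as follows; the feature table is synthetic and the column comments only mirror the kinds of variables listed above.

      # Sketch: LDA separating aspen from other deciduous segments using
      # ALS-derived predictors. Data are synthetic.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      n = 200
      X = np.column_stack([
          rng.uniform(0.5, 1.0, n),    # proportion of vegetation hits
          rng.uniform(0.5, 4.0, n),    # std. dev. of pulse heights
          rng.uniform(10, 120, n),     # accumulated intensity at 90th percentile
          rng.uniform(0.0, 1.0, n),    # proportion of points at 60th height percentile
      ])
      y = rng.integers(0, 2, n)        # 1 = aspen, 0 = other deciduous

      lda = LinearDiscriminantAnalysis().fit(X, y)
      print("training accuracy:", lda.score(X, y))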

  17. Skin Color and Pigmentation in Ethnic Skin.

    PubMed

    Visscher, Marty O

    2017-02-01

    Skin coloration is highly diverse, partly due to the presence of pigmentation. Color variation is related to the extent of ultraviolet radiation exposure, as well as other factors. Inherent skin coloration arises from differences in basal epidermal melanin amount and type. Skin color is influenced by both the quantity and distribution of melanocytes. The effectiveness of inherent pigmentation for protecting living cells also varies. This article discusses skin color, pigmentation, and ethnicity in relation to clinical practice. Color perception, skin typing/classification, and quantitation of pigmentation are reviewed in relation to ethnicity, environmental stresses/irritants, and potential treatment effects. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Object-Based Classification and Change Detection of Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Park, J. G.; Harada, I.; Kwak, Y.

    2016-06-01

    Topography and geology are factors that characterize the distribution of natural vegetation. Topographic contour particularly influences the living conditions of plants, such as soil moisture, sunlight, and wind exposure. Vegetation associations having similar characteristics are present in locations having similar topographic conditions unless natural disturbances such as landslides and forest fires or artificial disturbances such as deforestation and man-made plantation bring about changes in such conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results show that object-based classification is more effective for producing a vegetation map than pixel-based classification.

  19. The effect of spatial, spectral and radiometric factors on classification accuracy using thematic mapper data

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C.; Acevedo, W.; Alexander, D.; Buis, J.; Card, D.

    1984-01-01

    An experiment of a factorial design was conducted to test the effects on classification accuracy of land cover types due to the improved spatial, spectral and radiometric characteristics of the Thematic Mapper (TM) in comparison to the Multispectral Scanner (MSS). High altitude aircraft scanner data from the Airborne Thematic Mapper instrument was acquired over central California in August, 1983 and used to simulate Thematic Mapper data as well as all combinations of the three characteristics for eight data sets in all. Results for the training sites (field center pixels) showed better classification accuracies for MSS spatial resolution, TM spectral bands and TM radiometry in order of importance.

  20. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
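    The basic quantity behind such band selection, mutual information between two bands estimated from their joint histogram, can be sketched briefly; the crude "spatial" variant shown here (averaging each band over a 3×3 neighbourhood first) is only an illustration of how spatial dependency can be folded in, not the paper's exact measure, and the bands are synthetic.

      # Sketch: histogram-based mutual information between two bands, plus a
      # neighbourhood-averaged "spatial" variant. Data are synthetic.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def mutual_information(a, b, bins=64):
          hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = hist / hist.sum()
          px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

      rng = np.random.default_rng(0)
      band_i = rng.random((128, 128))
      band_j = 0.7 * band_i + 0.3 * rng.random((128, 128))   # correlated band

      print("MI:        ", mutual_information(band_i, band_j))
      print("spatial MI:", mutual_information(uniform_filter(band_i, 3),
                                               uniform_filter(band_j, 3)))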

  1. Lossless Compression of Classification-Map Data

    NASA Technical Reports Server (NTRS)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data: for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
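    A toy sketch of why prediction helps on classification maps (it is not the NASA algorithm): predict each label from its left neighbour and compare the empirical entropy of the raw label stream with that of the symbol stream an entropy coder would see after prediction. The synthetic blocky map below is a placeholder.

      # Sketch: left-neighbour prediction of a classification map and the
      # resulting drop in empirical entropy (bits/pixel). Data are synthetic.
      import numpy as np

      def entropy_bits(symbols):
          _, counts = np.unique(symbols, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      rng = np.random.default_rng(0)
      class_map = np.repeat(rng.integers(0, 8, size=(64, 8)), 8, axis=1)  # blocky map

      left = np.roll(class_map, 1, axis=1)
      residual = np.where(class_map == left, -1, class_map)[:, 1:]  # -1 = "same as left"
      print("raw entropy (bits/pixel):       ", entropy_bits(class_map))
      print("post-prediction entropy (bits): ", entropy_bits(residual))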

  2. Classification of weld defect based on information fusion technology for radiographic testing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster–Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.

  3. Classification of weld defect based on information fusion technology for radiographic testing system.

    PubMed

    Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying

    2016-03-01

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect class which is to solve a sample problem involving a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.

  4. Evaluation of the performance of the reduced local lymph node assay for skin sensitization testing.

    PubMed

    Ezendam, Janine; Muller, Andre; Hakkert, Betty C; van Loveren, Henk

    2013-06-01

    The local lymph node assay (LLNA) is the preferred method for classification of sensitizers within REACH. To reduce the number of mice used for the identification of sensitizers, the reduced LLNA was proposed, which uses only the high dose group of the LLNA. To evaluate the performance of this method for classification, LLNA data from REACH registrations were used and classification based on all dose groups was compared to classification based on the high dose group. We confirmed previous examinations of the reduced LLNA showing that this method is less sensitive compared to the LLNA. The reduced LLNA misclassified 3.3% of the sensitizers identified in the LLNA; misclassification occurred in all potency classes, with no clear association with irritant properties. It is therefore not possible to predict beforehand which substances might be misclassified. Another limitation of the reduced LLNA is that skin sensitizing potency cannot be assessed. For these reasons, it is not recommended to use the reduced LLNA as a stand-alone assay for skin sensitization testing within REACH. In the future, the reduced LLNA might be of added value in a weight of evidence approach to confirm negative results obtained with non-animal approaches. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Self-estimation or phototest measurement of skin UV sensitivity and its association with people's attitudes towards sun exposure.

    PubMed

    Falk, Magnus

    2014-02-01

    Fitzpatrick's classification is the most common way of assessing skin UV sensitivity. The study aim was to investigate how self-estimated and actual UV sensitivity, as measured by phototest, are associated with attitudes towards sunbathing and the propensity to increase sun protection, as well as the correlation between self-estimated and actual UV sensitivity. A total of 166 primary healthcare patients filled out a questionnaire investigating attitudes towards sunbathing and the propensity to increase sun protection. They reported their skin type according to Fitzpatrick, and a UV sensitivity phototest was performed. Self-rated low UV sensitivity (skin type III-VI) was associated with a more positive attitude towards sunbathing and a lower propensity to increase sun protection, compared to high UV sensitivity. The correlation between the two methods was weak. The findings might indicate that individuals with a perceived low but in reality high UV sensitivity do not seek adequate sun protection with regard to skin cancer risk. Furthermore, the poor correlation between self-reported and actual UV sensitivity, measured by phototest, makes the clinical use of Fitzpatrick's classification questionable.

  6. The development of the friction coefficient inspection equipment for skin using a load cell.

    PubMed

    Song, Han Wook; Park, Yon Kyu; Lee, Sung Jun; Woo, Sam Yong; Kim, Sun Hyung; Kim, Dal Rae

    2008-01-01

    The skin is an indispensable organ for humans: it contributes to metabolism through its own biochemical functions and protects the body from external stimuli. Recently, the friction coefficient has been used as an index of the progression of bacterial ailments in skin physiology, and its importance has grown in the skin care market with the rising interest in well-being. In addition, the friction coefficient is known to discriminate well among human constitutions, a property utilized in alternative medicine. In this study, we designed a system using a multi-axis load cell and a hemi-circular probe and measured the friction coefficient of hand skin repeatedly. Using this system, the relative repeatability error for the measurement of the friction coefficient was below 4%, and the coefficient was not affected by the curvature of the probe tip. Using this system, we will try to establish a standard for the classification of constitutions.
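    With a multi-axis load cell, the kinetic friction coefficient is the ratio of the tangential to the normal force channel, averaged over the sliding portion of a stroke. The short sketch below illustrates only this ratio, not the authors' instrument software, and the force signals are synthetic.

      # Sketch: friction coefficient from synthetic load-cell channels.
      import numpy as np

      rng = np.random.default_rng(0)
      normal_force = 1.0 + 0.02 * rng.standard_normal(500)                      # N
      tangential_force = 0.45 * normal_force + 0.01 * rng.standard_normal(500)  # N

      mu = tangential_force / normal_force
      print(f"friction coefficient: {mu.mean():.3f} +/- {mu.std():.3f}")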

  7. Ocean Thermal Feature Recognition, Discrimination and Tracking Using Infrared Satellite Imagery

    DTIC Science & Technology

    1991-06-01

    Report fragments only (table of contents and figure caption): a tile is rejected if the temperature in the mapped area exceeds classification criteria; Figure 2.6: Ideal feature space mapping from pattern tile - search tile comparison.

  8. A HYBRID HIGH RESOLUTION IMAGE CLASSIFICATION METHOD FOR MAPPING EELGRASS DISTRIBUTIONS IN YAQUINA BAY ESTUARY, OREGON

    EPA Science Inventory

    False-color infrared aerial photography of the Yaquina Bay Estuary, Oregon was acquired at extreme low tides and digitally orthorectified with a ground pixel resolution of 20 cm to provide data for intertidal vegetation mapping. Submerged, semi-exposed and exposed eelgrass mead...

  9. Evaluation of a CdTe semiconductor based compact γ camera for sentinel lymph node imaging.

    PubMed

    Russo, Paolo; Curion, Assunta S; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caracò, Corradina; Aloj, Luigi; Lastoria, Secondo

    2011-03-01

    The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. The room-temperature CdTe pixel detector (1 mm thick) has 256 × 256 square pixels arranged with a 55 μm pitch (sensitive area 14.08 × 14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor 1:5. The detector is operated at a single low-energy threshold of about 20 keV. For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5 × 10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3 × 10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s with an injected activity of 26 MBq of 99mTc and prior localization with standard gamma camera lymphoscintigraphy. The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter distances from the patient skin.
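    The figure of merit quoted above, background-subtracted sensitivity in cps/kBq, is simply the dark-rate-corrected count rate divided by the source activity; the numbers in the sketch below are hypothetical.

      # Illustration of the sensitivity figure of merit only; values are hypothetical.
      counts = 4200.0             # counts recorded from the source
      acquisition_time = 300.0    # s
      dark_rate = 0.71            # cps, detector background
      activity_kbq = 2000.0       # hypothetical activity seen by the probe, kBq

      sensitivity = (counts / acquisition_time - dark_rate) / activity_kbq
      print(f"sensitivity: {sensitivity:.2e} cps/kBq")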

  10. High sensitivity optical measurement of skin gloss

    PubMed Central

    Ezerskaia, Anna; Ras, Arno; Bloemen, Pascal; Pereira, Silvania F.; Urbach, H. Paul; Varghese, Babu

    2017-01-01

    We demonstrate a low-cost optical method for measuring gloss properties with improved sensitivity in the low gloss regime, relevant for skin gloss properties. The gloss estimation method is based on, on the one hand, the slope of the intensity gradient in the transition regime between specular and diffuse reflection and, on the other hand, the sum of the intensities of pixels above a threshold, derived from a camera image obtained using unpolarized white light illumination. We demonstrate the improved sensitivity of the two proposed methods using Monte Carlo simulations and experiments performed on ISO gloss calibration standards with an optical prototype. The performance and linearity of the method were compared with different professional gloss measurement devices based on the ratio of specular to diffuse intensity. We demonstrate the feasibility for in-vivo skin gloss measurements by quantifying the temporal evolution of skin gloss after application of standard paraffin cream bases on skin. The presented method opens new possibilities in the fields of cosmetology and dermatopharmacology for measuring the skin gloss and resorption kinetics and the pharmacodynamics of various external agents. PMID:29026683
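    The two image-derived quantities mentioned above can be illustrated on a synthetic intensity profile; the threshold, peak geometry and transition window in this sketch are placeholders, not the calibrated method.

      # Sketch: sum of intensities above a threshold and the intensity-gradient
      # slope in the specular-to-diffuse transition region. Data are synthetic.
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.arange(200)
      profile = 20 + 200 * np.exp(-0.5 * ((x - 100) / 6.0) ** 2) + rng.normal(0, 2, 200)

      threshold = 60.0
      sum_above = profile[profile > threshold].sum()                # specular-energy measure

      transition = (profile > 30) & (profile < 180) & (x > 100)     # falling flank of the peak
      slope = np.polyfit(x[transition], profile[transition], 1)[0]  # intensity-gradient slope
      print(f"sum above threshold: {sum_above:.0f}, transition slope: {slope:.1f}")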

  11. User-interactive electronic skin for instantaneous pressure visualization

    NASA Astrophysics Data System (ADS)

    Wang, Chuan; Hwang, David; Yu, Zhibin; Takei, Kuniharu; Park, Junwoo; Chen, Teresa; Ma, Biwu; Javey, Ali

    2013-10-01

    Electronic skin (e-skin) presents a network of mechanically flexible sensors that can conformally wrap irregular surfaces and spatially map and quantify various stimuli. Previous works on e-skin have focused on the optimization of pressure sensors interfaced with an electronic readout, whereas user interfaces based on a human-readable output were not explored. Here, we report the first user-interactive e-skin that not only spatially maps the applied pressure but also provides an instantaneous visual response through a built-in active-matrix organic light-emitting diode display with red, green and blue pixels. In this system, organic light-emitting diodes (OLEDs) are turned on locally where the surface is touched, and the intensity of the emitted light quantifies the magnitude of the applied pressure. This work represents a system-on-plastic demonstration where three distinct electronic components—thin-film transistor, pressure sensor and OLED arrays—are monolithically integrated over large areas on a single plastic substrate. The reported e-skin may find a wide range of applications in interactive input/control devices, smart wallpapers, robotics and medical/health monitoring devices.

  12. User-interactive electronic skin for instantaneous pressure visualization.

    PubMed

    Wang, Chuan; Hwang, David; Yu, Zhibin; Takei, Kuniharu; Park, Junwoo; Chen, Teresa; Ma, Biwu; Javey, Ali

    2013-10-01

    Electronic skin (e-skin) presents a network of mechanically flexible sensors that can conformally wrap irregular surfaces and spatially map and quantify various stimuli. Previous works on e-skin have focused on the optimization of pressure sensors interfaced with an electronic readout, whereas user interfaces based on a human-readable output were not explored. Here, we report the first user-interactive e-skin that not only spatially maps the applied pressure but also provides an instantaneous visual response through a built-in active-matrix organic light-emitting diode display with red, green and blue pixels. In this system, organic light-emitting diodes (OLEDs) are turned on locally where the surface is touched, and the intensity of the emitted light quantifies the magnitude of the applied pressure. This work represents a system-on-plastic demonstration where three distinct electronic components--thin-film transistor, pressure sensor and OLED arrays--are monolithically integrated over large areas on a single plastic substrate. The reported e-skin may find a wide range of applications in interactive input/control devices, smart wallpapers, robotics and medical/health monitoring devices.

  13. High sensitivity optical measurement of skin gloss.

    PubMed

    Ezerskaia, Anna; Ras, Arno; Bloemen, Pascal; Pereira, Silvania F; Urbach, H Paul; Varghese, Babu

    2017-09-01

    We demonstrate a low-cost optical method for measuring gloss properties with improved sensitivity in the low gloss regime, relevant for skin gloss properties. The gloss estimation method is based on, on the one hand, the slope of the intensity gradient in the transition regime between specular and diffuse reflection and, on the other hand, the sum of the intensities of pixels above a threshold, derived from a camera image obtained using unpolarized white light illumination. We demonstrate the improved sensitivity of the two proposed methods using Monte Carlo simulations and experiments performed on ISO gloss calibration standards with an optical prototype. The performance and linearity of the method were compared with different professional gloss measurement devices based on the ratio of specular to diffuse intensity. We demonstrate the feasibility for in-vivo skin gloss measurements by quantifying the temporal evolution of skin gloss after application of standard paraffin cream bases on skin. The presented method opens new possibilities in the fields of cosmetology and dermatopharmacology for measuring the skin gloss and resorption kinetics and the pharmacodynamics of various external agents.

  14. A higher order conditional random field model for simultaneous classification of land cover and land use

    NASA Astrophysics Data System (ADS)

    Albert, Lena; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    We propose a new approach for the simultaneous classification of land cover and land use considering spatial as well as semantic context. We apply a Conditional Random Fields (CRF) consisting of a land cover and a land use layer. In the land cover layer of the CRF, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Intra-layer edges of the CRF model spatial dependencies between neighbouring image sites. All spatially overlapping sites in both layers are connected by inter-layer edges, which leads to higher order cliques modelling the semantic relation between all land cover and land use sites in the clique. A generic formulation of the higher order potential is proposed. In order to enable efficient inference in the two-layer higher order CRF, we propose an iterative inference procedure in which the two classification tasks mutually influence each other. We integrate contextual relations between land cover and land use in the classification process by using contextual features describing the complex dependencies of all nodes in a higher order clique. These features are incorporated in a discriminative classifier, which approximates the higher order potentials during the inference procedure. The approach is designed for input data based on aerial images. Experiments are carried out on two test sites to evaluate the performance of the proposed method. The experiments show that the classification results are improved compared to the results of a non-contextual classifier. For land cover classification, the result is much more homogeneous and the delineation of land cover segments is improved. For the land use classification, an improvement is mainly achieved for land use objects showing non-typical characteristics or similarities to other land use classes. Furthermore, we have shown that the size of the super-pixels has an influence on the level of detail of the classification result, but also on the degree of smoothing induced by the segmentation method, which is especially beneficial for land cover classes covering large, homogeneous areas.

  15. Probabilistic multi-resolution human classification

    NASA Astrophysics Data System (ADS)

    Tu, Jun; Ran, H.

    2006-02-01

    Recently there has been some interest in using infrared cameras for human detection because of their sharply decreasing prices. The training data used in our work for developing the probabilistic template consist of images known to contain humans in different poses and orientations but having the same height. Multiresolution templates are constructed; they are based on contours and edges. This is done so that the model does not learn the intensity variations among the background pixels or the intensity variations among the foreground pixels. Each template at every level is then translated so that the centroid of the non-zero pixels matches the geometrical center of the image. After this normalization step, for each pixel of the template, the probability of it being pedestrian is calculated based on how frequently it appears as 1 in the training data. We also use gait periodicity to verify the pedestrian for the whole blob in a Bayesian, probabilistic manner. The videos had considerable variation in scenes, sizes of people, amount of occlusion, and background clutter. Preliminary experiments show the robustness of the approach.
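    The centroid-normalized probabilistic template described above can be sketched in a few lines; this is not the authors' multiresolution system, and the binary training silhouettes below are synthetic.

      # Sketch: per-pixel probability template from centroid-aligned binary
      # training silhouettes. Data are synthetic.
      import numpy as np

      rng = np.random.default_rng(0)
      H, W, N = 64, 32, 100
      silhouettes = []
      for _ in range(N):
          img = np.zeros((H, W))
          r0, c0 = rng.integers(5, 20), rng.integers(5, 15)
          img[r0:r0 + 35, c0:c0 + 10] = 1.0                 # crude "person" blob
          rows, cols = np.nonzero(img)
          dr = H // 2 - int(rows.mean())                    # shift centroid to image centre
          dc = W // 2 - int(cols.mean())
          silhouettes.append(np.roll(np.roll(img, dr, axis=0), dc, axis=1))

      template = np.mean(silhouettes, axis=0)               # per-pixel P(pedestrian)
      print("max per-pixel probability:", template.max())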

  16. Parameterization of Shape and Compactness in Object-based Image Classification Using Quickbird-2 Imagery

    NASA Astrophysics Data System (ADS)

    Tonbul, H.; Kavzoglu, T.

    2016-12-01

    In recent years, object based image analysis (OBIA) has spread and become a widely accepted technique for the analysis of remotely sensed data. OBIA deals with grouping pixels into homogenous objects based on spectral, spatial and textural features of contiguous pixels in an image. The first stage of OBIA, namely image segmentation, is the most prominent part of object recognition. In this study, multiresolution segmentation, which is a region-based approach, was employed to construct image objects. In the application of multi-resolution, three parameters, namely shape, compactness and scale, must be set by the analyst. Segmentation quality remarkably influences the fidelity of the thematic maps and accordingly the classification accuracy. Therefore, it is of great importance to search for and set optimal values for the segmentation parameters. In the literature, the main focus has been on the definition of the scale parameter, assuming that the effect of the shape and compactness parameters is limited in terms of achieved classification accuracy. The aim of this study is to deeply analyze the influence of shape/compactness parameters by varying their values while using the optimal scale parameter determined by the use of the Estimation of Scale Parameter (ESP-2) approach. A pansharpened QuickBird-2 image covering Trabzon, Turkey, was employed to investigate the objectives of the study. For this purpose, six different combinations of shape/compactness were utilized to make deductions on the behavior of the shape and compactness parameters and the optimal setting for all parameters as a whole. Objects were assigned to classes using the nearest neighbor classifier in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics. The highest overall accuracy (92.3%) was achieved by setting the shape/compactness criteria to 0.3/0.3. The results of this study indicate that shape/compactness parameters can have a significant effect on classification accuracy, with a 4% change in overall accuracy. Also, the statistical significance of differences in accuracy was tested using McNemar's test, and it was found that the difference between the poor and optimal settings of the shape/compactness parameters was statistically significant, suggesting a search for optimal parameterization instead of the default setting.

  17. Computational Short-cutting the Big Data Classification Bottleneck: Using the MODIS Land Cover Product to Derive a Consistent 30 m Landsat Land Cover Product of the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Roy, D. P.

    2016-12-01

    Classification is a fundamental process in remote sensing used to relate pixel values to land cover classes present on the surface. The state of the practice for large area land cover classification is to classify satellite time series metrics with a supervised (i.e., training data dependent) non-parametric classifier. Classification accuracy generally increases with training set size. However, training data collection is expensive and the optimal training distribution over large areas is unknown. The MODIS 500 m land cover product is available globally on an annual basis and so provides a potentially very large source of land cover training data. A novel methodology to classify large volume Landsat data using high quality training data derived automatically from the MODIS land cover product is demonstrated for all of the Conterminous United States (CONUS). The known misclassification accuracy of the MODIS land cover product and the scale difference between the 500 m MODIS and 30 m Landsat data are accommodated by a novel MODIS product filtering, Landsat pixel selection, and iterative training approach to balance the proportion of local and CONUS training data used. Three years of global Web-Enabled Landsat Data (WELD) for all of the CONUS are classified using a random forest classifier and the results assessed using random forest `out-of-bag' training samples. The global WELD data are corrected to surface nadir BRDF-Adjusted Reflectance and are defined in 158 × 158 km tiles in the same projection and nested to the MODIS land cover products. This reduces the need to pre-process the considerable Landsat data volume (more than 14,000 Landsat 5 and 7 scenes per year over the CONUS covering 11,000 million 30 m pixels). The methodology is implemented in a parallel manner on a WELD tile-by-tile basis but provides a wall-to-wall seamless 30 m land cover product. Detailed tile and CONUS results are presented and the potential for global production using the recently available global WELD products is discussed.

  18. Early breast tumor and late SARS detections using space-variant multispectral infrared imaging at a single pixel

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Buss, James R.; Kopriva, Ivica

    2004-04-01

    We proposed the physics approach to solve a physical inverse problem, namely to choose the unique equilibrium solution at the minimum free energy H = E - T₀S, which includes the Wiener solution (minimum least-mean-square error E) and ICA (maximum entropy S) as special cases. The "unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data alone, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing at a single pixel in the real world cases of remote sensing, early tumor detection and SARS. The indeterminacy among the multiple solutions of the inverse problem is regulated, or a solution selected, by means of the absolute minimum of the isothermal free energy as the ground truth of the local equilibrium condition at the single-pixel footprint.

  19. Object-Based Random Forest Classification of Land Cover from Remotely Sensed Imagery for Industrial and Mining Reclamation

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.

    2018-04-01

    The RF method based on grid-search parameter optimization achieved a classification accuracy of 88.16 % in the classification of images with multiple feature variables. This classification accuracy was higher than that of SVM and ANN under the same feature variables. In terms of efficiency, the RF classification method also performed better than SVM and ANN and was more capable of handling multidimensional feature variables. The RF method combined with an object-based analysis approach improved the classification accuracy further. The multiresolution segmentation approach, based on ESP scale parameter optimization, was used to obtain six scales for image segmentation; when the segmentation scale was 49, the classification accuracy reached its highest value of 89.58 %. The classification accuracy of object-based RF classification was thus 1.42 % higher than that of pixel-based classification (88.16 %). Therefore, the RF classification method combined with an object-based analysis approach can achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, the interpretation of remotely sensed imagery using the proposed method can provide technical support and a theoretical reference for remote sensing-based monitoring of land reclamation.
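
    A minimal sketch of grid-search parameter optimization for a random forest, in the spirit of the study above. The feature and label arrays are synthetic placeholders for the per-object (or per-pixel) feature variables, and the parameter grid is an illustrative assumption.

```python
# Grid-search optimization of random forest hyperparameters (placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))      # e.g. spectral, texture and shape variables (placeholder)
y = rng.integers(0, 6, size=2000)    # e.g. six land use classes (placeholder)

param_grid = {"n_estimators": [100, 300, 500],
              "max_features": ["sqrt", 0.3, 0.5]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```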

  20. Classifying environmentally significant urban land uses with satellite imagery.

    PubMed

    Park, Mi-Hyun; Stenstrom, Michael K

    2008-01-01

    We investigated Bayesian networks to classify urban land use from satellite imagery. Landsat Enhanced Thematic Mapper Plus (ETM(+)) images were used for the classification in two study areas: (1) Marina del Rey and its vicinity in the Santa Monica Bay Watershed, CA and (2) drainage basins adjacent to the Sweetwater Reservoir in San Diego, CA. Bayesian networks provided 80-95% classification accuracy for urban land use using four different classification systems. The classifications were robust with small training data sets, at both normal and reduced radiometric resolution. The networks needed a sample size of only 5% of the total data (i.e., 1500 pixels) and only 5- or 6-bit information for accurate classification. The structure of the network explicitly showed the relationships among variables, and the network was also capable of utilizing information from non-spectral data. The classification can be used to provide timely and inexpensive land use information over large areas for environmental purposes such as estimating stormwater pollutant loads.

  1. Skin Parameter Map Retrieval from a Dedicated Multispectral Imaging System Applied to Dermatology/Cosmetology

    PubMed Central

    2013-01-01

    In vivo quantitative assessment of skin lesions is an important step in the evaluation of skin condition. An objective measurement device can serve as a valuable tool for skin analysis. We propose an explorative new multispectral camera specifically developed for dermatology/cosmetology applications. The multispectral imaging system provides images of skin reflectance at different wavebands covering the visible and near-infrared domain. It is coupled with a neural network-based algorithm for the reconstruction of a reflectance cube of cutaneous data. This cube contains the skin optical reflectance spectrum at each pixel of the two-dimensional spatial grid. The reflectance cube is analyzed by an algorithm based on a Kubelka-Munk model combined with an evolutionary algorithm. The technique allows quantitative measurement of cutaneous tissue and retrieves five skin parameter maps: melanin concentration, epidermis and dermis thickness, haemoglobin concentration, and oxygenated haemoglobin. The results retrieved on healthy participants by the algorithm are in good agreement with data from the literature. The usefulness of the developed technique was demonstrated in two experiments: a clinical study of vitiligo and melasma skin lesions, and a skin oxygenation experiment (induced ischemia) with healthy participants in which tissue was recorded both in the normal state and under temporary ischemia. PMID:24159326
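
    The inversion step rests on a Kubelka-Munk description of light transport in the skin layers. A standard textbook form of the model for an optically thick, diffusely scattering layer (a general relation; the paper's exact parametrization may differ) links the absorption and scattering coefficients K and S to the diffuse reflectance R of the layer:

```latex
% Standard Kubelka-Munk relation for an optically thick layer (general form).
\frac{K}{S} = \frac{(1 - R_{\infty})^{2}}{2\,R_{\infty}},
\qquad
R_{\infty} = 1 + \frac{K}{S} - \sqrt{\left(\frac{K}{S}\right)^{2} + 2\,\frac{K}{S}}
```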

  2. Prediction of Chemical Respiratory Sensitizers Using GARD, a Novel In Vitro Assay Based on a Genomic Biomarker Signature

    PubMed Central

    Albrekt, Ann-Sofie; Borrebaeck, Carl A. K.; Lindstedt, Malin

    2015-01-01

    Background: Repeated exposure to certain low molecular weight (LMW) chemical compounds may result in the development of allergic reactions in the skin or in the respiratory tract. In most cases, a given LMW compound selectively sensitizes either the skin, giving rise to allergic contact dermatitis (ACD), or the respiratory tract, giving rise to occupational asthma (OA). To limit the occurrence of allergic diseases, efforts are currently being made to develop predictive assays that accurately identify chemicals capable of inducing such reactions. However, while a few promising methods for prediction of skin sensitization have been described, to date no validated method, in vitro or in vivo, exists that is able to accurately classify chemicals as respiratory sensitizers. Results: Recently, we presented the in vitro based Genomic Allergen Rapid Detection (GARD) assay as a novel testing strategy for classification of skin sensitizing chemicals based on measurement of a genomic biomarker signature. We have expanded the applicability domain of the GARD assay to also classify respiratory sensitizers by identifying a separate biomarker signature containing 389 differentially regulated genes for respiratory sensitizers in comparison to non-respiratory sensitizers. Using an independent data set in combination with supervised machine learning, we validated the assay, showing that the identified genomic biomarker is able to accurately classify respiratory sensitizers. Conclusions: We have identified a genomic biomarker signature for classification of respiratory sensitizers. Combining this newly identified biomarker signature with our previously identified biomarker signature for classification of skin sensitizers, we have developed a novel in vitro testing strategy with a potent ability to predict both skin and respiratory sensitization in the same sample. PMID:25760038

  3. The Hand Eczema Trial (HET): Design of a randomised clinical trial of the effect of classification and individual counselling versus no intervention among health-care workers with hand eczema.

    PubMed

    Ibler, Kristina Sophie; Agner, Tove; Hansen, Jane Lindschou; Gluud, Christian

    2010-08-31

    Hand eczema is the most frequently recognized occupational disease in Denmark, with an incidence of approximately 0.32 per 1000 person-years. Consequences of hand eczema include chronic severe eczema, prolonged sick leave, unemployment, and impaired quality of life. New preventive strategies are needed to reduce occupational hand eczema. We describe the design of a randomised clinical trial to investigate the effects of classification of hand eczema plus individual counselling versus no intervention. The trial includes health-care workers with hand eczema identified from a self-administered questionnaire delivered to 3181 health-care workers in three Danish hospitals. The questionnaire covers the prevalence of hand eczema, knowledge of skin protection, and exposures that can lead to hand eczema. At entry, all participants are assessed regarding: disease severity (Hand Eczema Severity Index); self-evaluated disease severity; number of eruptions; quality of life; skin-protective behaviour; and knowledge of skin protection. The patients are centrally randomised 1:1 to intervention versus no intervention, stratified by hospital, profession, and severity score. The experimental group undergoes patch and prick testing; classification of the hand eczema; demonstration of hand washing and application of emollients; individual counselling; and a skin-care programme. The control group receives no intervention. All participants are reassessed after six months. The primary outcome is observer-blinded assessment of disease severity, and the secondary outcomes are unblinded assessments of disease severity; number of eruptions; knowledge of skin protection; skin-protective behaviour; and quality of life. The trial is registered in ClinicalTrials.Gov, NCT01012453.

  4. Polarimetry based partial least square classification of ex vivo healthy and basal cell carcinoma human skin tissues.

    PubMed

    Ahmad, Iftikhar; Ahmad, Manzoor; Khan, Karim; Ikram, Masroor

    2016-06-01

    Optical polarimetry was employed for the assessment of ex vivo healthy and basal cell carcinoma (BCC) tissue samples from human skin. Polarimetric analyses revealed that depolarization and retardance for the healthy tissue group were significantly higher (p<0.001) than for the BCC tissue group. Histopathology indicated that these differences partially arise from BCC-related characteristic changes in tissue morphology. Wilks' lambda statistics demonstrated the potential of all investigated polarimetric properties for computer-assisted classification of the two tissue groups. Based on the differences in polarimetric properties, partial least squares (PLS) regression classified the samples with 100% accuracy, sensitivity and specificity. These findings indicate that optical polarimetry together with PLS statistics holds promise for automated pathology classification. Copyright © 2016 Elsevier B.V. All rights reserved.
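
    A minimal sketch of a PLS-based two-class tissue classifier in the spirit of the study above: the continuous PLS regression output is thresholded to assign class labels. The polarimetric features and labels are synthetic placeholders, not the measured data.

```python
# PLS regression used as a two-class (PLS-DA style) classifier (placeholder data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))            # e.g. depolarization, retardance, ... per sample
y = rng.integers(0, 2, size=60)         # 0 = healthy, 1 = BCC (placeholder labels)

pls = PLSRegression(n_components=2)
pls.fit(X, y.astype(float))
scores = np.ravel(pls.predict(X))
y_hat = (scores > 0.5).astype(int)       # threshold the continuous PLS output
print("training accuracy:", (y_hat == y).mean())
```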

  5. Terrestrial hyperspectral image shadow restoration through fusion with terrestrial lidar

    NASA Astrophysics Data System (ADS)

    Hartzell, Preston J.; Glennie, Craig L.; Finnegan, David C.; Hauser, Darren L.

    2017-05-01

    Recent advances in remote sensing technology have expanded the acquisition and fusion of active lidar and passive hyperspectral imagery (HSI) from exclusively airborne observations to include terrestrial modalities. In contrast to airborne collection geometry, hyperspectral imagery captured from terrestrial cameras is prone to extensive solar shadowing on vertical surfaces leading to reductions in pixel classification accuracies or outright removal of shadowed areas from subsequent analysis tasks. We demonstrate the use of lidar spatial information for sub-pixel HSI shadow detection and the restoration of shadowed pixel spectra via empirical methods that utilize sunlit and shadowed pixels of similar material composition. We examine the effectiveness of radiometrically calibrated lidar intensity in identifying these similar materials in sun and shade conditions and further evaluate a restoration technique that leverages ratios derived from the overlapping lidar laser and HSI wavelengths. Simulations of multiple lidar wavelengths, i.e., multispectral lidar, indicate the potential for HSI spectral restoration that is independent of the complexity and costs associated with rigorous radiometric transfer models, which have yet to be developed for horizontal-viewing terrestrial HSI sensors. The spectral restoration performance of shadowed HSI pixels is quantified for imagery of a geologic outcrop through improvements in spectral shape, spectral scale, and HSI band correlation.

  6. Decoding brain responses to pixelized images in the primary visual cortex: implications for visual cortical prostheses

    PubMed Central

    Guo, Bing-bing; Zheng, Xiao-lin; Lu, Zhen-gang; Wang, Xing; Yin, Zheng-qin; Hou, Wen-sheng; Meng, Ming

    2015-01-01

    Visual cortical prostheses have the potential to restore partial vision. Still limited by the low-resolution visual percepts provided by visual cortical prostheses, implant wearers can currently only "see" pixelized images, and how to obtain the specific brain responses to different pixelized images in the primary visual cortex (the implant area) is still unknown. We conducted a functional magnetic resonance imaging experiment on normal human participants to investigate the brain activation patterns in response to 18 different pixelized images. Each brain activation pattern comprised 100 voxels selected from the primary visual cortex, with a voxel size of 4 mm × 4 mm × 4 mm. Multi-voxel pattern analysis was used to test whether these 18 different brain activation patterns were specific. We chose a Linear Support Vector Machine (LSVM) as the classifier in this study. The results showed that the classification accuracies of the different brain activation patterns were significantly above chance level, which suggests that the classifier can successfully distinguish the brain activation patterns. Our results suggest that specific brain activation patterns to different pixelized images can be obtained in the primary visual cortex using a 4 mm × 4 mm × 4 mm voxel size and a 100-voxel pattern. PMID:26692860
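
    A minimal sketch of multi-voxel pattern analysis with a linear SVM, as in the study above. The voxel patterns, labels and trial counts are synthetic placeholders.

```python
# Linear SVM decoding of multi-voxel patterns with cross-validation (placeholder data).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(180, 100))       # 180 trials x 100 V1 voxels (placeholder)
y = np.repeat(np.arange(18), 10)      # 18 pixelized-image classes, 10 trials each

clf = LinearSVC(C=1.0, max_iter=10000)
acc = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", acc.mean(), "chance level:", 1 / 18)
```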

  7. Sub-pixel mapping of hyperspectral imagery using super-resolution

    NASA Astrophysics Data System (ADS)

    Sharma, Shreya; Sharma, Shakti; Buddhiraju, Krishna M.

    2016-04-01

    With the development of remote sensing technologies, it has become possible to obtain an overview of landscape elements, which helps in studying changes on the earth's surface due to climatic, geological, geomorphological and human activities. Remote sensing measures the electromagnetic radiation from the earth's surface and matches the spectral similarity between the observed signature and the known standard signatures of various targets. A problem arises, however, when image classification techniques assume pixels to be pure. In hyperspectral imagery, images have high spectral resolution but poor spatial resolution. The spectra obtained are therefore often contaminated by the presence of mixed pixels, which causes misclassification. To exploit this high spectral information, the spatial resolution has to be enhanced. Many factors make spatial resolution one of the most expensive and hardest properties to improve in imaging systems. To address this problem, hyperspectral images are post-processed to retrieve more information from the already acquired data. The approach of enhancing the spatial resolution of images by dividing them into sub-pixels is known as super-resolution, and considerable research has been done in this domain. In this paper, we propose a new method for super-resolution based on ant colony optimization and review the popular methods of sub-pixel mapping of hyperspectral images along with a comparative analysis.

  8. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    NASA Astrophysics Data System (ADS)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors are able to detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite imagery by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction emerge as promising techniques to be analyzed further in the recent development of feature extraction and classification.

  9. Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Cao, Xiangyong; Zhou, Feng; Xu, Lin; Meng, Deyu; Xu, Zongben; Paisley, John

    2018-05-01

    This paper presents a new supervised classification algorithm for remotely sensed hyperspectral images (HSI) which integrates spectral and spatial information in a unified Bayesian framework. First, we formulate the HSI classification problem from a Bayesian perspective. Then, we adopt a convolutional neural network (CNN) to learn the posterior class distributions using a patch-wise training strategy to better use the spatial information. Next, spatial information is further considered by placing a spatial smoothness prior on the labels. Finally, we iteratively update the CNN parameters using stochastic gradient descent (SGD) and update the class labels of all pixel vectors using an alpha-expansion min-cut-based algorithm. Compared with other state-of-the-art methods, the proposed classification method achieves better performance on one synthetic dataset and two benchmark HSI datasets in a number of experimental settings.
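
    A minimal sketch of the patch-wise CNN used to model the per-pixel class posteriors, assuming PyTorch. The patch size, band count and class count are illustrative assumptions, and the MRF smoothness prior / alpha-expansion update described above is omitted.

```python
# Small patch-wise CNN producing per-pixel class posterior estimates (illustrative).
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_bands=103, n_classes=9, patch=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64 * patch * patch, n_classes)

    def forward(self, x):                      # x: (batch, bands, patch, patch)
        h = self.features(x).flatten(1)
        return self.head(h)                    # unnormalized class scores

model = PatchCNN()
logits = model(torch.randn(8, 103, 5, 5))      # a batch of 8 pixel-centred patches
posteriors = logits.softmax(dim=1)             # per-pixel class posterior estimates
print(posteriors.shape)                        # torch.Size([8, 9])
```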

  10. Quantitative image analysis of laminin immunoreactivity in skin basement membrane irradiated with 1 GeV/nucleon iron particles

    NASA Technical Reports Server (NTRS)

    Costes, S.; Streuli, C. H.; Barcellos-Hoff, M. H.

    2000-01-01

    We previously reported that laminin immunoreactivity in mouse mammary epithelium is altered shortly after whole-body irradiation with 0.8 Gy from 600 MeV/nucleon iron ions but is unaffected after exposure to sparsely ionizing radiation. This observation led us to propose that the effect could be due to protein damage from the high ionization density of the ion tracks. If so, we predicted that it would be evident soon after radiation exposure in basement membranes of other tissues and would depend on ion fluence. To test this hypothesis, we used immunofluorescence, confocal laser scanning microscopy, and image segmentation techniques to quantify changes in the basement membrane of mouse skin epidermis. At 1 h after exposure to 1 GeV/nucleon iron ions with doses from 0.03 to 1.6 Gy, neither the visual appearance nor the mean pixel intensity of laminin in the basement membrane of mouse dorsal skin epidermis was altered compared to sham-irradiated tissue. This result does not support the hypothesis that particle traversal directly affects laminin protein integrity. However, the mean pixel intensity of laminin immunoreactivity was significantly decreased in epidermal basement membrane at 48 and 96 h after exposure to 0.8 Gy 1 GeV/nucleon iron ions. We confirmed this effect with two additional antibodies raised against affinity-purified laminin 1 and the E3 fragment of the long-arm of laminin 1. In contrast, collagen type IV, another component of the basement membrane, was unaffected. Our studies demonstrate quantitatively that densely ionizing radiation elicits changes in skin microenvironments distinct from those induced by sparsely ionizing radiation. Such effects may contribute to the carcinogenic potential of densely ionizing radiation by altering cellular signaling cascades mediated by cell-extracellular matrix interactions.

  11. Determination of Classification Accuracy for Land Use/cover Types Using Landsat-Tm Spot-Mss and Multipolarized and Multi-Channel Synthetic Aperture Radar

    NASA Astrophysics Data System (ADS)

    Dondurur, Mehmet

    The primary objective of this study was to determine the degree to which modern SAR systems can be used to obtain information about the Earth's vegetative resources. Information obtainable from microwave synthetic aperture radar (SAR) data was compared with that obtainable from LANDSAT-TM and SPOT data. Three hypotheses were tested: (a) Classification of land cover/use from SAR data can be accomplished on a pixel-by-pixel basis with the same overall accuracy as from LANDSAT-TM and SPOT data. (b) Classification accuracy for individual land cover/use classes will differ between sensors. (c) Combining information derived from optical and SAR data into an integrated monitoring system will improve overall and individual land cover/use class accuracies. The study was conducted with three data sets for the Sleeping Bear Dunes test site in the northwestern part of Michigan's lower peninsula, including an October 1982 LANDSAT-TM scene, a June 1989 SPOT scene and C-, L- and P-Band radar data from the Jet Propulsion Laboratory AIRSAR. Reference data were derived from the Michigan Resource Information System (MIRIS) and available color infrared aerial photos. Classification and rectification of data sets were done using ERDAS Image Processing Programs. Classification algorithms included Maximum Likelihood, Mahalanobis Distance, Minimum Spectral Distance, ISODATA, Parallelepiped, and Sequential Cluster Analysis. Classified images were rectified as necessary so that all were at the same scale and oriented north-up. Results were analyzed with contingency tables and percent correctly classified (PCC) and Cohen's Kappa (CK) as accuracy indices, using CSLANT and ImagePro programs developed for this study. Accuracy analyses were based upon a 1.4 by 6.5 km area with its long axis east-west. Reference data for this subscene total 55,770 15 by 15 m pixels with sixteen cover types, including seven level III forest classes, three level III urban classes, two level II range classes, two water classes, one wetland class and one agriculture class. An initial analysis was made without correcting the 1978 MIRIS reference data to the different dates of the TM, SPOT and SAR data sets. In this analysis, the highest overall classification accuracy (PCC) was 87% with the TM data set, with both SPOT and C-Band SAR at 85%, a difference statistically significant at the 0.05 level. When the reference data were corrected for land cover change between 1978 and 1991, classification accuracy with the C-Band SAR data increased to 87%. Classification accuracy differed from sensor to sensor for individual land cover classes. Combining sensors into hypothetical multi-sensor systems resulted in higher accuracies than for any single sensor. Combining LANDSAT-TM and C-Band SAR yielded an overall classification accuracy (PCC) of 92%. The results of this study indicate that C-Band SAR data provide an acceptable substitute for LANDSAT-TM or SPOT data when land cover information is desired for areas where cloud cover obscures the terrain. Even better results can be obtained by integrating TM and C-Band SAR data into a multi-sensor system.

  12. Pixel-based absorption correction for dual-tracer fluorescence imaging of receptor binding potential

    PubMed Central

    Kanick, Stephen C.; Tichauer, Kenneth M.; Gunn, Jason; Samkoe, Kimberley S.; Pogue, Brian W.

    2014-01-01

    Ratiometric approaches to quantifying molecular concentrations have been used for decades in microscopy, but have rarely been exploited in vivo until recently. One dual-tracer approach can utilize an untargeted reference tracer to account for non-specific uptake of a receptor-targeted tracer, and ultimately estimate receptor binding potential quantitatively. However, interpretation of the relative dynamic distribution kinetics is confounded by differences in local tissue absorption at the wavelengths used for each tracer. This study simulated the influence of absorption on fluorescence emission intensity and depth sensitivity at typical near-infrared fluorophore wavelength bands near 700 and 800 nm in mouse skin in order to correct for these tissue optical differences in signal detection. Changes in blood volume [1-3%] and hemoglobin oxygen saturation [0-100%] were demonstrated to introduce substantial distortions to receptor binding estimates (error > 30%), whereas sampled depth was relatively insensitive to wavelength (error < 6%). In response, a pixel-by-pixel normalization of tracer inputs immediately post-injection was found to account for spatial heterogeneities in local absorption properties. Application of the pixel-based normalization method to an in vivo imaging study demonstrated significant improvement, as compared with a reference tissue normalization approach. PMID:25360349
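
    A minimal sketch of a pixel-by-pixel normalization of the two tracer channels by their early post-injection images, in the spirit of the correction described above. The arrays are synthetic placeholders, and the final ratio is only a simplified proxy for the full reference-tissue binding-potential model.

```python
# Pixel-wise normalization of targeted and reference tracer stacks (placeholder data).
import numpy as np

rng = np.random.default_rng(4)
targeted = rng.uniform(1.0, 2.0, size=(30, 64, 64))    # time x rows x cols (e.g. 800 nm)
reference = rng.uniform(1.0, 2.0, size=(30, 64, 64))   # time x rows x cols (e.g. 700 nm)

# Early-time-point images capture local absorption/delivery differences, so
# dividing each pixel's time course by its own early value suppresses spatial
# heterogeneity in optical properties before forming the tracer ratio.
targeted_norm = targeted / targeted[0]
reference_norm = reference / reference[0]
binding_ratio = targeted_norm / reference_norm          # simplified proxy, not the full model
print(binding_ratio.shape)
```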

  13. Preferred color correction for digital LCD TVs

    NASA Astrophysics Data System (ADS)

    Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho

    2009-01-01

    Instead of colorimetric color reproduction, preferred color correction is applied in digital TVs to improve subjective image quality. The first step of preferred color correction is to survey the preferred color coordinates of memory colors. This can be achieved by off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may result in undesirable contours on the boundaries between corrected and uncorrected areas. For digital TV applications, the process of extraction and correction must be applied to every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently, and undesirable contours on the boundaries of correction are minimized. The proposed method shifts the coordinates of memory color pixels towards the target color coordinates, with the amount of correction determined from the averaged coordinates of the extracted pixels. The proposed method maintains the relative color differences within memory color areas. Performance of the proposed method was evaluated using paired comparison. Results of the experiments indicate that the proposed method can reproduce images that viewers find perceptually pleasing.
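
    A minimal sketch of a chroma/hue shift toward a preferred memory color in an LCH-like space, assuming scikit-image for the RGB/Lab conversion. The target coordinates, correction strength and mask are illustrative assumptions, and hue wrap-around handling is omitted for brevity; this is not the paper's exact correction rule.

```python
# Chroma/hue shift of masked (e.g. skin-tone) pixels toward a target in LCH (illustrative).
import numpy as np
from skimage import color

def correct_toward_target(rgb, mask, target_c, target_h, strength=0.3):
    lab = color.rgb2lab(rgb)
    a, b = lab[..., 1], lab[..., 2]
    C = np.hypot(a, b)                       # chroma
    H = np.arctan2(b, a)                     # hue angle in radians
    # shift chroma and hue of the masked pixels toward the target (no hue wrap handling)
    C = np.where(mask, C + strength * (target_c - C), C)
    H = np.where(mask, H + strength * (target_h - H), H)
    lab[..., 1] = C * np.cos(H)
    lab[..., 2] = C * np.sin(H)
    return color.lab2rgb(lab)

frame = np.random.rand(4, 4, 3)              # placeholder video frame
skin_mask = np.zeros((4, 4), dtype=bool); skin_mask[1:3, 1:3] = True
corrected = correct_toward_target(frame, skin_mask, target_c=30.0, target_h=0.6)
```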

  14. Impact of atmospheric correction and image filtering on hyperspectral classification of tree species using support vector machine

    NASA Astrophysics Data System (ADS)

    Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko

    2015-01-01

    Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and in tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often applied, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of pixel-level tree species classification from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy for six species classes is about 75%.

  15. Training set size, scale, and features in Geographic Object-Based Image Analysis of very high resolution unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue

    2015-04-01

    Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA introduces some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment analyzing the relationship between scale and training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (the proportion of segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar pattern was observed for pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large when classification is performed with a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean object area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.

  16. Mapping and monitoring changes in vegetation communities of Jasper Ridge, CA, using spectral fractions derived from AVIRIS images

    NASA Technical Reports Server (NTRS)

    Sabol, Donald E., Jr.; Roberts, Dar A.; Adams, John B.; Smith, Milton O.

    1993-01-01

    An important application of remote sensing is to map and monitor changes over large areas of the land surface. This is particularly significant given the current interest in monitoring vegetation communities. Most traditional methods for mapping different types of plant communities are based upon statistical classification techniques (e.g., parallelepiped, nearest-neighbor, etc.) applied to uncalibrated multispectral data. Classes from these techniques are typically difficult to interpret (particularly for a field ecologist/botanist). Also, classes derived for one image can be very different from those derived from another image of the same area, making interpretation of observed temporal changes nearly impossible. More recently, neural networks have been applied to classification. Neural network classification, based upon spectral matching, is weak in dealing with spectral mixtures (a condition prevalent in images of natural surfaces). Another approach to mapping vegetation communities is based on spectral mixture analysis, which can provide a consistent framework for image interpretation. Roberts et al. (1990) mapped vegetation using the band residuals from a simple mixing model (the same spectral endmembers applied to all image pixels). Sabol et al. (1992b) and Roberts et al. (1992) used different methods to apply the most appropriate spectral endmembers to each image pixel, thereby allowing mapping of vegetation based upon the different endmember spectra. In this paper, we describe a new approach to classification of vegetation communities based upon the spectral fractions derived from spectral mixture analysis. This approach was applied to three 1992 AVIRIS images of Jasper Ridge, California, to observe seasonal changes in surface composition.
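
    A minimal sketch of linear spectral mixture analysis: per-pixel endmember fractions estimated by non-negative least squares. The endmember spectra and pixel spectrum are synthetic placeholders, not AVIRIS data, and the sum-to-one renormalization is an illustrative choice.

```python
# Linear spectral unmixing of a single pixel via non-negative least squares (placeholder data).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n_bands = 50
# columns = endmember spectra (e.g. green vegetation, dry grass, soil, shade)
E = np.abs(rng.normal(size=(n_bands, 4)))
true_fractions = np.array([0.5, 0.2, 0.2, 0.1])
pixel = E @ true_fractions + 0.01 * rng.normal(size=n_bands)

fractions, residual = nnls(E, pixel)
fractions /= fractions.sum()            # optional sum-to-one renormalization
print("estimated fractions:", np.round(fractions, 3), "residual:", residual)
```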

  17. Fusion of shallow and deep features for classification of high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Gao, Lang; Tian, Tian; Sun, Xiao; Li, Hang

    2018-02-01

    Effective spectral and spatial pixel description plays a significant role in the classification of high-resolution remote sensing images. Current approaches to pixel-based feature extraction are of two main kinds: one includes the widely used principal component analysis (PCA) and gray-level co-occurrence matrix (GLCM) as representatives of shallow spectral and shape features, and the other refers to deep learning-based methods which employ deep neural networks and have greatly improved classification accuracy. However, the former traditional features are insufficient to depict the complex distributions of high-resolution images, while the deep features demand plenty of samples to train the network; otherwise, overfitting easily occurs if only limited samples are involved in the training. In view of the above, we propose a GLCM-based convolutional neural network (CNN) approach to extract features and implement classification for high-resolution remote sensing images. The use of the GLCM is able to represent the original images while eliminating redundant information and undesired noise. Meanwhile, taking shallow features as the input of the deep network contributes to better guidance and interpretability. In consideration of the limited number of samples, strategies such as L2 regularization and dropout are used to prevent overfitting. A fine-tuning strategy is also used in our study to reduce training time and further enhance the generalization performance of the network. Experiments with popular data sets such as the PaviaU data validate that our proposed method leads to a performance improvement compared with the individual approaches involved.
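
    A minimal sketch of GLCM texture-feature extraction, assuming scikit-image (newer releases name the functions graycomatrix/graycoprops; older releases spell them greycomatrix/greycoprops). The patch, distances, angles and chosen properties are illustrative assumptions.

```python
# GLCM texture features for one image patch (placeholder data).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(6)
patch = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)   # placeholder image patch

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = np.hstack([graycoprops(glcm, prop).ravel()
                      for prop in ("contrast", "homogeneity", "correlation", "energy")])
print(features.shape)     # such vectors could then be fed to the network / classifier
```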

  18. A new Fourier transform based CBIR scheme for mammographic mass classification: a preliminary invariance assessment

    NASA Astrophysics Data System (ADS)

    Gundreddy, Rohith Reddy; Tan, Maxine; Qui, Yuchen; Zheng, Bin

    2015-03-01

    The purpose of this study is to develop and test a new content-based image retrieval (CBIR) scheme that achieves higher reproducibility when implemented in an interactive computer-aided diagnosis (CAD) system without significantly reducing lesion classification performance. This is a new Fourier transform based CBIR algorithm that determines the similarity of two regions of interest (ROIs) based on the difference of the average regional pixel value distributions in the two Fourier-transform-mapped images under comparison. A reference image database involving 227 ROIs depicting verified soft-tissue breast lesions was used. For each testing ROI, the queried lesion center was systematically shifted from 10 to 50 pixels to simulate inter-user variation in querying a suspicious lesion center when using an interactive CAD system. The lesion classification performance and reproducibility as the queried lesion center shifted were assessed and compared among three CBIR schemes based on the Fourier transform, mutual information and Pearson correlation. Each CBIR scheme retrieved the 10 most similar reference ROIs and computed a likelihood score of the queried ROI depicting a malignant lesion. The experimental results showed that the three CBIR schemes yielded very comparable lesion classification performance as measured by the areas under ROC curves, with p-values greater than 0.498. However, the CBIR scheme using the Fourier transform yielded the highest invariance to both queried lesion center shift and lesion size change. This study demonstrated the feasibility of improving the robustness of interactive CAD systems by adding a new Fourier transform based image feature to CBIR schemes.

  19. The performance improvement of automatic classification among obstructive lung diseases on the basis of the features of shape analysis, in addition to texture analysis at HRCT

    NASA Astrophysics Data System (ADS)

    Lee, Youngjoo; Kim, Namkug; Seo, Joon Beom; Lee, JuneGoo; Kang, Suk Ho

    2007-03-01

    In this paper, we propose novel shape features to improve the classification performance for differentiating obstructive lung diseases, based on HRCT (High Resolution Computerized Tomography) images. The images were selected from HRCT scans obtained from 82 subjects. For each image, two experienced radiologists selected rectangular ROIs of various sizes (16x16, 32x32, and 64x64 pixels) representing each disease or normal lung parenchyma. Besides thirteen textural features, we employed seven additional shape features: cluster shape features and top-hat transform features. To evaluate the contribution of shape features to the differentiation of obstructive lung diseases, several experiments were conducted with two different types of classifiers and various ROI sizes. For automated classification, a Bayesian classifier and a support vector machine (SVM) were implemented. To assess the performance of the system, five-fold cross-validation was used. In comparison to employing only textural features, adding shape features yields a significant enhancement of overall sensitivity (5.9, 5.4, and 4.4% for the Bayesian classifier and 9.0, 7.3, and 5.3% for the SVM, in the order of ROI sizes 16x16, 32x32, and 64x64 pixels, respectively; t-test, p<0.01). Moreover, this enhancement was largely due to the improvement in the class-specific sensitivity for mild centrilobular emphysema and bronchiolitis obliterans, which are the hardest for radiologists to differentiate. According to these experimental results, adding shape features to conventional texture features is very useful for improving the classification performance for obstructive lung diseases with both Bayesian and SVM classifiers.

  20. Skin colour typology and suntanning pathways.

    PubMed

    Chardon, A; Cretois, I; Hourseau, C

    1991-08-01

    Synopsis: The evaluation of sun-product efficacy, with laboratory solar simulators or in actual sun, involves clinical and subjective assessment of the various skin responses to the wavelengths constitutive of solar light. These photobiological responses vary according to skin type and particularly to basic skin melanin content, i.e. with skin colour. The instrumental measurement of live skin colour has now become easy to perform, fast and reliable. Based on the standard CIE-L*a*b* colour system and correlated with the human eye, this technique was used to define the skin colour domain of the Caucasian population, to propose a skin colour classification, and then to objectively follow, over a three-week period, the dynamics and kinetics of tanning induced by UVB, UVA and UVB +/- A multi-exposures in the three skin categories. The specific directions in the three-dimensional L*a*b* colour space of the tanning components, i.e. erythema, immediate pigmentation and constitutional melanization, as well as the resulting tanning pathways, were analysed and defined in the three-dimensional colour space using a vectorial method. The UVB, UVA and UVB +/- A tannings were differentiated by their intensity, their hue and especially their lasting capacity: UVA tanning clearly appeared more lasting than UVB. In addition, the UVA*UVB interaction on tanning intensity was not found to be significant. With the skin colour classification and the tanning models, this comprehensive study supplies a basic tool for the colorimetric interpretation of the skin phenomena involved, provided that this interpretation is always considered in the three dimensions of the colour space. It also suggests some useful practical applications for sun-product formulation and evaluation.

  1. Modification of the Fitzpatrick system of skin phototype classification for the Indian population, and its correlation with narrowband diffuse reflectance spectrophotometry.

    PubMed

    Sharma, V K; Gupta, V; Jangid, B L; Pathak, M

    2018-04-01

    The Fitzpatrick classification for skin phototyping is widely used, but its usefulness in dark-skinned populations has been questioned by some researchers. Recently, skin colour measurement has been proposed as an objective means of phototyping. The aims were to modify the Fitzpatrick system of skin phototyping for the Indian population and to study its correlation with skin colour measured using narrowband diffuse reflectance spectrophotometry. Answer choices for three items (eye colour, hair colour, colour of unexposed skin) out of 10 in the original Fitzpatrick questionnaire were modified, followed by self-administration of the original and the modified Fitzpatrick questionnaires by 70 healthy Indian volunteers. Skin colour (melanin and erythema indices) was measured from two photoexposed and two photoprotected sites using a narrowband reflectance spectrophotometer. The mean ± SD scores for the original and modified Fitzpatrick questionnaires were 25.40 ± 4.49 and 23.89 ± 4.82, respectively (r = 0.97, P < 0.001). The two items related to tanning habits were deemed irrelevant based on the subjects' responses and were removed from the modified questionnaire. The Melanin Index (MI) of all sites correlated moderately well with both the modified (r = 0.61-0.64, P < 0.001) and original Fitzpatrick questionnaire scores (r = 0.64-0.67, P < 0.001), while the Erythema Index showed poor correlation with both. An MI value of ≥ 42 was found to be the cut-off between skin phototypes I-III and IV, and ≥ 47 between IV and V-VI. Our modification of the Fitzpatrick questionnaire makes it more relevant to the Indian population. Spectrophotometry can be a useful objective tool for skin phototyping. © 2018 British Association of Dermatologists.

  2. Multi-temporal sub-pixel landsat ETM+ classification of isolated wetlands in Cuyahoga County, Ohio, USA

    EPA Science Inventory

    The goal of this project was to determine the utility of subpixel processing of multi-temporal Landsat Enhanced Thematic Mapper Plus (ETM+) data for the detection of isolated wetlands greater than 0.50 acres in Cuyahoga County, located in the Erie Drift Plains ecoregion of northe...

  3. Trophic classification of Tennessee Valley area reservoirs derived from LANDSAT multispectral scanner data. [Alabama, Georgia, Kentucky, Tennessee, and North Carolina

    NASA Technical Reports Server (NTRS)

    Meinert, D. L.; Malone, D. L.; Voss, A. W. (Principal Investigator); Scarpace, F. L.

    1980-01-01

    LANDSAT MSS data from four different dates were extracted from computer tapes using a semiautomated digital data handling and analysis system. Reservoirs were extracted from the surrounding land matrix using a Band 7 density level slice of 3, and descriptive statistics including the mean, variance, and between-band ratios for each of the four bands were calculated. Significant correlations (0.80) were identified between the MSS statistics and many trophic indicators from ground truth water quality data collected at 35 reservoirs in the greater Tennessee Valley region. Regression models were developed which gave significant estimates of each reservoir's trophic state as defined by its trophic state index and, in all four LANDSAT frames, explained at least 85 percent of the variability in the data. To illustrate the spatial variations within reservoirs as well as the relative variations between reservoirs, a table look-up elliptical classification was used in conjunction with each reservoir's trophic state index to classify each reservoir on a pixel-by-pixel basis and produce color-coded thematic representations.

  4. Statistical Properties of Echosignal Obtained from Human Dermis In Vivo

    NASA Astrophysics Data System (ADS)

    Piotrzkowska, Hanna; Litniewski, Jerzy; Nowicki, Andrzej; Szymańska, Elżbieta

    The paper presents the classification of healthy skin and skin lesions (basal cell carcinoma and actinic keratosis) based on the statistical parameters of the envelope of ultrasonic echoes. The envelope was modeled using Rayleigh and non-Rayleigh (K-distribution) statistics. Furthermore, the characteristic parameter of the K-distribution, the effective number of scatterers, was investigated. The attenuation coefficient was also used for skin lesion assessment.
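
    For reference, the Rayleigh model corresponds to fully developed speckle with many scatterers per resolution cell; the K-distribution generalizes it through a shape parameter related to the effective number of scatterers and tends back to the Rayleigh law as that number grows large. A standard form of the Rayleigh envelope density (general textbook relation, not the paper's exact notation) is:

```latex
% Rayleigh probability density of the echo envelope amplitude A;
% sigma^2 scales the mean backscattered energy.
p(A) = \frac{A}{\sigma^{2}} \exp\!\left(-\frac{A^{2}}{2\sigma^{2}}\right), \qquad A \ge 0
```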

  5. Combined empirical mode decomposition and texture features for skin lesion classification using quadratic support vector machine.

    PubMed

    Wahba, Maram A; Ashour, Amira S; Napoleon, Sameh A; Abd Elnaby, Mustafa M; Guo, Yanhui

    2017-12-01

    Basal cell carcinoma is one of the most common malignant skin lesions. Automated lesion identification and classification using image processing techniques is highly desirable to reduce diagnosis errors. In this study, a novel technique is applied to classify skin lesion images into two classes, namely the malignant basal cell carcinoma and the benign nevus. A hybrid combination of bi-dimensional empirical mode decomposition and gray-level difference method features is proposed, applied after hair removal. The combined features are then classified using a quadratic support vector machine (Q-SVM). The proposed system achieved an outstanding performance of 100% accuracy, sensitivity and specificity compared with other support vector machine procedures as well as with different extracted features. Basal cell carcinoma is effectively classified using the Q-SVM with the proposed combined features.
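
    A minimal sketch of a quadratic-kernel SVM classifier (reading "Q-SVM" as an SVM with a degree-2 polynomial kernel, which is an assumption). The features and labels are synthetic placeholders for the combined BEMD and texture descriptors.

```python
# Degree-2 polynomial-kernel SVM for two-class lesion classification (placeholder data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 24))          # combined feature vectors (placeholder)
y = rng.integers(0, 2, size=200)        # 0 = benign nevus, 1 = basal cell carcinoma

qsvm = SVC(kernel="poly", degree=2, C=1.0, gamma="scale")
print("cross-validated accuracy:", cross_val_score(qsvm, X, y, cv=5).mean())
```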

  6. Classification System for Individualized Treatment of Adult Buried Penis Syndrome.

    PubMed

    Tausch, Timothy J; Tachibana, Isamu; Siegel, Jordan A; Hoxworth, Ronald; Scott, Jeremy M; Morey, Allen F

    2016-09-01

    The authors present their experience with reconstructive strategies for men with various manifestations of adult buried penis syndrome, and propose a comprehensive anatomical classification system and treatment algorithm based on pathologic changes in the penile skin and involvement of neighboring abdominal and/or scrotal components. The authors reviewed all patients who underwent reconstruction of adult buried penis syndrome at their referral center between 2007 and 2015. Patients were stratified by location and severity of involved anatomical components. Procedures performed, demographics, comorbidities, and clinical outcomes were reviewed. Fifty-six patients underwent reconstruction of buried penis at the authors' center from 2007 to 2015. All procedures began with a ventral penile release. If the uncovered penile skin was determined to be viable, a phalloplasty was performed by anchoring penoscrotal skin to the proximal shaft, and the ventral shaft skin defect was closed with scrotal flaps. In more complex patients with circumferential nonviable penile skin, the penile skin was completely excised and replaced with a split-thickness skin graft. Complex patients with severe abdominal lipodystrophy required adjacent tissue transfer. For cases of genital lymphedema, the procedure involved complete excision of the lymphedematous tissue, and primary closure with or without a split-thickness skin graft, also often involving the scrotum. The authors' overall success rate was 88 percent (49 of 56), defined as resolution of symptoms without the need for additional procedures. Successful correction of adult buried penis often necessitates an interdisciplinary, multimodal approach. Therapeutic, IV.

  7. Per-point and per-field contextual classification of multipolarization and multiple incidence angle aircraft L-band radar data

    NASA Technical Reports Server (NTRS)

    Hoffer, Roger M.; Hussin, Yousif Ali

    1989-01-01

    Multipolarized aircraft L-band radar data are classified using two different image classification algorithms: (1) a per-point classifier, and (2) a contextual, or per-field, classifier. Due to the distinct variations in radar backscatter as a function of incidence angle, the data are stratified into three incidence-angle groupings, and training and test data are defined for each stratum. A low-pass digital mean filter with varied window sizes (i.e., 3x3, 5x5, and 7x7 pixels) is applied to the data prior to classification. A predominantly forested area in northern Florida was the study site. The results obtained using these image classifiers are then presented and discussed.
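
    A minimal sketch of the low-pass mean filtering applied before per-pixel classification, assuming SciPy. The radar band is a synthetic speckle-like placeholder and the window sizes follow those listed above.

```python
# Low-pass mean filtering of a radar band at several window sizes (placeholder data).
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(8)
band = rng.gamma(shape=1.0, scale=1.0, size=(256, 256))   # speckle-like backscatter (placeholder)

for window in (3, 5, 7):                    # the window sizes compared in the study
    smoothed = uniform_filter(band, size=window)
    print(window, float(smoothed.var()))    # variance drops as the window grows
```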

  8. Memory color assisted illuminant estimation through pixel clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Heng; Quan, Shuxue

    2010-01-01

    The under-constrained nature of illuminant estimation means that, in order to resolve the problem, certain assumptions are needed, such as the gray world theory. Including more constraints in this process may help exploit the useful information in an image and improve the accuracy of the estimated illuminant, provided that the constraints hold. Based on the observation that most personal images contain one or more of the following categories: neutral objects, human beings, sky, and plants, we propose a method for illuminant estimation through the clustering of pixels of gray and three dominant memory colors: skin tone, sky blue, and foliage green. Analysis shows that samples of the above colors cluster around small areas under different illuminants and that their characteristics can be used to effectively detect pixels falling into each of the categories. The algorithm requires knowledge of the spectral sensitivity response of the camera, and a spectral database consisting of the CIE standard illuminants and a reflectance or radiance database of samples of the above colors.

  9. Regional shape-based feature space for segmenting biomedical images using neural networks

    NASA Astrophysics Data System (ADS)

    Sundaramoorthy, Gopal; Hoford, John D.; Hoffman, Eric A.

    1993-07-01

    In biomedical images, structures of interest, particularly soft tissue structures such as the heart, airways, and bronchial and arterial trees, often have grey-scale and textural characteristics similar to other structures in the image, making them difficult to segment using only grey-scale and texture information. However, these objects can be visually recognized by their unique shapes and sizes. In this paper we discuss what we believe to be a novel, simple scheme for extracting features based on regional shapes. To test the effectiveness of these features for image segmentation (classification), we use an artificial neural network and a statistical cluster analysis technique. The proposed shape-based feature extraction algorithm computes regional shape vectors (RSVs) for all pixels that meet a certain threshold criterion. The distance from each such pixel to a boundary is computed in 8 directions (or in 26 directions for a 3-D image). Together, these 8 (or 26) values represent the pixel's (or voxel's) RSV. All RSVs from an image are used to train a multi-layered perceptron neural network, which uses these features to 'learn' a suitable classification strategy. To clearly distinguish the desired object from other objects within an image, several examples from inside and outside the desired object are used for training. Several examples are presented to illustrate the strengths and weaknesses of our algorithm. Both synthetic and actual biomedical images are considered. Future extensions to this algorithm are also discussed.
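
    A minimal sketch of the regional shape vector (RSV) idea for the 2-D, 8-direction case: for a pixel inside a thresholded binary object, walk outward in each direction and count the steps until the boundary. The blob, pixel and step-count convention are illustrative assumptions, not the paper's exact definition.

```python
# 8-direction regional shape vector for one pixel of a binary object (illustrative).
import numpy as np

DIRECTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1),
              (-1, -1), (-1, 1), (1, -1), (1, 1)]

def regional_shape_vector(mask, r, c):
    """mask: 2-D boolean array (True = inside object); (r, c) must lie inside it."""
    rsv = []
    for dr, dc in DIRECTIONS:
        steps, rr, cc = 0, r, c
        # walk until the next step would leave the object or the image
        while (0 <= rr + dr < mask.shape[0] and 0 <= cc + dc < mask.shape[1]
               and mask[rr + dr, cc + dc]):
            rr += dr; cc += dc; steps += 1
        rsv.append(steps)
    return np.array(rsv)

blob = np.zeros((9, 9), dtype=bool); blob[2:7, 2:7] = True
print(regional_shape_vector(blob, 4, 4))   # symmetric distances for a centred pixel
```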

  10. Superpixel-based classification of gastric chromoendoscopy images

    NASA Astrophysics Data System (ADS)

    Boschetto, Davide; Grisan, Enrico

    2017-03-01

    Chromoendoscopy (CH) is a gastroenterology imaging modality that involves staining tissues with methylene blue, which reacts with the internal walls of the gastrointestinal tract, improving the visual contrast of mucosal surfaces and thus enhancing a doctor's ability to screen for precancerous lesions or early cancer. This technique helps identify areas that can be targeted for biopsy or treatment; in this work we focus on gastric cancer detection. Gastric chromoendoscopy for cancer detection has several taxonomies available, one of which classifies CH images into three classes (normal, metaplasia, dysplasia) based on color, shape and regularity of pit patterns. Computer-assisted diagnosis is desirable to help improve the reliability of tissue classification and abnormality detection. However, traditional computer vision methodologies, mainly segmentation, do not translate well to the specific visual characteristics of a gastroenterology imaging scenario. We propose exploiting a first unsupervised segmentation via superpixels, which group pixels into perceptually meaningful atomic regions used to replace the rigid structure of the pixel grid. For each superpixel, a set of features is extracted and then fed to a random forest based classifier, which computes a model used to predict the class of each superpixel. The average general accuracy of our model is 92.05% in the pixel domain (86.62% in the superpixel domain), while detection accuracies for the normal and abnormal classes are 85.71% and 95%, respectively. Finally, the class of the whole image can be predicted through a majority vote over the superpixels' predicted classes.
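
    A minimal sketch of superpixel-based classification: an oversegmentation into superpixels, simple per-superpixel colour statistics, and a random forest. The SLIC algorithm, the feature set, and the image/label arrays are all illustrative assumptions; the paper's actual superpixel method and features may differ.

```python
# Superpixel segmentation + per-superpixel features + random forest (placeholder data).
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(9)
image = rng.random((128, 128, 3))                   # placeholder endoscopy frame
segments = slic(image, n_segments=200, compactness=10, start_label=0)

# one feature vector per superpixel: mean and std of each colour channel
feats = np.array([np.hstack([image[segments == s].mean(axis=0),
                             image[segments == s].std(axis=0)])
                  for s in np.unique(segments)])
labels = rng.integers(0, 3, size=len(feats))        # normal / metaplasia / dysplasia (placeholder)

clf = RandomForestClassifier(n_estimators=200).fit(feats, labels)
print(clf.predict(feats[:5]))
```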

  11. An Investigation of Automatic Change Detection for Topographic Map Updating

    NASA Astrophysics Data System (ADS)

    Duncan, P.; Smit, J.

    2012-08-01

    Changes to the landscape are constantly occurring, and it is essential for geospatial and mapping organisations that these changes are regularly detected and captured so that map databases can be updated to reflect the current status of the landscape. The Chief Directorate of National Geospatial Information (CD: NGI), South Africa's national mapping agency, currently relies on manual methods of detecting and capturing these changes. These manual methods are time consuming and labour intensive, and rely on the skills and interpretation of the operator. It is therefore necessary to move towards more automated methods in the production process at CD: NGI. The aim of this research is to investigate a methodology for automatic or semi-automatic change detection for the purpose of updating topographic databases. The method investigated detects changes through image classification as well as spatial analysis, and is focused on urban landscapes. The major data inputs to this study are high-resolution aerial imagery and existing topographic vector data. Initial results indicate that traditional pixel-based image classification approaches are unsatisfactory for large-scale land-use mapping and that object-oriented approaches hold more promise. Even with object-oriented image classification, generalization of techniques at a broad scale has provided inconsistent results. A solution may lie in a hybrid approach of pixel-based and object-oriented techniques.

  12. Improving Spectral Image Classification through Band-Ratio Optimization and Pixel Clustering

    NASA Astrophysics Data System (ADS)

    O'Neill, M.; Burt, C.; McKenna, I.; Kimblin, C.

    2017-12-01

    The Underground Nuclear Explosion Signatures Experiment (UNESE) seeks to characterize non-prompt observables from underground nuclear explosions (UNE). As part of this effort, we evaluated the ability of DigitalGlobe's WorldView-3 (WV3) to detect and map UNE signatures. WV3 is the current state-of-the-art commercial multispectral imaging satellite; however, it has relatively limited spectral and spatial resolutions. These limitations impede image classifiers from detecting targets that are spatially small and lack distinct spectral features. In order to improve classification results, we developed custom algorithms to reduce false positive rates while increasing true positive rates via a band-ratio optimization and pixel clustering front-end. The clusters resulting from these algorithms were processed with standard spectral image classifiers such as Mixture-Tuned Matched Filter (MTMF) and Adaptive Coherence Estimator (ACE). WV3 and AVIRIS data of Cuprite, Nevada, were used as a validation data set. These data were processed with a standard classification approach using MTMF and ACE algorithms. They were also processed using the custom front-end prior to the standard approach. A comparison of the results shows that the custom front-end significantly increases the true positive rate and decreases the false positive rate. This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy. DOE/NV/25946-3283.

  13. A nanofiber based artificial electronic skin with high pressure sensitivity and 3D conformability

    NASA Astrophysics Data System (ADS)

    Zhong, Weibin; Liu, Qiongzhen; Wu, Yongzhi; Wang, Yuedan; Qing, Xing; Li, Mufang; Liu, Ke; Wang, Wenwen; Wang, Dong

    2016-06-01

    Pressure sensors with 3D conformability are highly desirable components for artificial electronic skin or e-textiles that can mimic natural skin, especially for application in real-time monitoring of human physiological signals. Here, a nanofiber based electronic skin with ultra-high pressure sensitivity and 3D conformability is designed and built by interlocking two elastic patterned nanofibrous membranes. The patterned membrane is facilely prepared by casting conductive nanofiber ink into a silicon mould to form an array of semi-spheroid-like protuberances. The protuberances composed of intertwined elastic POE nanofibers and PPy@PVA-co-PE nanofibers afford a tunable effective elastic modulus that is capable of capturing varied strains and stresses, thereby contributing to a high sensitivity for pressure sensing. This electronic skin-like sensor demonstrates an ultra-high sensitivity (1.24 kPa-1) below 150 Pa with a detection limit as low as about 1.3 Pa. The pixelated sensor array and a RGB-LED light are then assembled into a circuit and show a feasibility for visual detection of spatial pressure. Furthermore, a nanofiber based proof-of-concept wireless pressure sensor with a bluetooth module as a signal transmitter is proposed and has demonstrated great promise for wireless monitoring of human physiological signals, indicating a potential for large scale wearable electronic devices or e-skin. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr02678h

  14. Classification and recognition of texture collagen obtaining by multiphoton microscope with neural network analysis

    NASA Astrophysics Data System (ADS)

    Wu, Shulian; Peng, Yuanyuan; Hu, Liangjun; Zhang, Xiaoman; Li, Hui

    2016-01-01

    Second harmonic generation microscopy (SHGM) was used to monitor the process of chronological skin aging in vivo. The collagen structures of mouse models of different ages were imaged using SHGM. Texture features (contrast, correlation and entropy) were then extracted and analysed using the grey level co-occurrence matrix. Finally, the Matlab neural network toolbox was used to train on the collagen textures observed at different stages of the aging process, and a simulation of mouse collagen texture was carried out. The results indicated that the classification accuracy reached 85%. The proposed approach effectively detected the target object in the collagen texture images during chronological aging, demonstrating that the neural-network-based analysis tool, combined with this feature extraction method, is feasible for skin classification.
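
    As a rough illustration of the texture measures named above, the following Python sketch computes contrast, correlation and entropy from a grey-level co-occurrence matrix; the 32-level quantisation, the one-pixel offset and the synthetic patch are illustrative assumptions, not settings reported by the authors.

    import numpy as np

    def glcm_features(image, levels=32, dx=1, dy=0):
        """Contrast, correlation and entropy from a single-offset co-occurrence matrix."""
        q = (image.astype(np.float64) / 256.0 * levels).astype(np.int64)  # quantise grey levels
        h, w = q.shape
        a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
        b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
        glcm = np.zeros((levels, levels), dtype=np.float64)
        np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)       # accumulate co-occurrences
        p = glcm / glcm.sum()
        i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
        contrast = np.sum(p * (i - j) ** 2)
        mu_i, mu_j = np.sum(i * p), np.sum(j * p)
        sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
        sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
        correlation = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j + 1e-12)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return contrast, correlation, entropy

    # Example on a synthetic 8-bit texture patch (a real SHGM image would be used instead).
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
    print(glcm_features(patch))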

  15. Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Li, X.; Xiao, W.

    2018-05-01

    Increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because training sites and training samples are inconsistent, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Object-oriented image classification shows great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to fulfil this requirement. First, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Second, we performed a multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain appropriate parameters; using a region-merge algorithm starting from the single-pixel level, the optimal texture segmentation scale for different feature types was confirmed. The segmented objects were then used as classification units to calculate spectral information such as the mean, maximum, minimum, brightness and normalized values. The area, length, tightness and shape rule of each image object (spatial features), and texture features such as the mean, variance and entropy of image objects, were used as classification features of the training samples. Based on reference images and on-the-spot sampling points, typical training samples were selected uniformly and randomly for each type of ground object. The value ranges of the spectral, texture and spatial characteristics of each feature type in each feature layer were used to create the decision tree repository. Finally, with the help of high-resolution reference images, a random-sampling field investigation was conducted, achieving an overall accuracy of 90.31 % with a Kappa coefficient of 0.88. The classification method based on decision tree threshold values and the rule set developed from the repository outperforms the traditional methodology. Our decision tree repository and rule-set based object-oriented classification technique is an effective method for producing comparable and consistent wetland data sets.

  16. The possibilities of improvement in the sensitivity of cancer fluorescence diagnostics by computer image processing

    NASA Astrophysics Data System (ADS)

    Ledwon, Aleksandra; Bieda, Robert; Kawczyk-Krupka, Aleksandra; Polanski, Andrzej; Wojciechowski, Konrad; Latos, Wojciech; Sieron-Stoltny, Karolina; Sieron, Aleksander

    2008-02-01

    Background: Fluorescence diagnostics uses the ability of tissues to fluoresce after exposure to a specific wavelength of light. The change in fluorescence between normal tissue and tissue progressing to cancer makes it possible to see early cancers and precancerous lesions that are often missed under white light. Aim: To improve, by computer image processing, the sensitivity of fluorescence images obtained during examination of skin, oral cavity, vulva and cervix lesions and during endoscopy, cystoscopy and bronchoscopy using Xillix ONCOLIFE. Methods: The image function f(x,y): R^2 -> R^3 was transformed from the original RGB color space into a space in which a vector of 46 values describes every point at given xy-coordinates, f(x,y): R^2 -> R^46. By means of a Fisher discriminant, the attribute vector of each analysed point in the image was reduced according to two defined classes, pathologic areas (foreground) and healthy areas (background). The four highest Fisher coefficients, allowing the greatest separation between pathologic (foreground) and healthy (background) points, were chosen. In this way a new function f(x,y): R^2 -> R^4 was created in which each point (x,y) corresponds to the vector (Y, H, a*, c II). In the second step, a classifier was constructed using Gaussian mixtures and expectation-maximisation. This classifier determines the probability that a selected pixel of the analysed image is a pathologically changed point (foreground) or a healthy one (background). The resulting probability map was presented by means of pseudocolors. Results: The image processing techniques improve the sensitivity, quality and sharpness of the original fluorescence images. Conclusion: Computer image processing enables better visualization of suspected areas examined by means of fluorescence diagnostics.
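
    The Fisher-ratio feature ranking followed by a Gaussian-mixture classifier described above can be sketched roughly as below; the synthetic 46-dimensional features, the two mixture components per class and the class priors are illustrative assumptions rather than the authors' configuration.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_ratio(X, y):
        """One-dimensional Fisher ratio for each feature (two classes, labelled 0/1)."""
        m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
        v0, v1 = X[y == 0].var(axis=0), X[y == 1].var(axis=0)
        return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

    rng = np.random.default_rng(1)
    # Synthetic training pixels: 46 candidate features, class 0 = healthy, class 1 = pathologic.
    X = rng.normal(size=(2000, 46))
    y = (rng.random(2000) < 0.4).astype(int)
    X[y == 1, :4] += 2.0                      # make the first four features informative

    # Keep the four features with the highest Fisher ratio.
    top4 = np.argsort(fisher_ratio(X, y))[-4:]
    Xr = X[:, top4]

    # One Gaussian mixture per class, combined through Bayes' rule.
    gmm = {c: GaussianMixture(n_components=2, random_state=0).fit(Xr[y == c]) for c in (0, 1)}
    prior = {c: np.mean(y == c) for c in (0, 1)}

    def posterior_foreground(features):
        """P(pathologic | pixel features) for an (N, 4) feature array."""
        like = {c: np.exp(gmm[c].score_samples(features)) * prior[c] for c in (0, 1)}
        return like[1] / (like[0] + like[1] + 1e-300)

    print(posterior_foreground(Xr[:5]))       # values near 1 indicate suspected pathology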

  17. Efficient detection of wound-bed and peripheral skin with statistical colour models.

    PubMed

    Veredas, Francisco J; Mesa, Héctor; Morente, Laura

    2015-04-01

    A pressure ulcer is a clinical pathology of localised damage to the skin and underlying tissue caused by pressure, shear or friction. Reliable diagnosis supported by precise wound evaluation is crucial for successful treatment decisions. This paper presents a computer-vision approach to wound-area detection based on statistical colour models. Starting with a training set consisting of 113 real wound images, colour histogram models are created for four different tissue types. Back-projections of colour pixels onto those histogram models are used, from a Bayesian perspective, to estimate the posterior probability that a pixel belongs to any of those tissue classes. Performance measures obtained from contingency tables based on a gold standard of segmented images supplied by experts have been used for model selection. The resulting fitted model has been validated on a set consisting of 322 wound images manually segmented and labelled by expert clinicians. The final fitted segmentation model shows robustness and gives high mean performance rates [AUC: .9426 (SD .0563); accuracy: .8777 (SD .0799); F-score: .7389 (SD .1550); Cohen's kappa: .6585 (SD .1787)] when segmenting significant wound areas that include healing tissues.
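
    The histogram back-projection step can be illustrated with a minimal Python sketch: per-class colour histograms act as likelihoods and Bayes' rule turns them into per-pixel posterior probabilities. The 8-bins-per-channel quantisation and the synthetic pixels are assumptions for illustration only.

    import numpy as np

    BINS = 8   # per-channel quantisation (assumption, not the paper's setting)

    def colour_histogram(pixels):
        """Normalised 3-D RGB histogram used as a class-conditional likelihood."""
        idx = (pixels // (256 // BINS)).astype(int)
        hist = np.zeros((BINS, BINS, BINS))
        np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
        return hist / hist.sum()

    def posterior_map(image, hists, priors):
        """Posterior probability of each tissue class for every pixel (back-projection + Bayes)."""
        idx = (image.reshape(-1, 3) // (256 // BINS)).astype(int)
        like = np.stack([h[idx[:, 0], idx[:, 1], idx[:, 2]] for h in hists], axis=1)
        joint = like * np.asarray(priors)
        post = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        return post.reshape(image.shape[0], image.shape[1], len(hists))

    # Toy example with two tissue classes drawn from different colour ranges.
    rng = np.random.default_rng(0)
    wound = rng.integers(120, 256, size=(5000, 3))     # brighter, reddish training pixels
    skin = rng.integers(0, 140, size=(5000, 3))        # darker training pixels
    hists = [colour_histogram(wound), colour_histogram(skin)]
    image = rng.integers(0, 256, size=(64, 64, 3))
    post = posterior_map(image, hists, priors=[0.5, 0.5])
    print(post.shape, float(post[..., 0].mean()))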

  18. Techniques for delineation and portrayal of land cover types using ERTS-1 data. [Pennsylvania, Montana, and Texas]

    NASA Technical Reports Server (NTRS)

    Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator); Wilson, A. D.

    1974-01-01

    The author has identified the following significant results. ERTS data were used to map land cover in agricultural areas, although in some parts of Pennsylvania, with small irregular fields, many of the pixels overlap field boundaries and cause difficulties in classification. Various techniques and devices were used to display the results of these land cover analyses. The most promising approach would be a user-interactive color monitor interfaced with a large computer, so that classification results could be displayed on the CRT and then output to a hard copy device.

  19. Local neighborhood transition probability estimation and its use in contextual classification

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of incorporating spatial or contextual information into classifications is considered. A simple model that describes the spatial dependencies between the neighboring pixels with a single parameter, Theta, is presented. Expressions are derived for updating the a posteriori probabilities of the states of nature of the pattern under consideration using information from the neighboring patterns, both for spatially uniform context and for Markov dependencies in terms of Theta. Techniques for obtaining the optimal value of the parameter Theta as a maximum likelihood estimate from the local neighborhood of the pattern under consideration are developed.
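
    The exact expressions derived in the paper are not reproduced here; the sketch below only illustrates the general idea of re-weighting per-pixel posteriors by neighbourhood information through a single context parameter theta, using a simple 4-neighbour average as the context term.

    import numpy as np

    def contextual_update(post, theta):
        """Illustrative contextual re-weighting of per-pixel class posteriors.

        post  : (H, W, K) array of non-contextual posterior probabilities.
        theta : weight given to the neighbourhood context (0 = ignore context).
        """
        context = np.zeros_like(post)
        # Average posterior of the 4-connected neighbours (edges simply receive less support).
        context[1:, :, :] += post[:-1, :, :]
        context[:-1, :, :] += post[1:, :, :]
        context[:, 1:, :] += post[:, :-1, :]
        context[:, :-1, :] += post[:, 1:, :]
        context /= 4.0
        updated = post * ((1.0 - theta) + theta * context)
        return updated / updated.sum(axis=2, keepdims=True)

    rng = np.random.default_rng(0)
    p = rng.dirichlet(np.ones(3), size=(32, 32))   # random 3-class posteriors per pixel
    print(contextual_update(p, theta=0.6).shape)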

  20. Changes in nuclear morphology and chromatin texture of basal keratinocytes in melasma.

    PubMed

    Brianezi, G; Handel, A C; Schmitt, J V; Miot, L D B; Miot, H A

    2015-04-01

    The pathogenesis of melasma and the role of keratinocytes in disease development and maintenance are not completely understood. Dermal abnormalities, the expression of inflammatory mediators and growth factors, and the epithelial expression of melanocortin and sex hormone receptors suggest that not only melanocytes but the entire epidermal melanin unit is involved in melasma physiopathology. Our aim was to compare nuclear morphological features and chromatin texture between basal keratinocytes in facial melasma and adjacent normal skin. We took facial skin biopsies (2 mm; melasma and adjacent normal skin) from women, which were processed for haematoxylin and eosin staining. Thirty non-overlapping basal keratinocyte nuclei were segmented, and descriptors of area, largest diameter, perimeter, circularity, pixel intensity, profilometric index (Ra) and fractal dimension were extracted using ImageJ software. Basal keratinocyte nuclei from facial melasma epidermis displayed larger size, more irregular shape, hyperpigmentation and greater chromatin heterogeneity (by fractal dimension) than those from perilesional skin. Basal keratinocytes from facial melasma display changes in nuclear form and chromatin texture, suggesting that the phenotype differences between melasma and adjacent facial skin can result from alterations of the complete epidermal melanin unit, not just hypertrophic melanocytes. © 2014 European Academy of Dermatology and Venereology.

  1. Genetic skin disorders.

    PubMed

    Moss, C

    2000-11-01

    Neonatologists do not require a detailed knowledge of all genetic skin disorders but need to recognize one if they see it. The unique accessibility of the skin makes it possible to observe the physical signs and deduce the child's immediate needs from first principles. The morphological classification given here will help the nondermatologist establish a clinical diagnosis. Tremendous advances over the last 10 years in understanding the molecular basis of skin disease make it possible, in many cases, to confirm the diagnosis and to counsel the family accurately. Copyright 2000 Harcourt Publishers Ltd.

  2. A Comparison of Local Variance, Fractal Dimension, and Moran's I as Aids to Multispectral Image Classification

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    2004-01-01

    The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
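
    A rough sketch of how a local spatial-autocorrelation surface, here a simple local Moran's I over the 4-neighbourhood, can be computed and stacked as an additional layer prior to a supervised classification; the neighbourhood scheme, band count and random data are illustrative assumptions.

    import numpy as np

    def local_morans_i(band):
        """Local Moran's I with 4-connected neighbours (toroidal edges for brevity)."""
        z = (band - band.mean()) / (band.std() + 1e-12)
        lag = np.zeros_like(z)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            lag += np.roll(np.roll(z, dr, axis=0), dc, axis=1)
        return z * (lag / 4.0)

    rng = np.random.default_rng(0)
    panchromatic = rng.random((100, 100))            # stand-in for a panchromatic band
    morans_layer = local_morans_i(panchromatic)

    # Stack the autocorrelation surface as an extra band before a supervised classification.
    multispectral = rng.random((100, 100, 6))
    stacked = np.concatenate([multispectral, morans_layer[..., None]], axis=-1)
    print(stacked.shape)                             # (100, 100, 7)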

  3. Analysing land cover and land use change in the Matobo National Park and surroundings in Zimbabwe

    NASA Astrophysics Data System (ADS)

    Scharsich, Valeska; Mtata, Kupakwashe; Hauhs, Michael; Lange, Holger; Bogner, Christina

    2016-04-01

    Natural forests are threatened worldwide; therefore their protection in National Parks is essential. Here, we investigate how this protection status affects the land cover. To answer this question, we analyse the surface reflectance of three Landsat images of Matobo National Park and its surroundings in Zimbabwe from 1989, 1998 and 2014 to detect changes in land cover in this region. To account for the rolling countryside and the resulting prominent shadows, a topographical correction of the surface reflectance was required. To infer land cover changes, ground data are needed not only for the current satellite images but also for the older ones; in particular for the older images, no recent field study could help to reconstruct these data reliably. In our study we follow the idea that land cover classes of pixels in current images can be transferred to the equivalent pixels of older ones if no changes occurred in the meantime. We therefore combine unsupervised clustering with supervised classification as follows. First, we produce a land cover map for 2014. Second, we cluster the images with clara, which is similar to k-means but suitable for large data sets; the optimal number of classes was determined to be four. Third, we locate unchanged pixels with change vector analysis in the images of 1989 and 1998. For these pixels we transfer the corresponding cluster label from 2014 to 1989 and 1998. Subsequently, the classified pixels serve as training data for supervised classification with random forest, which is carried out for each image separately. Finally, we derive land cover classes from the Landsat image in 2014, photographs and Google Earth and transfer them to the other two images. The resulting classes are shrub land; forest/shallow waters; bare soils/fields with some trees/shrubs; and bare light soils/rocks, fields and settlements. The three classifications are then compared and land cover changes are mapped. The main changes are observable in the surroundings of the National Park; in particular, the common lands have lost their clear boundaries over time. In the National Park, the area of forest increased from 58% in 1989 to 61% in 2014, whereas the area of shrub land decreased by the same amount. The extent of each of the other two classes remained constant. These changes indicate an actual effect of the protection status of the National Park. In our study, remote sensing data are the main source for evaluating the effects and benefits of a protected area without on-site studies. This could be important for regions where field studies are not possible because of insecure political conditions and only remote sensing data are available.
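
    The label-transfer workflow described above can be condensed into the following sketch: pixels whose change-vector magnitude between the two dates is small keep their 2014 cluster label and train a random forest for the older image. Plain k-means stands in for clara, and the change threshold, band count and synthetic reflectances are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    H, W, B = 60, 60, 6
    img_2014 = rng.random((H, W, B))
    img_1989 = img_2014 + rng.normal(scale=0.05, size=(H, W, B))   # mostly unchanged scene

    # 1) Cluster the recent image into 4 land-cover classes.
    X_2014 = img_2014.reshape(-1, B)
    labels_2014 = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_2014)

    # 2) Change vector analysis: magnitude of the spectral difference per pixel.
    magnitude = np.linalg.norm((img_1989 - img_2014).reshape(-1, B), axis=1)
    unchanged = magnitude < np.percentile(magnitude, 60)           # illustrative threshold

    # 3) Transfer labels of unchanged pixels to 1989 and train a random forest on them.
    X_1989 = img_1989.reshape(-1, B)
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X_1989[unchanged], labels_2014[unchanged])
    labels_1989 = rf.predict(X_1989).reshape(H, W)
    print(np.bincount(labels_1989.ravel()))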

  4. 7 CFR 51.1404 - Tolerances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the grade other than for skin color. (3) For loose extraneous or foreign material, by weight. (i) 0.5... requirements for the grade or any specified color classification, including therein not more than 7 percent for... meet the color requirements for the grade or for any specified color classification, but which are not...

  5. The Raman spectrum character of skin tumor induced by UVB

    NASA Astrophysics Data System (ADS)

    Wu, Shulian; Hu, Liangjun; Wang, Yunxia; Li, Yongzeng

    2016-03-01

    In our study, the skin carcinogenesis process induced by UVB was analyzed from the perspective of the tissue spectrum. A home-made Raman spectral system with a millimeter-order excitation laser spot size, combined with multivariate statistical analysis, was used to monitor skin changes induced by UVB irradiation, and its discriminative ability was evaluated. Raman scattering signals of SCC and normal skin were acquired, and differences in the Raman spectra were revealed. Linear discriminant analysis (LDA) based on principal component analysis (PCA) was employed to generate diagnostic algorithms for the classification of skin SCC versus normal skin. The results indicated that Raman spectroscopy combined with PCA-LDA has good potential for improving the diagnosis of skin cancers.
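
    A minimal sketch of a PCA-LDA diagnostic pipeline of the kind described above, using scikit-learn; the synthetic spectra, the number of principal components and the cross-validation setup are illustrative assumptions, not the authors' settings.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-ins for Raman spectra (1000 wavenumber bins per spectrum);
    # class 0 = normal skin, class 1 = SCC.  Real spectra would be loaded instead.
    rng = np.random.default_rng(0)
    normal = rng.normal(size=(40, 1000))
    scc = rng.normal(size=(40, 1000)) + 0.3 * np.sin(np.linspace(0, 6, 1000))
    X = np.vstack([normal, scc])
    y = np.array([0] * 40 + [1] * 40)

    # PCA reduces the spectra to a few components; LDA builds the diagnostic discriminant.
    model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    scores = cross_val_score(model, X, y, cv=5)
    print("cross-validated accuracy: %.2f" % scores.mean())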

  6. Highly stretchable electroluminescent skin for optical signaling and tactile sensing.

    PubMed

    Larson, C; Peele, B; Li, S; Robinson, S; Totaro, M; Beccai, L; Mazzolai, B; Shepherd, R

    2016-03-04

    Cephalopods such as octopuses have a combination of a stretchable skin and color-tuning organs to control both posture and color for visual communication and disguise. We present an electroluminescent material that is capable of large uniaxial stretching and surface area changes while actively emitting light. Layers of transparent hydrogel electrodes sandwich a ZnS phosphor-doped dielectric elastomer layer, creating thin rubber sheets that change illuminance and capacitance under deformation. Arrays of individually controllable pixels in thin rubber sheets were fabricated using replica molding and were subjected to stretching, folding, and rolling to demonstrate their use as stretchable displays. These sheets were then integrated into the skin of a soft robot, providing it with dynamic coloration and sensory feedback from external and internal stimuli. Copyright © 2016, American Association for the Advancement of Science.

  7. Detection and inpainting of facial wrinkles using texture orientation fields and Markov random field modeling.

    PubMed

    Batool, Nazre; Chellappa, Rama

    2014-09-01

    Facial retouching is widely used in the media and entertainment industries. Professional software usually requires a minimum level of user expertise to achieve desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfections. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with the texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents distributions of Gabor features of normal skin versus skin imperfections. Then, a Markov random field model is used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results on images downloaded from the Internet to show the efficacy of our algorithms.
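
    The first two stages described above, Gabor filter responses as per-pixel features and a bimodal Gaussian mixture separating normal skin from imperfections, can be sketched as follows; the Markov random field and orientation-field steps are omitted, and the filter frequency, orientations and synthetic patch are assumptions.

    import numpy as np
    from skimage.filters import gabor
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic grey-level face patch; a real image would be loaded instead.
    skin = rng.normal(0.5, 0.02, size=(128, 128))
    skin[60:68, :] += 0.2 * np.sin(np.linspace(0, 20 * np.pi, 128))   # wrinkle-like band

    # Gabor magnitude responses at a few orientations act as per-pixel features.
    features = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, imag = gabor(skin, frequency=0.2, theta=theta)
        features.append(np.hypot(real, imag))
    features = np.stack(features, axis=-1).reshape(-1, 4)

    # A bimodal Gaussian mixture separates normal skin from wrinkle-like pixels.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
    labels = gmm.predict(features).reshape(128, 128)
    print("pixels in each mode:", np.bincount(labels.ravel()))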

  8. Assessment, prevention and management of skin tears.

    PubMed

    Benbow, Maureen

    2017-04-28

    Skin tears are common in older people. They are acute wounds that are at high risk of becoming complex, chronic wounds due to the interplay between the physiological changes in the skin and trauma from the external environment. Skin tears have been reported to have prevalence rates equal to, or greater than, those for pressure ulcers. A comprehensive risk assessment should include assessment of the individual's general health (chronic/critical disease, polypharmacy and cognitive, sensory and nutritional status); mobility (history of falls, impaired mobility, dependent activities of daily living, and mechanical trauma); and skin (extremes of age, fragile skin and previous skin tears). A recognised classification system should be used to identify and document skin tears and guide treatment decisions in line with local wound management protocols. Nurses and carers are in a prime position to prevent, assess and manage skin tears.

  9. Photoacoustic discrimination of vascular and pigmented lesions using classical and Bayesian methods

    NASA Astrophysics Data System (ADS)

    Swearingen, Jennifer A.; Holan, Scott H.; Feldman, Mary M.; Viator, John A.

    2010-01-01

    Discrimination of pigmented and vascular lesions in skin can be difficult due to factors such as size, subungual location, and the nature of lesions containing both melanin and vascularity. Misdiagnosis may lead to precancerous or cancerous lesions not receiving proper medical care. To aid in the rapid and accurate diagnosis of such pathologies, we develop a photoacoustic system to determine the nature of skin lesions in vivo. By irradiating skin with two laser wavelengths, 422 and 530 nm, we induce photoacoustic responses, and the relative response at these two wavelengths indicates whether the lesion is pigmented or vascular. This response is due to the distinct absorption spectrum of melanin and hemoglobin. In particular, pigmented lesions have ratios of photoacoustic amplitudes of approximately 1.4 to 1 at the two wavelengths, while vascular lesions have ratios of about 4.0 to 1. Furthermore, we consider two statistical methods for conducting classification of lesions: standard multivariate analysis classification techniques and a Bayesian-model-based approach. We study 15 human subjects with eight vascular and seven pigmented lesions. Using the classical method, we achieve a perfect classification rate, while the Bayesian approach has an error rate of 20%.

  10. Segmentation and classification of dermatological lesions

    NASA Astrophysics Data System (ADS)

    Sáez, Aurora; Acha, Begoña; Serrano, Carmen

    2010-03-01

    Certain skin diseases are chronic, inflammatory and without cure. However, there are many treatment options that can clear them for a period of time. Measuring their severity and assessing their extent is fundamental to determining the efficacy of the treatment under test. Two of the most important parameters of severity assessment are erythema (redness) and scaliness. Physicians classify these parameters into several grades by a visual grading method. In this paper a color image segmentation and classification algorithm is developed to obtain an assessment of erythema and scaliness of dermatological lesions. Color digital photographs taken under an acquisition protocol form the database. The difference between the green and blue bands of the images in RGB color space shows two modes (healthy skin and lesion) with clear separation. Otsu's method is applied to this difference in order to isolate the lesion. After the lesion is segmented, some color and texture features are calculated and they are the inputs to a Fuzzy-ARTMAP neural network. The neural network classifies them into the five grades of erythema and the five grades of scaliness. The method has been tested with 31 images with a success percentage of 83.87 % when the images are classified by erythema, and 77.42 % for scaliness classification.
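
    The segmentation step lends itself to a very short sketch: threshold the green-minus-blue difference with Otsu's method to isolate the lesion. The synthetic image and the offset used to simulate a lesion are assumptions for illustration.

    import numpy as np
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(0)
    # Synthetic RGB photograph: lesion pixels have a larger green-minus-blue gap.
    image = rng.integers(60, 120, size=(200, 200, 3)).astype(float)
    image[80:140, 80:140, 1] += 60             # stand-in for an erythematous lesion

    # The green-minus-blue difference shows two modes: healthy skin vs. lesion.
    diff = image[..., 1] - image[..., 2]
    mask = diff > threshold_otsu(diff)         # Otsu's threshold isolates the lesion
    print("lesion fraction: %.3f" % mask.mean())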

  11. A multi-temporal fusion-based approach for land cover mapping in support of nuclear incident response

    NASA Astrophysics Data System (ADS)

    Sah, Shagan

    An increasingly important application of remote sensing is to provide decision support during emergency response and disaster management efforts. Land cover maps constitute one such useful application product during disaster events; if generated rapidly after any disaster, such map products can contribute to the efficacy of the response effort. In light of recent nuclear incidents, e.g., after the earthquake/tsunami in Japan (2011), our research focuses on constructing rapid and accurate land cover maps of the impacted area in case of an accidental nuclear release. The methodology involves integration of results from two different approaches, namely coarse spatial resolution multi-temporal and fine spatial resolution imagery, to increase classification accuracy. Although advanced methods have been developed for classification using high spatial or temporal resolution imagery, only a limited amount of work has been done on fusion of these two remote sensing approaches. The presented methodology thus involves integration of classification results from two different remote sensing modalities in order to improve classification accuracy. The data used included RapidEye and MODIS scenes over the Nine Mile Point Nuclear Power Station in Oswego (New York, USA). The first step in the process was the construction of land cover maps from freely available, high temporal resolution, low spatial resolution MODIS imagery using a time-series approach. We used the variability in the temporal signatures among different land cover classes for classification. The time series-specific features were defined by various physical properties of a pixel, such as variation in vegetation cover and water content over time. The pixels were classified into four land cover classes - forest, urban, water, and vegetation - using Euclidean and Mahalanobis distance metrics. On the other hand, a high spatial resolution commercial satellite, such as RapidEye, can be tasked to capture images over the affected area in the case of a nuclear event. This imagery served as a second source of data to augment results from the time series approach. The classifications from the two approaches were integrated using an a posteriori probability-based fusion approach. This was done by establishing a relationship between the classes, obtained after classification of the two data sources. Despite the coarse spatial resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80%, when compared with GIS data sets from New York State. This fusion thus contributed to classification accuracy refinement, with a few additional advantages, such as correction for cloud cover and providing for an approach that is robust against point-in-time seasonal anomalies, due to the inclusion of multi-temporal data. We concluded that this approach is capable of generating land cover maps of acceptable accuracy and rapid turnaround, which in turn can yield reliable estimates of crop acreage of a region. The final algorithm is part of an automated software tool, which can be used by emergency response personnel to generate a nuclear ingestion pathway information product within a few hours of data collection.
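
    A sketch of a minimum-Mahalanobis-distance classifier of the kind applied to the time-series features above; the synthetic temporal signatures, class count and feature length are illustrative assumptions, not the actual MODIS features used in the study.

    import numpy as np

    def mahalanobis_classifier(X_train, y_train, X_test):
        """Assign each test pixel to the class with the smallest Mahalanobis distance."""
        classes = np.unique(y_train)
        dists = []
        for c in classes:
            Xc = X_train[y_train == c]
            mu = Xc.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(Xc, rowvar=False))
            d = X_test - mu
            dists.append(np.einsum("ij,jk,ik->i", d, cov_inv, d))   # per-pixel squared distance
        return classes[np.argmin(np.stack(dists, axis=1), axis=1)]

    rng = np.random.default_rng(0)
    # Synthetic temporal signatures (e.g. 23 composites per pixel) for four classes:
    # forest, urban, water, vegetation.
    means = rng.normal(size=(4, 23))
    X_train = np.vstack([m + 0.1 * rng.normal(size=(200, 23)) for m in means])
    y_train = np.repeat(np.arange(4), 200)
    X_test = means[1] + 0.1 * rng.normal(size=(5, 23))
    print(mahalanobis_classifier(X_train, y_train, X_test))   # expected to be mostly class 1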

  12. Automatic image analysis and spot classification for detection of pathogenic Escherichia coli on glass slide DNA microarrays

    USDA-ARS?s Scientific Manuscript database

    A computer algorithm was created to inspect scanned images from DNA microarray slides developed to rapidly detect and genotype E. Coli O157 virulent strains. The algorithm computes centroid locations for signal and background pixels in RGB space and defines a plane perpendicular to the line connect...

  13. Improved discrimination among similar agricultural plots using red-and-green-based pseudo-colour imaging

    NASA Astrophysics Data System (ADS)

    Doi, Ryoichi

    2016-04-01

    The effects of a pseudo-colour imaging method were investigated by discriminating among similar agricultural plots in remote sensing images acquired using the Airborne Visible/Infrared Imaging Spectrometer (Indiana, USA) and the Landsat 7 satellite (Fergana, Uzbekistan), and that provided by GoogleEarth (Toyama, Japan). From each dataset, red (R)-green (G)-R-G-blue yellow (RGrgbyB), and RGrgby-1B pseudo-colour images were prepared. From each, cyan, magenta, yellow, key black, L*, a*, and b* derivative grayscale images were generated. In the Airborne Visible/Infrared Imaging Spectrometer image, pixels were selected for corn no tillage (29 pixels), corn minimum tillage (27), and soybean (34) plots. Likewise, in the Landsat 7 image, pixels representing corn (73 pixels), cotton (110), and wheat (112) plots were selected, and in the GoogleEarth image, those representing soybean (118 pixels) and rice (151) were selected. When the 14 derivative grayscale images were used together with an RGB yellow grayscale image, the overall classification accuracy improved from 74 to 94% (Airborne Visible/Infrared Imaging Spectrometer), 64 to 83% (Landsat), or 77 to 90% (GoogleEarth). As an indicator of discriminatory power, the kappa significance improved 1018-fold (Airborne Visible/Infrared Imaging Spectrometer) or greater. The derivative grayscale images were found to increase the dimensionality and quantity of data. Herein, the details of the increases in dimensionality and quantity are further analysed and discussed.

  14. [Hand eczema. The clinical classification of the roles of exogenous and endogenous factors in each type].

    PubMed

    Tamiya, Y

    1994-08-01

    Hand eczema is one of the most common dermatological disorders. Although it is a general term referring to eczematous dermatitis of the hands, it actually covers a wide range of diseases. The classification of hand eczema is controversial even now, as definitions of individual diseases have not yet been established. It is well-known that exogenous factors, such as chemicals or water, are associated with the occurrence of hand eczema. In this study, we focused on endogenous factors, especially personal or family history of atopy as a causative factor in hand eczema. According to exogenous and endogenous factors, we classified hand eczema into three types: atopic dermatitis, contact dermatitis and dysidrosis. This classification is useful because it makes the definition of each disease clear. Skin-humidity and sebum measurement are simple and rapid methods of determining personal atopy, skin condition and the effect of treatment on hand eczema patients.

  15. Documentation and Detection of Colour Changes of Bas Relieves Using Close Range Photogrammetry

    NASA Astrophysics Data System (ADS)

    Malinverni, E. S.; Pierdicca, R.; Sturari, M.; Colosi, F.; Orazi, R.

    2017-05-01

    The digitization of complex buildings, findings or bas-reliefs can strongly facilitate the work of archaeologists, mainly for in-depth analysis tasks. However, although new visualization techniques ease the study phase, a classical naked-eye approach for determining changes or surface alterations has several drawbacks. The research work described here aims to provide experts with a workflow for the evaluation of alterations (e.g. color decay or surface alterations), allowing more rapid and objective monitoring of monuments. More specifically, a processing pipeline has been tested in order to evaluate the color variation between surfaces acquired at different epochs. Reliable change-detection tools are needed in the archaeological domain; in fact, the most widespread practice among archaeologists and practitioners is traditional monitoring of surfaces consisting of three main steps: production of a hand-made map based on subjective analysis, selection of a subset of regions of interest, and removal of small portions of the surface for in-depth laboratory analysis. To overcome this risky and time-consuming process, an automatic digital change-detection procedure represents a turning point. Automatic classification has therefore been carried out according to two approaches: a pixel-based and an object-based method. Pixel-based classification aims to identify the classes by means of the spectral information provided by each pixel in the original bands. The object-based approach operates on sets of pixels (objects/regions) grouped together by means of an image segmentation technique. The methodology was tested by studying the bas-reliefs of a temple located in Peru, named Huaca de la Luna. Although the data sources were collected with unplanned surveys, the workflow proved to be a valuable solution for understanding the main changes over time.

  16. Bayesian Network Structure Learning for Urban Land Use Classification from Landsat ETM+ and Ancillary Data

    NASA Astrophysics Data System (ADS)

    Park, M.; Stenstrom, M. K.

    2004-12-01

    Recognizing urban information from satellite imagery is problematic due to the diverse features and dynamic changes of urban land use. The use of Landsat imagery for urban land use classification involves inherent uncertainty due to its spatial resolution and the low separability among land uses. To resolve the uncertainty problem, we investigated the performance of Bayesian networks to classify urban land use, since Bayesian networks provide a quantitative way of handling uncertainty and have been successfully used in many areas. In this study, we developed the optimized networks for urban land use classification from Landsat ETM+ images of the Marina del Rey area based on USGS land cover/use classification level III. The networks started from a tree structure based on mutual information between variables, and links were then added to improve accuracy. This methodology offers several advantages: (1) The network structure shows the dependency relationships between variables. The class node value can be predicted even with particular band information missing due to sensor system error; the missing information can be inferred from other dependent bands. (2) The network structure provides information on which variables are important for the classification, which is not available from conventional classification methods such as neural networks and maximum likelihood classification. In our case, for example, bands 1, 5 and 6 are the most important inputs in determining the land use of each pixel. (3) The networks can be reduced to those input variables important for classification. This minimizes the problem without considering all possible variables. We also examined the effect of incorporating ancillary data: geospatial information such as the X and Y coordinate values of each pixel and DEM data, and vegetation indices such as NDVI and the Tasseled Cap transformation. The results showed that the locational information improved overall accuracy (81%) and kappa coefficient (76%), and lowered the omission and commission errors compared with using only spectral data (accuracy 71%, kappa coefficient 62%). Incorporating DEM data did not significantly improve overall accuracy (74%) and kappa coefficient (66%) but lowered the omission and commission errors. Incorporating NDVI did not greatly improve the overall accuracy (72%) or kappa coefficient (65%). Including the Tasseled Cap transformation reduced the accuracy (accuracy 70%, kappa 61%). Therefore, the additional information from the DEM and vegetation indices was not as useful as the locational ancillary data.
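
    The tree-construction step, building an initial structure from pairwise mutual information, can be sketched as a maximum-weight spanning tree over a mutual-information matrix (the subsequent link additions are not shown); the discretised synthetic bands and variable names are assumptions.

    import numpy as np
    from itertools import combinations
    from sklearn.metrics import mutual_info_score
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(0)
    n = 5000
    cls = rng.integers(0, 4, size=n)                            # synthetic land-use class
    bands = np.stack([np.clip(cls + rng.integers(-1, 2, size=n), 0, 4)
                      for _ in range(6)], axis=1)               # discretised spectral bands
    variables = np.column_stack([bands, cls])
    names = ["b1", "b2", "b3", "b4", "b5", "b6", "class"]

    # Pairwise mutual information between all variables.
    k = variables.shape[1]
    mi = np.zeros((k, k))
    for i, j in combinations(range(k), 2):
        mi[i, j] = mutual_info_score(variables[:, i], variables[:, j])

    # Maximum-MI spanning tree, obtained as a minimum spanning tree on inverted weights.
    w = np.zeros((k, k))
    iu = np.triu_indices(k, 1)
    w[iu] = mi.max() + 1e-3 - mi[iu]
    tree = minimum_spanning_tree(w).toarray()
    edges = [(names[i], names[j]) for i in range(k) for j in range(k) if tree[i, j] != 0]
    print(edges)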

  17. Analysis of the changes in the tarcrete layer on the desert surface of Kuwait using satellite imagery and cell-based modeling

    NASA Astrophysics Data System (ADS)

    Al-Doasari, Ahmad E.

    The 1991 Gulf War caused massive environmental damage in Kuwait. Deposition of oil and soot droplets from hundreds of burning oil-wells created a layer of tarcrete on the desert surface covering over 900 km2. This research investigates the spatial change in the tarcrete extent from 1991 to 1998 using Landsat Thematic Mapper (TM) imagery and statistical modeling techniques. The pixel structure of TM data allows the spatial analysis of the change in tarcrete extent to be conducted at the pixel (cell) level within a geographical information system (GIS). There are two components to this research. The first is a comparison of three remote sensing classification techniques used to map the tarcrete layer. The second is a spatial-temporal analysis and simulation of tarcrete changes through time. The analysis focuses on an area of 389 km2 located south of the Al-Burgan oil field. Five TM images acquired in 1991, 1993, 1994, 1995, and 1998 were geometrically and atmospherically corrected. These images were classified into six classes: oil lakes; heavy, intermediate, light, and traces of tarcrete; and sand. The classification methods tested were unsupervised, supervised, and neural network supervised (fuzzy ARTMAP). Field data of tarcrete characteristics were collected to support the classification process and to evaluate the classification accuracies. Overall, the neural network method is more accurate (60 percent) than the other two methods; both the unsupervised and the supervised classification accuracy assessments resulted in 46 percent accuracy. The five classifications were used in a lagged autologistic model to analyze the spatial changes of the tarcrete through time. The autologistic model correctly identified overall tarcrete contraction between 1991--1993 and 1995--1998. However, tarcrete contraction between 1993--1994 and 1994--1995 was less well marked, in part because of classification errors in the maps from these time periods. Initial simulations of tarcrete contraction with a cellular automaton model were not very successful. However, more accurate classifications could improve the simulations. This study illustrates how an empirical investigation using satellite images, field data, GIS, and spatial statistics can simulate dynamic land-cover change through the use of a discrete statistical and cellular automaton model.

  18. Melanoma Is Skin Deep: A 3D Reconstruction Technique for Computerized Dermoscopic Skin Lesion Classification

    PubMed Central

    Satheesha, T. Y.; Prasad, M. N. Giri; Dhruve, Kashyap D.

    2017-01-01

    Melanoma mortality rates are the highest amongst skin cancer patients. Melanoma is life threatening when it grows beyond the dermis of the skin; hence, depth is an important factor in diagnosing melanoma. This paper introduces a non-invasive computerized dermoscopy system that considers the estimated depth of skin lesions for diagnosis. A 3-D skin lesion reconstruction technique using the estimated depth obtained from regular dermoscopic images is presented. On the basis of the 3-D reconstruction, depth and 3-D shape features are extracted. In addition to the 3-D features, regular color, texture, and 2-D shape features are also extracted. Feature extraction is critical to achieving accurate results. Apart from melanoma and in-situ melanoma, the proposed system is designed to diagnose basal cell carcinoma, blue nevus, dermatofibroma, haemangioma, seborrhoeic keratosis, and normal mole lesions. For experimental evaluations, the PH2, ISIC: Melanoma Project, and ATLAS dermoscopy data sets are considered. Different feature set combinations are considered and their performance is evaluated. A significant performance improvement is reported after the inclusion of the estimated depth and 3-D features. Good classification scores of sensitivity = 96% and specificity = 97% on the PH2 data set and sensitivity = 98% and specificity = 99% on the ATLAS data set are achieved. Experiments conducted to estimate tumor depth from the 3-D lesion reconstruction are also presented. The experimental results show that the proposed computerized dermoscopy system is efficient and can be used to diagnose a variety of skin lesion dermoscopy images. PMID:28512610

  19. [Actinic keratosis, Bowen's disease, keratoacanthoma and squamous cell carcinoma of the skin].

    PubMed

    Majores, M; Bierhoff, E

    2015-02-01

    Actinic (solar) keratosis is an intraepidermal squamous neoplasm of sun-damaged skin and by far the most frequent neoplastic skin lesion. A subdivision into three grades has been proposed and is gaining acceptance, not least because of the therapeutic consequences. The transition to invasive squamous cell carcinoma is reported in 5-10 % of patients and, with immunosuppression, in 30 % of patients. Bowen's disease is a variant of squamous cell carcinoma in situ of the skin and the mucocutaneous junction. Its differentiation from bowenoid papulosis as a lesion associated with human papillomavirus (HPV), from actinic (solar) keratosis grade III, from intraepidermal poroid lesions and, in cases of the clonal type, from clonal seborrhoeic keratosis and Paget's disease is very important. Keratoacanthoma is currently uniformly interpreted as a variant of highly differentiated squamous cell carcinoma of the skin with characteristic clinical and histomorphological features. Clinically, keratoacanthoma erupts rapidly and is capable of resolving spontaneously. Histologically, there is a characteristic growth pattern and various stages of regression; the final histomorphological diagnosis requires the entire specimen. Squamous cell carcinoma of the skin is the second most common type of skin cancer after basal cell carcinoma. With respect to recurrences and the risk of metastases, the subtyping of cutaneous squamous cell carcinoma is very important. The classification system of the Union Internationale Contre le Cancer (UICC) is based solely on anatomical spread, whereas the classification system of the American Joint Committee on Cancer (AJCC) also considers so-called high-risk features when staging between stages I and II.

  20. Resampling approach for anomalous change detection

    NASA Astrophysics Data System (ADS)

    Theiler, James; Perkins, Simon

    2007-04-01

    We investigate the problem of identifying pixels in pairs of co-registered images that correspond to real changes on the ground. Changes that are due to environmental differences (illumination, atmospheric distortion, etc.) or sensor differences (focus, contrast, etc.) will be widespread throughout the image, and the aim is to avoid these changes in favor of changes that occur in only one or a few pixels. Formal outlier detection schemes (such as the one-class support vector machine) can identify rare occurrences, but will be confounded by pixels that are "equally rare" in both images: they may be anomalous, but they are not changes. We describe a resampling scheme we have developed that formally addresses both of these issues, and reduces the problem to a binary classification, a problem for which a large variety of machine learning tools have been developed. In principle, the effects of misregistration will manifest themselves as pervasive changes, and our method will be robust against them - but in practice, misregistration remains a serious issue.

  1. Fluorescence imaging to quantify crop residue cover

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

    Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with long wave ultraviolet (UV) radiation and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band pass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3(absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.

  2. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256x256 square pixels arranged with a 55 μm pitch (sensitive area 14.08x14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5x10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3x10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s with an injected activity of 26 MBq 99mTc and prior localization with standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter distances from the patient skin.

  3. Identification of cortex in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    VanMeter, John W.; Sandon, Peter A.

    1992-06-01

    The overall goal of the work described here is to make available to the neurosurgeon in the operating room an on-line, three-dimensional, anatomically labeled model of the patient's brain, based on pre-operative magnetic resonance (MR) images. A stereotactic operating microscope is currently in experimental use, which allows structures that have been manually identified in MR images to be made available on-line. We have been working to enhance this system by combining image processing techniques applied to the MR data with an anatomically labeled 3-D brain model developed from the Talairach and Tournoux atlas. Here we describe the process of identifying cerebral cortex in the patient's MR images. MR images of brain tissue are reasonably well described by material mixture models, which identify each pixel as corresponding to one of a small number of materials, or as being a composite of two materials. Our classification algorithm consists of three steps. First, we apply hierarchical, adaptive grayscale adjustments to correct for nonlinearities in the MR sensor. The goal of this preprocessing step, based on the material mixture model, is to make the grayscale distribution of each tissue type constant across the entire image. Next, we perform an initial classification of all tissue types according to gray level. We have used a sum-of-Gaussians approximation of the histogram to perform this classification. Finally, we identify pixels corresponding to cortex, by taking into account the spatial patterns characteristic of this tissue. For this purpose, we use a set of matched filters to identify image locations having the appropriate configuration of gray matter (cortex), cerebrospinal fluid and white matter, as determined by the previous classification step.

  4. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades there has been a decline in natural resources, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed in the quality of the available information in order to obtain reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of this study is to implement pixel- and object-based classification techniques applied to imagery fused with different pansharpening algorithms, and to evaluate the thematic maps generated, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain), Teide National Park, was chosen, and WorldView-2 high resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF based, Wavelet `à trous' and Weighted Wavelet `à trous' through Fractal Dimension Maps) were chosen in order to improve the data quality with the goal of analysing the vegetation classes. Next, different classification algorithms were applied using both pixel-based and object-based approaches, and an accuracy assessment of the resulting thematic maps was performed. The highest classification accuracy was obtained by applying a Support Vector Machine classifier in the object-based approach to the image fused with the Weighted Wavelet `à trous' through Fractal Dimension Maps method. Finally, we highlight the difficulty of classification in the Teide ecosystem due to the heterogeneity and small size of the species. Accurate thematic maps are thus important for further studies in the management and conservation of natural resources.

  5. Quantifying the Availability of Tidewater Glacial Ice as Habitat for Harbor Seals in a Tidewater Glacial Fjord in Alaska Using Object-Based Image Analysis of Airborne Visible Imagery

    NASA Astrophysics Data System (ADS)

    Prakash, A.; Haselwimmer, C. E.; Gens, R.; Womble, J. N.; Ver Hoef, J.

    2013-12-01

    Tidewater glaciers are prominent landscape features that play a significant role in landscape and ecosystem processes along the southeastern and southcentral coasts of Alaska. Tidewater glaciers calve large icebergs that serve as an important substrate for harbor seals (Phoca vitulina richardii) for resting, pupping, nursing young, molting, and avoiding predators. Many of the tidewater glaciers in Alaska are retreating, which may influence harbor seal populations. Our objectives are to investigate the relationship between ice conditions and harbor seal distributions, which are poorly understood, in John's Hopkins Inlet, Glacier Bay National Park, Alaska, using a combination of airborne remote sensing and statistical modeling techniques. We present an overview of some results from Object-Based Image Analysis (OBIA) for classification of a time series of very high spatial resolution (4 cm pixels) airborne imagery acquired over John's Hopkins Inlet during the harbor seal pupping season in June and during the molting season in August from 2007 - 2012. Using OBIA we have developed a workflow to automate processing of the large volumes (~1250 images/survey) of airborne visible imagery for 1) classification of ice products (e.g. percent ice cover, percent brash ice, percent ice bergs) at a range of scales, and 2) quantitative determination of ice morphological properties such as iceberg size, roundness, and texture that are not found in traditional per-pixel classification approaches. These ice classifications and morphological variables are then used in statistical models to assess relationships with harbor seal abundance and distribution. Ultimately, understanding these relationships may provide novel perspectives on the spatial and temporal variation of harbor seals in tidewater glacial fjords.

  6. CON4EI: SkinEthic™ Human Corneal Epithelium Eye Irritation Test (SkinEthic™ HCE EIT) for hazard identification and labelling of eye irritating chemicals.

    PubMed

    Van Rompay, A R; Alépée, N; Nardelli, L; Hollanders, K; Leblanc, V; Drzewiecka, A; Gruszka, K; Guest, R; Kandarova, H; Willoughby, J A; Verstraelen, S; Adriaens, E

    2018-06-01

    Assessment of ocular irritancy is an international regulatory requirement and a necessary step in the safety evaluation of industrial and consumer products. Although a number of in vitro ocular irritation assays exist, none are capable of fully categorizing chemicals as a stand-alone assay. Therefore, the CEFIC-LRI-AIMT6-VITO CON4EI (CONsortium for in vitro Eye Irritation testing strategy) project was developed with the goal of assessing the reliability of eight in vitro/alternative test methods as well as establishing an optimal tiered-testing strategy. One of the in vitro assays selected was the validated SkinEthic™ Human Corneal Epithelium Eye Irritation Test method (SkinEthic™ HCE EIT). The SkinEthic™ HCE EIT has already demonstrated its capacity to correctly identify chemicals (both substances and mixtures) not requiring classification and labelling for eye irritation or serious eye damage (No Category). The goal of this study was to evaluate the performance of the SkinEthic™ HCE EIT test method in terms of the important in vivo drivers of classification. For the performance with respect to the drivers all in vivo Cat 1 and No Cat chemicals were 100% correctly identified. For Cat 2 chemicals the liquids and the solids had a sensitivity of 100% and 85.7%, respectively. For the SkinEthic™ HCE EIT test method, 100% concordance in predictions (No Cat versus No prediction can be made) between the two participating laboratories was obtained. The accuracy of the SkinEthic™ HCE EIT was 97.5% with 100% sensitivity and 96.9% specificity. The SkinEthic™ HCE EIT confirms its excellent results of the validation studies. Copyright © 2017. Published by Elsevier Ltd.

  7. A retrospective analysis of in vivo eye irritation, skin irritation and skin sensitisation studies with agrochemical formulations: Setting the scene for development of alternative strategies.

    PubMed

    Corvaro, M; Gehen, S; Andrews, K; Chatfield, R; Macleod, F; Mehta, J

    2017-10-01

    Analysis of the prevalence of health effects in large scale databases is key in defining testing strategies within the context of Integrated Approaches on Testing and Assessment (IATA), and is relevant to drive policy changes in existing regulatory toxicology frameworks towards non-animal approaches. A retrospective analysis of existing results from in vivo skin irritation, eye irritation, and skin sensitisation studies on a database of 223 agrochemical formulations is herein published. For skin or eye effects, high prevalence of mild to non-irritant formulations (i.e. per GHS, CLP or EPA classification) would generally suggest a bottom-up approach. Severity of erythema or corneal opacity, for skin or eye effects respectively, were the key drivers for classification, consistent with existing literature. The reciprocal predictivity of skin versus eye irritation and the good negative predictivity of the GHS additivity calculation approach (>85%) provided valuable non-testing evidence for irritation endpoints. For dermal sensitisation, concordance on data from three different methods confirmed the high false negative rate for the Buehler method in this product class. These results have been reviewed together with existing literature on the use of in vitro alternatives for agrochemical formulations, to propose improvements to current regulatory strategies and to identify further research needs. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Goldberg, Hirsh; Nasrabadi, Nasser M.

    2007-04-01

    In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick, all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectrum and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation between the projection of the current test pixel spectrum and that of the OWR mean spectrum is greater than a certain threshold. Comparisons are made using receiver operating characteristic (ROC) curves.
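
    As a rough illustration of the dual-window idea described above (not the authors' implementation), the sketch below estimates PCA projection bases from outer-window spectra and scores a test pixel by the separation between its projection and that of the outer-window mean; the cube, window sizes and component count are synthetic and purely illustrative.

    ```python
    import numpy as np

    def pca_dual_window_score(cube, row, col, inner=3, outer=9, n_components=3):
        """Toy dual-window anomaly score: project the test pixel and the
        outer-window (OWR) mean onto PCA bases estimated from OWR spectra and
        return the separation between the two projections."""
        half_o, half_i = outer // 2, inner // 2
        r0, r1 = max(0, row - half_o), min(cube.shape[0], row + half_o + 1)
        c0, c1 = max(0, col - half_o), min(cube.shape[1], col + half_o + 1)
        block = cube[r0:r1, c0:c1, :].reshape(-1, cube.shape[2])

        # OWR = local block minus the inner window around the test pixel
        rr, cc = np.meshgrid(np.arange(r0, r1), np.arange(c0, c1), indexing="ij")
        owr = block[((np.abs(rr - row) > half_i) | (np.abs(cc - col) > half_i)).ravel()]

        mean_owr = owr.mean(axis=0)
        _, _, vt = np.linalg.svd(owr - mean_owr, full_matrices=False)
        basis = vt[:n_components]          # leading PCA directions (k x bands)

        separation = basis @ cube[row, col, :] - basis @ mean_owr
        return np.linalg.norm(separation)

    # Synthetic 20 x 20 x 10 cube with one spectral anomaly at (10, 10)
    rng = np.random.default_rng(0)
    cube = rng.normal(size=(20, 20, 10))
    cube[10, 10, :] += 8.0
    print(pca_dual_window_score(cube, 10, 10), pca_dual_window_score(cube, 3, 3))
    ```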

  9. Angular difference feature extraction for urban scene classification using ZY-3 multi-angle high-resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Chen, Huijun; Gong, Jianya

    2018-01-01

    Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. The experiments on ZY-3 multi-angle images confirm that the proposed ADF features can effectively improve the accuracy of urban scene classification, with a significant increase in overall accuracy (3.8-11.7%) compared to using the spectral bands alone. Furthermore, the results indicated the superiority of the proposed ADFs in distinguishing between the spectrally similar and complex man-made classes, including roads and various types of buildings (e.g., high buildings, urban villages, and residential apartments).
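
    The pixel-level ADF idea can be illustrated with a very small sketch (not from the paper): per-pixel differences and ratios between co-registered view-angle images are stacked as additional feature bands; the arrays and the choice of differences/ratios here are purely synthetic assumptions.

    ```python
    import numpy as np

    def adf_pixel(nadir, forward, backward, eps=1e-6):
        """Pixel-level angular difference features: per-pixel differences and
        a ratio between view angles, stacked as extra feature bands."""
        diff_fn = forward - nadir
        diff_bn = backward - nadir
        ratio_fb = (forward + eps) / (backward + eps)
        return np.dstack([diff_fn, diff_bn, ratio_fb])

    rng = np.random.default_rng(1)
    nadir, fwd, bwd = (rng.random((64, 64)) for _ in range(3))
    features = adf_pixel(nadir, fwd, bwd)
    print(features.shape)   # (64, 64, 3)
    ```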

  10. Mapping burn severity, pine beetle infestation, and their interaction at the High Park Fire

    NASA Astrophysics Data System (ADS)

    Stone, Brandon

    North America's western forests are experiencing wildfire and mountain pine beetle (MPB) disturbances that are unprecedented in the historic record, but it remains unclear whether and how MPB infestation influences post-infestation fire behavior. The 2012 High Park Fire burned in an area that's estimated to have begun a MPB outbreak cycle within five years before the wildfire, resulting in a landscape in which disturbance interactions can be studied. A first step in studying these interactions is mapping regions of beetle infestation and post-fire disturbance. We implemented an approach for mapping beetle infestation and burn severity using as source data three 5 m resolution RapidEye satellite images (two pre-fire, one post-fire). A two-tiered methodology was developed to overcome the spatial limitations of many classification approaches through explicit analyses at both pixel and plot level. Major land cover classes were photo-interpreted at the plot-level and their spectral signature used to classify 5 m images. A new image was generated at 25 m resolution by tabulating the fraction of coincident 5 m pixels in each cover class. The original photo interpretation was then used to train a second classification using as its source image the new 25 m image. Maps were validated using k-fold analysis of the original photo interpretation, field data collected immediately post-fire, and publicly available classifications. To investigate the influence of pre-fire beetle infestation on burn severity within the High Park Fire, we fit a log-linear model of conditional independence to our thematic maps after controlling for forest cover class and slope aspect. Our analysis revealed a high co-occurrence of severe burning and beetle infestation within high elevation lodgepole pine stands, but did not find statistically significant evidence that infected stands were more likely to burn severely than similar uninfected stands. Through an inspection of the year-to-year changes in the class fraction signatures of pixels classified as MPB infestation, we were able to observe increases in infection extent and intensity in the year before the fire. The resulting maps will help to increase our understanding of the process that contributed to the High Park Fire, and we believe that the novel classification approach will allow for improved characterization of forest disturbances.

  11. WND-CHARM: Multi-purpose image classification using compound image transforms

    PubMed Central

    Orlov, Nikita; Shamir, Lior; Macura, Tomasz; Johnston, Josiah; Eckley, D. Mark; Goldberg, Ilya G.

    2008-01-01

    We describe a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provide classification accuracy comparable to state-of-the-art task-specific image classifiers. The proposed image classifier first extracts a large set of 1025 image features including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are computed on the raw image, transforms of the image, and transforms of transforms of the image. The feature values are then used to classify test images into a set of pre-defined image classes. This classifier was tested on several different problems including biological image classification and face recognition. Although we cannot make a claim of universality, our experimental results show that this classifier performs as well or better than classifiers developed specifically for these image classification tasks. Our classifier’s high performance on a variety of classification problems is attributed to (i) a large set of features extracted from images; and (ii) an effective feature selection and weighting algorithm sensitive to specific image classification problems. The algorithms are available for free download from openmicroscopy.org. PMID:18958301

  12. Real-time, resource-constrained object classification on a micro-air vehicle

    NASA Astrophysics Data System (ADS)

    Buck, Louis; Ray, Laura

    2013-12-01

    A real-time embedded object classification algorithm is developed through the novel combination of binary feature descriptors, a bag-of-visual-words object model and the cortico-striatal loop (CSL) learning algorithm. The BRIEF, ORB and FREAK binary descriptors are tested and compared to SIFT descriptors with regard to their respective classification accuracies, execution times, and memory requirements when used with CSL on a 12.6 g ARM Cortex embedded processor running at 800 MHz. Additionally, the effect of χ2 feature mapping and opponent-color representations used with these descriptors is examined. These tests are performed on four data sets of varying sizes and difficulty, and the BRIEF descriptor is found to yield the best combination of speed and classification accuracy. Its use with CSL achieves accuracies between 67% and 95% of those achieved with SIFT descriptors and allows for the embedded classification of a 128x192 pixel image in 0.15 seconds, 60 times faster than classification with SIFT. The χ2 mapping is found to provide substantial improvements in classification accuracy for all of the descriptors at little cost, while opponent-color descriptors offer accuracy improvements only on colorful datasets.
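
    A hedged sketch of the χ2 feature-mapping step applied to bag-of-visual-words histograms, using scikit-learn's additive χ2 approximation in front of a linear classifier; the histograms and labels below are synthetic, and the descriptor pipeline (BRIEF/ORB/FREAK extraction, CSL learning) is not reproduced.

    ```python
    import numpy as np
    from sklearn.kernel_approximation import AdditiveChi2Sampler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Synthetic bag-of-visual-words histograms: one L1-normalised row per image
    rng = np.random.default_rng(0)
    X = rng.integers(0, 20, size=(200, 64)).astype(float)
    X /= X.sum(axis=1, keepdims=True)
    y = (X[:, :32].sum(axis=1) > 0.5).astype(int)     # toy labels

    # Explicit additive chi-squared feature map followed by a linear SVM
    model = make_pipeline(AdditiveChi2Sampler(sample_steps=2), LinearSVC())
    model.fit(X, y)
    print("training accuracy:", round(model.score(X, y), 3))
    ```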

  13. Reducing uncertainty on satellite image classification through spatiotemporal reasoning

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Nikolakaki, Natassa; Psillakis, Periklis; Miliaresis, George; Xanthakis, Michail

    2014-05-01

    The natural habitat constantly endures both inherent natural and human-induced influences. Remote sensing has been providing monitoring oriented solutions regarding the natural Earth surface, by offering a series of tools and methodologies which contribute to prudent environmental management. Processing and analysis of multi-temporal satellite images for the observation of land changes often include classification and change-detection techniques. These error-prone procedures are influenced mainly by the distinctive characteristics of the study areas, the limitations of the remote sensing systems and the image analysis processes. The present study takes advantage of the temporal continuity of multi-temporal classified images, in order to reduce classification uncertainty, based on reasoning rules. More specifically, pixel groups that temporally oscillate between classes are liable to misclassification or indicate problematic areas. On the other hand, constant pixel group growth indicates a pressure-prone area. Computational tools are developed in order to disclose the alterations in land use dynamics and offer a spatial reference to the pressures that land use classes endure and impose between them. Moreover, by revealing areas that are susceptible to misclassification, we propose specific target site selection for training during the process of supervised classification. The underlying objective is to contribute to the understanding and analysis of anthropogenic and environmental factors that influence land use changes. The developed algorithms have been tested upon Landsat satellite image time series, depicting the National Park of Ainos in Kefallinia, Greece, where the globally unique Abies cephalonica grows. Along with the minor changes and pressures indicated in the test area due to harvesting and other human interventions, the developed algorithms successfully captured fire incidents that have been historically confirmed. Overall, the results have shown that the use of the suggested procedures can contribute to the reduction of the classification uncertainty and support the existing knowledge regarding the pressure among land-use changes.
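
    A minimal sketch (ours, not the authors' reasoning rules) of the step that flags temporally oscillating pixels: count label transitions per pixel across a stack of classified maps and mark pixels whose class changes back and forth more often than a chosen tolerance; the stack size, class count and threshold below are illustrative assumptions.

    ```python
    import numpy as np

    def oscillation_map(label_stack):
        """Count class transitions per pixel across a time series of classified
        maps (shape: time x rows x cols). Pixels with many transitions are
        candidates for misclassification or genuinely unstable land cover."""
        return (np.diff(label_stack, axis=0) != 0).sum(axis=0)

    rng = np.random.default_rng(2)
    stack = rng.integers(0, 4, size=(6, 50, 50))      # 6 dates, 4 classes
    transitions = oscillation_map(stack)
    suspect = transitions >= 4                         # oscillating pixels
    print(suspect.sum(), "pixels flagged out of", suspect.size)
    ```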

  14. Detection of Coastline Deformation Using Remote Sensing and Geodetic Surveys

    NASA Astrophysics Data System (ADS)

    Sabuncu, A.; Dogru, A.; Ozener, H.; Turgut, B.

    2016-06-01

    Coastal areas are being degraded by uses that disturb the natural balance. Uncontrolled sand mining from the sea for nearshore nourishment and construction is among the main causes. Physical interventions for sand mining pose an ecological threat to the coastal environment. However, use of marine sand is inevitable because of economic reasons or unobtainable land-based sand resources. The most convenient solution in such a protection-usage dilemma is to reduce the negative impacts of sand production from the marine environment. This depends on the accurate determination of criteria on the location, method, and amount of sand production. With this motivation, nearshore geodetic surveying studies were performed on the Kilyos Campus of Bogazici University, located on the Black Sea coast north of Istanbul, Turkey, between 2001 and 2002. The study area extends 1 km alongshore. A geodetic survey was carried out in the summer of 2001 to detect the initial condition of the shoreline. Long-term seasonal changes in shoreline positions were determined biannually. The coast was measured with post-processed kinematic GPS. In addition, shoreline change was studied using Landsat imagery between 1986 and 2015. The data set consisted of Landsat 5 images dated 05.08.1986 and 31.08.2007 and Landsat 7 images dated 21.07.2001 and 28.07.2015. Land cover types in the study area were analyzed using pixel-based classification methods. First, unsupervised classification based on ISODATA (Iterative Self-Organizing Data Analysis Technique) was applied and spectral clusters were determined, giving prior knowledge about the study area. In the second step, supervised classification was carried out using three different approaches: minimum-distance, parallelepiped, and maximum-likelihood. All pixel-based classification processes were performed with ENVI 4.8 image processing software. Results of the geodetic surveys and classification outputs are presented in this paper.

  15. Comparison of Munsell(®) color chart assessments with primary schoolchildren's self-reported skin color.

    PubMed

    Wright, C Y; Reeder, A I; Gray, A R; Hammond, V A

    2015-11-01

    Skin color is related to human health outcomes, including the risks of skin cancer and vitamin D insufficiency. Self-perceptions of skin color may influence health behaviours, including the adoption of practices protective against harmful solar ultraviolet radiation levels. Misperception of personal risk may have negative health implications. The aim of this study is to determine whether Munsell(®) color chart assessments align with child self-reported skin color. Two trained investigators with assessed color acuity visually classified student inner upper arm constitutive skin color. The Munsell(®) classifications obtained were converted to Individual Typology Angle (ITA) values and the corresponding Del Bino skin color categories using spectrocolorimeter measurements and published values/data. As part of a written questionnaire on sun protection knowledge, attitudes, and behaviours, self-completed in class time, students classified their end-of-winter skin color. Student self-reports were compared with the ITA-based Del Bino classifications. A total of 477 New Zealand primary school students attending 27 randomly selected schools from five geographic regions participated. The main measures were self-reported skin color and visually observed skin color. A monotonic association was observed between the distribution of spectrophotometer ITA scores obtained for Munsell(®) tiles and child self-reports of skin color, providing some evidence for the validity of self-report among New Zealand primary school children, although the lighter-colored ITA-defined groups were most numerous in this study sample. Statistically significant differences in ITA scores were found by ethnicity, self-reported skin color, and geographic residence (P < 0.001). Certain Munsell(®) color tiles were frequently selected as providing a best match to skin color. Assessment using Munsell(®) color charts was simple, inexpensive, and practical for field use and acceptable to children. The results suggest that this method may prove useful for making comparisons with other studies using visual tools to assess skin color. Alignment between the ITA distribution derived from the Munsell(®) assessment and child skin color self-reports could probably be improved, particularly with the addition of another 'light'/'white' color category in the self-report instrument. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
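
    For reference, the conversion used in this kind of study maps CIELAB coordinates to the Individual Typology Angle, ITA = arctan((L* - 50)/b*) expressed in degrees; the sketch below applies it together with commonly cited Del Bino category thresholds (the exact cut-offs used in the paper are not restated here, so treat them as illustrative).

    ```python
    import math

    def ita_degrees(L_star, b_star):
        """Individual Typology Angle (ITA) from CIELAB lightness L* and b*."""
        return math.degrees(math.atan2(L_star - 50.0, b_star))

    def del_bino_category(ita):
        """Map ITA to commonly cited Del Bino skin-colour categories."""
        if ita > 55:  return "very light"
        if ita > 41:  return "light"
        if ita > 28:  return "intermediate"
        if ita > 10:  return "tan"
        if ita > -30: return "brown"
        return "dark"

    ita = ita_degrees(L_star=65.0, b_star=15.0)
    print(round(ita, 1), del_bino_category(ita))
    ```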

  16. Comparative study of wine tannin classification using Fourier transform mid-infrared spectrometry and sensory analysis.

    PubMed

    Fernández, Katherina; Labarca, Ximena; Bordeu, Edmundo; Guesalaga, Andrés; Agosin, Eduardo

    2007-11-01

    Wine tannins are fundamental to the determination of wine quality. However, the chemical and sensorial analysis of these compounds is not straightforward and a simple and rapid technique is necessary. We analyzed the mid-infrared spectra of white, red, and model wines spiked with known amounts of skin or seed tannins, collected using Fourier transform mid-infrared (FT-MIR) transmission spectroscopy (400-4000 cm(-1)). The spectral data were classified according to their tannin source, skin or seed, and tannin concentration by means of discriminant analysis (DA) and soft independent modeling of class analogy (SIMCA) to obtain a probabilistic classification. Wines were also classified sensorially by a trained panel and compared with FT-MIR. SIMCA models gave the most accurate classification (over 97%) and prediction (over 60%) among the wine samples. The prediction was increased (over 73%) using the leave-one-out cross-validation technique. Sensory classification of the wines was less accurate than that obtained with FT-MIR and SIMCA. Overall, these results show the potential of FT-MIR spectroscopy, in combination with adequate statistical tools, to discriminate wines with different tannin levels.

  17. Accounting for data variability, a key factor in in vivo/in vitro relationships: application to the skin sensitization potency (in vivo LLNA versus in vitro DPRA) example.

    PubMed

    Dimitrov, S; Detroyer, A; Piroird, C; Gomes, C; Eilstein, J; Pauloin, T; Kuseva, C; Ivanova, H; Popova, I; Karakolev, Y; Ringeissen, S; Mekenyan, O

    2016-12-01

    When searching for alternative methods to animal testing, confidently rescaling an in vitro result to the corresponding in vivo classification is still a challenging problem. Although one of the most important factors affecting good correlation is sample characteristics, they are very rarely integrated into correlation studies. Usually, in these studies, it is implicitly assumed that both compared values are error-free numbers, which they are not. In this work, we propose a general methodology to analyze and integrate data variability and thus confidence estimation when rescaling from one test to another. The methodology is demonstrated through the case study of rescaling the in vitro Direct Peptide Reactivity Assay (DPRA) reactivity to the in vivo Local Lymph Node Assay (LLNA) skin sensitization potency classifications. In a first step, a comprehensive statistical analysis evaluating the reliability and variability of LLNA and DPRA as such was done. These results allowed us to link the concept of gray zones and confidence probability, which in turn represents a new perspective for a more precise knowledge of the classification of chemicals within their in vivo OR in vitro test. Next, the novelty and practical value of our methodology introducing variability into the threshold optimization between the in vitro AND in vivo test resides in the fact that it attributes a confidence probability to the predicted classification. The methodology, classification and screening approach presented in this study are not restricted to skin sensitization only. They could be helpful also for fate, toxicity and health hazard assessment where plenty of in vitro and in chemico assays and/or QSARs models are available. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Resonance Raman of BCC and normal skin

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-hui; Sriramoju, Vidyasagar; Boydston-White, Susie; Wu, Binlin; Zhang, Chunyuan; Pei, Zhe; Sordillo, Laura; Beckman, Hugh; Alfano, Robert R.

    2017-02-01

    The Resonance Raman (RR) spectra of basal cell carcinoma (BCC) and normal human skin tissues were analyzed using 532 nm laser excitation. RR spectral differences in vibrational fingerprints revealed the normal and cancerous states of the skin tissues. The diagnostic criteria for BCC tissues were based on native RR biomarkers and changes in their peak intensities. Diagnostic algorithms for the classification of BCC and normal tissue were generated using a support vector machine (SVM) classifier and principal component analysis (PCA). These statistical methods were used to analyze the RR spectral data collected from skin tissues, yielding a diagnostic sensitivity of 98.7% and specificity of 79% compared with pathological reports.
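
    A hedged sketch of a PCA-plus-SVM classification step on spectra, with synthetic data standing in for the RR spectra; the component count, kernel and cross-validation scheme are illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # Synthetic stand-in for spectra: 60 samples x 500 wavenumber bins,
    # with a small intensity shift in the "cancerous" class around bin 250
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 500))
    y = np.array([0] * 30 + [1] * 30)
    X[y == 1, 240:260] += 0.8

    model = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
    scores = cross_val_score(model, X, y, cv=5)
    print("cross-validated accuracy:", scores.mean().round(3))
    ```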

  19. Dynamic Skin Patterns in Cephalopods

    PubMed Central

    How, Martin J.; Norman, Mark D.; Finn, Julian; Chung, Wen-Sung; Marshall, N. Justin

    2017-01-01

    Cephalopods are unrivaled in the natural world in their ability to alter their visual appearance. These mollusks have evolved a complex system of dermal units under neural, hormonal, and muscular control to produce an astonishing variety of body patterns. With parallels to the pixels on a television screen, cephalopod chromatophores can be coordinated to produce dramatic, dynamic, and rhythmic displays, defined collectively here as “dynamic patterns.” This study examines the nature, context, and potential functions of dynamic patterns across diverse cephalopod taxa. Examples are presented for 21 species, including 11 previously unreported in the scientific literature. These range from simple flashing or flickering patterns, to highly complex passing wave patterns involving multiple skin fields. PMID:28674500

  20. Numerical trials of HISSE

    NASA Technical Reports Server (NTRS)

    Peters, C.; Kampe, F. (Principal Investigator)

    1980-01-01

    The mathematical description and implementation of the statistical estimation procedure known as the Houston integrated spatial/spectral estimator (HISSE) is discussed. HISSE is based on a normal mixture model and is designed to take advantage of spectral and spatial information of LANDSAT data pixels, utilizing the initial classification and clustering information provided by the AMOEBA algorithm. The HISSE calculates parametric estimates of class proportions which reduce the error inherent in estimates derived from typical classify and count procedures common to nonparametric clustering algorithms. It also singles out spatial groupings of pixels which are most suitable for labeling classes. These calculations are designed to aid the analyst/interpreter in labeling patches with a crop class label. Finally, HISSE's initial performance on an actual LANDSAT agricultural ground truth data set is reported.

  1. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method surpasses that of the Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.
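
    A rough sketch of a fuzzy-entropy threshold search in the spirit described above (class-mean-based memberships, Shannon fuzzy entropy minimised over candidate thresholds); this is a generic Huang-Wang-style formulation on synthetic data, not the authors' exact measure.

    ```python
    import numpy as np

    def fuzzy_entropy_threshold(image):
        """Pick the grey level minimising a fuzzy (Shannon) entropy measure.
        Membership of each grey level to its class depends on the distance to
        the class mean, so the memberships are functions of the threshold."""
        hist, _ = np.histogram(image, bins=256, range=(0, 256))
        levels = np.arange(256, dtype=float)
        spread = 255.0
        best_t, best_h = 0, np.inf
        for t in range(1, 255):
            n0, n1 = hist[:t].sum(), hist[t:].sum()
            if n0 == 0 or n1 == 0:
                continue
            m0 = (hist[:t] * levels[:t]).sum() / n0      # background mean
            m1 = (hist[t:] * levels[t:]).sum() / n1      # foreground mean
            mean = np.where(levels < t, m0, m1)
            mu = 1.0 / (1.0 + np.abs(levels - mean) / spread)
            mu = np.clip(mu, 1e-12, 1 - 1e-12)
            h = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))
            total = (hist * h).sum()
            if total < best_h:
                best_h, best_t = total, t
        return best_t

    rng = np.random.default_rng(0)
    pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
    print("estimated threshold:", fuzzy_entropy_threshold(np.clip(pixels, 0, 255)))
    ```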

  3. Fast microcalcification detection in ultrasound images using image enhancement and threshold adjacency statistics

    NASA Astrophysics Data System (ADS)

    Cho, Baek Hwan; Chang, Chuho; Lee, Jong-Ha; Ko, Eun Young; Seong, Yeong Kyeong; Woo, Kyoung-Gu

    2013-02-01

    The existence of microcalcifications (MCs) is an important marker of malignancy in breast cancer. In spite of its benefits for mass detection in dense breasts, ultrasonography is believed to be less reliable for detecting MCs. For computer aided diagnosis systems, however, accurate detection of MCs has the possibility of improving the performance in both Breast Imaging-Reporting and Data System (BI-RADS) lexicon description for calcifications and malignancy classification. We propose a new efficient and effective method for MC detection using image enhancement and threshold adjacency statistics (TAS). The main idea of TAS is to threshold an image and to count the number of white pixels with a given number of adjacent white pixels. Our contribution is to adopt TAS features and apply image enhancement to facilitate MC detection in ultrasound images. We employed fuzzy logic, top-hat filtering, and texture filtering to enhance images for MCs. Using a total of 591 images, the classification accuracy of the proposed method in MC detection was 82.75%, which is comparable to that of Haralick texture features (81.38%). When combined, the performance was as high as 85.11%. In addition, our method also showed potential for mass classification when combined with existing features. In conclusion, the proposed method exploiting image enhancement and TAS features has the potential to deal with MC detection in ultrasound images efficiently and to extend to the real-time localization and visualization of MCs.
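
    The TAS idea sketched above can be written down compactly: binarise the image at a threshold, count each white pixel's white 8-neighbours, and take the normalised 9-bin histogram of those counts as the feature vector. The snippet below is a simplified single-threshold variant on synthetic data (the original statistics use specific threshold rules and several binarisations).

    ```python
    import numpy as np

    def threshold_adjacency_stats(image, threshold):
        """Simplified Threshold Adjacency Statistics: binarise the image, count
        each white pixel's white 8-neighbours, and return the fraction of white
        pixels having 0..8 white neighbours."""
        binary = (image > threshold).astype(int)
        padded = np.pad(binary, 1)
        neighbours = sum(
            np.roll(np.roll(padded, dr, axis=0), dc, axis=1)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)
        )[1:-1, 1:-1]
        counts = neighbours[binary == 1]
        hist = np.bincount(counts, minlength=9)[:9].astype(float)
        return hist / max(counts.size, 1)

    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    print(threshold_adjacency_stats(img, 0.8).round(3))
    ```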

  4. Probabilistic detection of volcanic ash using a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Mackie, Shona; Watson, Matthew

    2014-03-01

    Airborne volcanic ash can pose a hazard to aviation, agriculture, and both human and animal health. It is therefore important that ash clouds are monitored both day and night, even when they travel far from their source. Infrared satellite data provide perhaps the only means of doing this, and since the hugely expensive ash crisis that followed the 2010 Eyjafjalljökull eruption, much research has been carried out into techniques for discriminating ash in such data and for deriving key properties. Such techniques are generally specific to data from particular sensors, and most approaches result in a binary classification of pixels into "ash" and "ash free" classes with no indication of the classification certainty for individual pixels. Furthermore, almost all operational methods rely on expert-set thresholds to determine what constitutes "ash" and can therefore be criticized for being subjective and dependent on expertise that may not remain with an institution. Very few existing methods exploit available contemporaneous atmospheric data to inform the detection, despite the sensitivity of most techniques to atmospheric parameters. The Bayesian method proposed here does exploit such data and gives a probabilistic, physically based classification. We provide an example of the method's implementation for a scene containing both land and sea observations, and a large area of desert dust (often misidentified as ash by other methods). The technique has already been successfully applied to other detection problems in remote sensing, and this work shows that it will be a useful and effective tool for ash detection.
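
    A toy sketch of the probabilistic, per-pixel flavour of such a detector (not the authors' scheme, which also folds in contemporaneous atmospheric data): Gaussian class-conditional likelihoods for a split-window brightness temperature difference are combined with a prior via Bayes' rule; the observable, class statistics and prior below are all invented for illustration.

    ```python
    import numpy as np

    def gaussian_pdf(x, mean, std):
        return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

    def ash_posterior(btd, prior_ash=0.05,
                      ash_mean=-1.5, ash_std=1.0,      # illustrative class statistics
                      clear_mean=0.5, clear_std=0.8):
        """Per-pixel posterior probability of ash from a brightness temperature
        difference (BTD), via Bayes' rule with Gaussian likelihoods."""
        like_ash = gaussian_pdf(btd, ash_mean, ash_std)
        like_clear = gaussian_pdf(btd, clear_mean, clear_std)
        num = like_ash * prior_ash
        return num / (num + like_clear * (1.0 - prior_ash))

    rng = np.random.default_rng(0)
    scene_btd = rng.normal(0.5, 0.8, size=(100, 100))
    scene_btd[40:60, 40:60] = rng.normal(-1.5, 1.0, size=(20, 20))   # synthetic plume
    prob = ash_posterior(scene_btd)
    print("pixels with P(ash) > 0.5:", int((prob > 0.5).sum()))
    ```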

  5. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    PubMed

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.

  6. Spatiotemporal analysis of land use and land cover change in the Brazilian Amazon

    PubMed Central

    Li, Guiying; Moran, Emilio; Hetrick, Scott

    2013-01-01

    This paper provides a comparative analysis of land use and land cover (LULC) changes among three study areas with different biophysical environments in the Brazilian Amazon at multiple scales, from per-pixel, polygon, census sector, to study area. Landsat images acquired in the years of 1990/1991, 1999/2000, and 2008/2010 were used to examine LULC change trajectories with the post-classification comparison approach. A classification system composed of six classes – forest, savanna, other-vegetation (secondary succession and plantations), agro-pasture, impervious surface, and water, was designed for this study. A hierarchical-based classification method was used to classify Landsat images into thematic maps. This research shows different spatiotemporal change patterns, composition and rates among the three study areas and indicates the importance of analyzing LULC change at multiple scales. The LULC change analysis over time for entire study areas provides an overall picture of change trends, but detailed change trajectories and their spatial distributions can be better examined at a per-pixel scale. The LULC change at the polygon scale provides the information of the changes in patch sizes over time, while the LULC change at census sector scale gives new insights on how human-induced activities (e.g., urban expansion, roads, and land use history) affect LULC change patterns and rates. This research indicates the necessity to implement change detection at multiple scales for better understanding the mechanisms of LULC change patterns and rates. PMID:24127130

  7. Classification and analysis of the Rudaki's Area

    NASA Astrophysics Data System (ADS)

    Zambon, F.; De sanctis, M.; Capaccioni, F.; Filacchione, G.; Carli, C.; Ammannito, E.; Frigeri, A.

    2011-12-01

    During the first two MESSENGER flybys the Mercury Dual Imaging System (MDIS) mapped 90% of Mercury's surface. An effective way to study the different terrains on planetary surfaces is to apply classification methods. These are based on clustering algorithms and can be divided into two categories: unsupervised and supervised. Unsupervised classifiers do not require analyst feedback, and the algorithm automatically organizes pixel values into classes. In the supervised method, instead, the analyst must choose the "training areas" that define the pixel values of a given class. We applied an unsupervised classifier, ISODATA, to the WAC filter images of Rudaki's area, where several kinds of terrain have been identified showing differences in albedo, topography and crater density. The ISODATA classifier divides this region into four classes: 1) shadow regions, 2) rough regions, 3) smooth plains, 4) highest reflectance areas. ISODATA cannot distinguish the high albedo regions from the highly reflective illuminated edges of the craters; however, the algorithm identifies four classes that can be considered different units mainly on the basis of their reflectances at the various wavelengths. It is not possible, however, to extract compositional information because of the absence of clear spectral features. An additional analysis was made using ISODATA to choose the "training areas" for further supervised classifications. This approach would allow, for example, more accurate separation of the crater edges from the high reflectance areas and of the low reflectance regions from the shadow areas.
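
    As a minimal stand-in for the unsupervised step (ISODATA adds split/merge heuristics that are omitted here), the sketch below clusters multiband pixel vectors into four classes with plain k-means on a synthetic image stack; the image dimensions and band count are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic stand-in for a stack of filter images: rows x cols x bands
    rng = np.random.default_rng(0)
    image = rng.random((100, 100, 8))

    # Unsupervised classification into four classes; plain k-means is used
    # here as a minimal stand-in for an ISODATA-style clustering.
    pixels = image.reshape(-1, image.shape[2])
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
    class_map = labels.reshape(image.shape[:2])
    print(np.bincount(labels))   # pixel count per class
    ```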

  8. [Research on identification of cabbages and weeds combining spectral imaging technology and SAM taxonomy].

    PubMed

    Zu, Qin; Zhang, Shui-fa; Cao, Yang; Zhao, Hui-yi; Dang, Chang-qing

    2015-02-01

    Automatic weed identification is the key technique and also the bottleneck for the implementation of variable-rate spraying and precision pesticide application. Therefore, accurate, rapid and non-destructive automatic identification of weeds has become a very important research direction for precision agriculture. A hyperspectral imaging system was used to capture hyperspectral images of cabbage seedlings and five kinds of weeds, namely pigweed, barnyard grass, goosegrass, crabgrass and setaria, with wavelengths ranging from 1000 to 2500 nm. In ENVI, the MNF rotation was used to reduce noise and de-correlate the hyperspectral data and to reduce the number of bands from 256 to 11; regions of interest were extracted to obtain a spectral library of standard spectra; finally, the SAM taxonomy was used to identify cabbages and weeds, and the classification effect was good when the spectral angle threshold was set to 0.1 radians. In HSI Analyzer, after selecting the training pixels to obtain the standard spectrum, the SAM taxonomy was used to distinguish weeds from cabbages. Furthermore, in order to measure the recognition accuracy of weeds quantitatively, statistics for weeds and non-weeds were obtained by comparing the SAM classification image with the best classification effect to the manual classification image. The experimental results demonstrated that, when the parameters were set as 5-point smoothing, 0-order derivative and a 7-degree spectral angle, the best classification result was acquired and the recognition rates for weeds, non-weeds and overall samples were 80%, 97.3% and 96.8%, respectively. The method combining spectral imaging technology with the SAM taxonomy takes full advantage of the fused spectral and image information. By applying spatial classification algorithms to establish training sets for spectral identification, checking the similarity among spectral vectors at the pixel level, balancing accuracy and rapidity, and extending weed detection to the full field so that weeds can be detected both between and within crop rows, the method provides useful analysis tools for applications requiring accurate plant information in precision agricultural management.
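
    The core of the SAM taxonomy is the spectral angle between each pixel spectrum and a library spectrum; the sketch below computes it on a synthetic cube and applies the 0.1 rad threshold mentioned above (the band count and data are illustrative assumptions).

    ```python
    import numpy as np

    def spectral_angle(pixels, reference):
        """Spectral angle (radians) between each pixel spectrum and a reference
        spectrum, as used by the SAM classifier."""
        dot = pixels @ reference
        norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
        return np.arccos(np.clip(dot / norms, -1.0, 1.0))

    rng = np.random.default_rng(0)
    cube = rng.random((60, 80, 11))                 # e.g. 11 MNF bands
    reference = cube[10, 10, :]                     # library spectrum for one class
    angles = spectral_angle(cube.reshape(-1, 11), reference).reshape(60, 80)
    classified = angles < 0.1                       # 0.1 rad threshold from the study
    print(classified.sum(), "pixels assigned to the class")
    ```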

  9. Conifer health classification for Colorado, 2008

    USGS Publications Warehouse

    Cole, Christopher J.; Noble, Suzanne M.; Blauer, Steven L.; Friesen, Beverly A.; Curry, Stacy E.; Bauer, Mark A.

    2010-01-01

    Colorado has undergone substantial changes in forests due to urbanization, wildfires, insect-caused tree mortality, and other human and environmental factors. The U.S. Geological Survey Rocky Mountain Geographic Science Center evaluated and developed a methodology for applying remotely-sensed imagery for assessing conifer health in Colorado. Two classes were identified for the purposes of this study: healthy and unhealthy (for example, an area the size of a 30- x 30-m pixel with 20 percent or greater visibly dead trees was defined as "unhealthy"). Medium-resolution Landsat 5 Thematic Mapper imagery was collected. The normalized, reflectance-converted, cloud-filled Landsat scenes were merged to form a statewide image mosaic, and a Normalized Difference Vegetation Index (NDVI) and Renormalized Difference Infrared Index (RDII) were derived. A supervised maximum likelihood classification was done using the Landsat multispectral bands, the NDVI, the RDII, and the 30-m U.S. Geological Survey National Elevation Dataset (NED). The classification was constrained to pixels identified in the updated landcover dataset as coniferous or mixed coniferous/deciduous vegetation. The statewide results were merged with a separate health assessment of Grand County, Colo., produced in late 2008. Sampling and validation were done by collecting field data and high-resolution imagery. The 86 percent overall classification accuracy attained in this study suggests that the data and methods used successfully characterized conifer conditions within Colorado. Although forest conditions for Lodgepole Pine (Pinus contorta) are easily characterized, classification uncertainty exists between healthy/unhealthy Ponderosa Pine (Pinus ponderosa), Piñon (Pinus edulis), and Juniper (Juniperus sp.) vegetation. Some underestimation of conifer mortality in Summit County is likely, where recent (2008) cloud-free imagery was unavailable. These classification uncertainties are primarily due to the spatial and temporal resolution of Landsat, and of the NLCD derived from this sensor. It is believed that high- to moderate-resolution multispectral imagery, coupled with field data, could significantly reduce the uncertainty rates. The USGS produced a four-county follow-up conifer health assessment using high-resolution RapidEye remotely sensed imagery and field data collected in 2009.
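
    For reference, the NDVI used as a classification input above is the standard band ratio (NIR - Red)/(NIR + Red); the sketch below computes it from synthetic reflectance arrays standing in for Landsat 5 TM bands 4 and 3 (the RDII is not reproduced here).

    ```python
    import numpy as np

    def ndvi(red, nir, eps=1e-6):
        """Normalized Difference Vegetation Index from red and near-infrared
        reflectance (e.g. Landsat 5 TM bands 3 and 4)."""
        return (nir - red) / (nir + red + eps)

    rng = np.random.default_rng(0)
    red = rng.uniform(0.02, 0.2, size=(50, 50))
    nir = rng.uniform(0.2, 0.5, size=(50, 50))
    print(round(float(ndvi(red, nir).mean()), 3))
    ```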

  10. How a surgeon becomes superman by visualization of intelligently fused multi-modalities

    NASA Astrophysics Data System (ADS)

    Erat, Okan; Pauly, Olivier; Weidert, Simon; Thaller, Peter; Euler, Ekkehard; Mutschler, Wolf; Navab, Nassir; Fallavollita, Pascal

    2013-03-01

    Motivation: The existing visualization of the Camera augmented mobile C-arm (CamC) system does not provide enough depth cues and presents the anatomical information in a confusing way to surgeons. Methods: We propose a method that segments anatomical information from the X-ray and then augments it onto the video images. To provide depth cues, pixels in the video images are classified into skin and object classes. The augmentation of anatomical information from the X-ray is performed only for pixels with a higher probability of belonging to the skin class. Results: We tested our algorithm by displaying the new visualization to 2 expert surgeons and 1 medical student during three surgical workflow sequences of the interlocking of intramedullary nail procedure, namely: skin incision, center punching, and drilling. Via a survey questionnaire, they were asked to assess the new visualization compared to the current alpha-blending overlay image displayed by CamC. The participants all agreed (100%) that occlusion and instrument tip position detection were immediately improved with our technique. When asked if our visualization has the potential to replace the existing alpha-blending overlay during interlocking procedures, all participants did not hesitate to suggest an immediate integration of the visualization for the correct navigation and guidance of the procedure. Conclusion: Current alpha-blending visualizations lack proper depth cues and can be a source of confusion for surgeons when performing surgery. Our visualization concept shows great potential in alleviating occlusion and facilitating clinician understanding during specific workflow steps of the intramedullary nailing procedure.

  11. Quantitative assessment of ischemia and reactive hyperemia of the dermal layers using multi - spectral imaging on the human arm

    NASA Astrophysics Data System (ADS)

    Kainerstorfer, Jana M.; Amyot, Franck; Demos, Stavros G.; Hassan, Moinuddin; Chernomordik, Victor; Hitzenberger, Christoph K.; Gandjbakhche, Amir H.; Riley, Jason D.

    2009-07-01

    Quantitative assessment of skin chromophores in a non-invasive fashion is often desirable. Especially pixel wise assessment of blood volume and blood oxygenation is beneficial for improved diagnostics. We utilized a multi-spectral imaging system for acquiring diffuse reflectance images of healthy volunteers' lower forearm. Ischemia and reactive hyperemia was introduced by occluding the upper arm with a pressure cuff for 5 min at 180 mmHg. Multi-spectral images were taken every 30 s, before, during and after occlusion. Image reconstruction for blood volume and blood oxygenation was performed, using a two-layered skin model. As the images were taken in a non-contact way, strong artifacts related to the shape (curvature) of the arms were observed, making reconstruction of optical / physiological parameters highly inaccurate. We developed a curvature correction method, which is based on extracting the curvature directly from the intensity images acquired and does not require any additional measures on the object imaged. The effectiveness of the algorithm was demonstrated on reconstruction results of blood volume and blood oxygenation for in vivo data during occlusion of the arm. Pixel wise assessment of blood volume and blood oxygenation was made possible over the entire image area and comparison of occlusion effects between veins and surrounding skin was performed. Induced ischemia during occlusion and reactive hyperemia afterwards was observed and quantitatively assessed. Furthermore, the influence of epidermal thickness on reconstruction results was evaluated and the exact knowledge of this parameter for fully quantitative assessment was pointed out.

  12. [Scars, physiology, classification and assessment].

    PubMed

    Roques, Claude

    2013-01-01

    A skin scar is the sign of tissue repair following damage to the skin. Once formed, it follows a process of maturation which, after several months, results in a mature scar. This can be pathological with functional and/or aesthetic consequences. It is important to assess the scar as it matures in order to adapt the treatment to its evolution.

  13. A comparison of FIA plot data derived from image pixels and image objects

    Treesearch

    Charles E. Werstak

    2012-01-01

    The use of Forest Inventory and Analysis (FIA) plot data for producing continuous and thematic maps of forest attributes (e.g., forest type, canopy cover, volume, and biomass) at the regional level from satellite imagery can be challenging due to differences in scale. Specifically, classification errors that may result from assumptions made between what the field data...

  14. Monitoring urban tree cover using object-based image analysis and public domain remotely sensed data

    Treesearch

    L. Monika Moskal; Diane M. Styers; Meghan Halabisky

    2011-01-01

    Urban forest ecosystems provide a range of social and ecological services, but due to the heterogeneity of these canopies their spatial extent is difficult to quantify and monitor. Traditional per-pixel classification methods have been used to map urban canopies, however, such techniques are not generally appropriate for assessing these highly variable landscapes....

  15. Detection of Olea europaea subsp. cuspidata and Juniperus procera in the dry Afromontane forest of northern Ethiopia using subpixel analysis of Landsat imagery

    NASA Astrophysics Data System (ADS)

    Hishe, Hadgu; Giday, Kidane; Neka, Mulugeta; Soromessa, Teshome; Van Orshoven, Jos; Muys, Bart

    2015-01-01

    Comprehensive and less costly forest inventory approaches are required to monitor the spatiotemporal dynamics of key species in forest ecosystems. Subpixel analysis using the Earth Resources Data Analysis System (ERDAS) Imagine subpixel classification procedure was tested to extract Olea europaea subsp. cuspidata and Juniperus procera canopies from Landsat 7 Enhanced Thematic Mapper Plus imagery. Control points with various canopy area fractions of the target species were collected to develop signatures for each of the species. With these signatures, the Imagine subpixel classification procedure was run for each species independently. The subpixel process enabled the detection of O. europaea subsp. cuspidata and J. procera trees in pure and mixed pixels. A total of 100 pixels for each species were field verified. An overall accuracy of 85% was achieved for O. europaea subsp. cuspidata and 89% for J. procera. A high overall accuracy in detecting species in a natural forest was achieved, which encourages using the algorithm for future species monitoring activities. We recommend that the algorithm be validated in similar environments to enrich the knowledge of its capability and to ensure its wider usage.

  16. Clinical case of the month. A young man with a persistent skin eruption. Mycosis fungoides.

    PubMed

    Kendrick, Christina G; Gerdes, Michelle S; Lopez, Fred A; McBurney, Elizabeth I

    2004-01-01

    There are many types of skin disease that fit into the classification of cutaneous lymphoma, but mycosis fungoides is by far the most common of this group. It is a non-Hodgkin's lymphoma of T-cell origin that presents in the skin. Mycosis fungoides often evolves for years without a specific diagnosis because it can present as an eczematous or psoriasiform eruption. Patients identified in the early stages and treated appropriately have a normal life expectancy.

  17. Classification of large-scale fundus image data sets: a cloud-computing framework.

    PubMed

    Roychowdhury, Sohini

    2016-08-01

    Large medical image data sets with high dimensionality require a substantial amount of computation time for data creation and data processing. This paper presents a novel generalized method that finds optimal image-based feature sets that reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature ranking strategies are used to distinguish the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using the boosted decision tree and decision forest classifiers in the Microsoft Azure Machine Learning Studio platform, respectively. For images from the DIARETDB1 data set, 40 of its highest-ranked features are used to classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. Also, for classification of red lesion regions and hemorrhages from microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from the STARE data set, 40 high-ranked features can classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance the borderline classification performances in automated screening systems.

  18. Landsat Thematic Mapper studies of land cover spatial variability related to hydrology

    NASA Technical Reports Server (NTRS)

    Wharton, S.; Ormsby, J.; Salomonson, V.; Mulligan, P.

    1984-01-01

    Past accomplishments involving remote sensing based land-cover analysis for hydrologic applications are reviewed. Ongoing research in exploiting the increased spatial, radiometric, and spectral capabilities afforded by the TM on Landsats 4 and 5 is considered. Specific studies to compare MSS and TM for urbanizing watersheds, wetlands, and floodplain mapping situations show that only a modest improvement in classification accuracy is achieved via statistical per pixel multispectral classifiers. The limitations of current approaches to multispectral classification are illustrated. The objectives, background, and progress in the development of an alternative analysis approach for defining inputs to urban hydrologic models using TM are discussed.

  19. Landscape object-based analysis of wetland plant functional types: the effects of spatial scale, vegetation classes and classifier methods

    NASA Astrophysics Data System (ADS)

    Dronova, I.; Gong, P.; Wang, L.; Clinton, N.; Fu, W.; Qi, S.

    2011-12-01

    Remote sensing-based vegetation classifications representing plant function such as photosynthesis and productivity are challenging in wetlands with complex cover and difficult field access. Recent advances in object-based image analysis (OBIA) and machine-learning algorithms offer new classification tools; however, few comparisons of different algorithms and spatial scales have been discussed to date. We applied OBIA to delineate wetland plant functional types (PFTs) for Poyang Lake, the largest freshwater lake in China and Ramsar wetland conservation site, from 30-m Landsat TM scene at the peak of spring growing season. We targeted major PFTs (C3 grasses, C3 forbs and different types of C4 grasses and aquatic vegetation) that are both key players in system's biogeochemical cycles and critical providers of waterbird habitat. Classification results were compared among: a) several object segmentation scales (with average object sizes 900-9000 m2); b) several families of statistical classifiers (including Bayesian, Logistic, Neural Network, Decision Trees and Support Vector Machines) and c) two hierarchical levels of vegetation classification, a generalized 3-class set and more detailed 6-class set. We found that classification benefited from object-based approach which allowed including object shape, texture and context descriptors in classification. While a number of classifiers achieved high accuracy at the finest pixel-equivalent segmentation scale, the highest accuracies and best agreement among algorithms occurred at coarser object scales. No single classifier was consistently superior across all scales, although selected algorithms of Neural Network, Logistic and K-Nearest Neighbors families frequently provided the best discrimination of classes at different scales. The choice of vegetation categories also affected classification accuracy. The 6-class set allowed for higher individual class accuracies but lower overall accuracies than the 3-class set because individual classes differed in scales at which they were best discriminated from others. Main classification challenges included a) presence of C3 grasses in C4-grass areas, particularly following harvesting of C4 reeds and b) mixtures of emergent, floating and submerged aquatic plants at sub-object and sub-pixel scales. We conclude that OBIA with advanced statistical classifiers offers useful instruments for landscape vegetation analyses, and that spatial scale considerations are critical in mapping PFTs, while multi-scale comparisons can be used to guide class selection. Future work will further apply fuzzy classification and field-collected spectral data for PFT analysis and compare results with MODIS PFT products.

  20. Minimization of color shift generated in RGBW quad structure.

    NASA Astrophysics Data System (ADS)

    Kim, Hong Chul; Yun, Jae Kyeong; Baek, Heume-Il; Kim, Ki Duk; Oh, Eui Yeol; Chung, In Jae

    2005-03-01

    The purpose of RGBW Quad Structure Technology is to realize higher brightness than that of a normal panel (RGB stripe structure) by adding a white sub-pixel to the existing RGB stripe structure. However, there is a side effect called 'color shift' resulting from the increased brightness. This side effect degrades general color characteristics due to changes in 'Hue', 'Brightness' and 'Saturation' as compared with the existing RGB stripe structure. In particular, skin-tone colors show a tendency to get darker in contrast to the normal panel. We have tried to minimize 'color shift' through the use of an LUT (Look-Up Table) for linear arithmetic processing of input data, data bit expansion to 12 bits to minimize arithmetic tolerance, and a brightness weight of the white sub-pixel on each R, G, B pixel. The objective of this study is to minimize and keep the Δu'v' value (commonly used to represent a color difference), the quantitative basis of the color difference between the RGB stripe structure and the RGBW quad structure, below the 0.01 level (currently 0.02 or higher) using the Macbeth ColorChecker, which is a general reference for color characteristics.
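
    For reference, Δu'v' is the Euclidean distance between CIE 1976 u'v' chromaticity coordinates, with u' = 4X/(X + 15Y + 3Z) and v' = 9Y/(X + 15Y + 3Z); the tristimulus values in the sketch below are illustrative, not measured panel data.

    ```python
    import math

    def uv_prime(X, Y, Z):
        """CIE 1976 u'v' chromaticity coordinates from tristimulus values."""
        denom = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / denom, 9.0 * Y / denom

    def delta_uv(xyz_ref, xyz_test):
        """Colour difference delta-u'v' between a reference (e.g. RGB stripe)
        and a test (e.g. RGBW quad) rendering of the same patch."""
        u1, v1 = uv_prime(*xyz_ref)
        u2, v2 = uv_prime(*xyz_test)
        return math.hypot(u2 - u1, v2 - v1)

    # Illustrative tristimulus values for a skin-tone patch on both panel types
    print(round(delta_uv((41.2, 35.8, 22.9), (43.0, 36.5, 23.4)), 4))
    ```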

  1. Skin temperature evaluation by infrared thermography: Comparison of two image analysis methods during the nonsteady state induced by physical exercise

    NASA Astrophysics Data System (ADS)

    Formenti, Damiano; Ludwig, Nicola; Rossi, Alessio; Trecroci, Athos; Alberti, Giampietro; Gargano, Marco; Merla, Arcangelo; Ammer, Kurt; Caumo, Andrea

    2017-03-01

    The most common method to derive a temperature value from a thermal image in humans is the calculation of the average of the temperature values of all the pixels confined within a demarcated boundary defining a region of interest (ROI). Such a summary measure of skin temperature is denoted as Troi in this study. Recently, an alternative method for the derivation of skin temperature from the thermal image has been developed. This novel method (denoted as Tmax) is based on an automated (software-driven) selection of the warmest pixels within the ROI. Troi and Tmax have been compared under basal, steady-state conditions, proving to be very well correlated and characterized by a bias of approximately 1 °C (Tmax > Troi). The aim of this study was to investigate the relationship between Tmax and Troi under the nonsteady-state conditions induced by physical exercise. Thermal images of the quadriceps of 13 subjects performing a squat exercise were recorded for 120 s before (basal steady state) and for 480 s after the initiation of the exercise (nonsteady state). The thermal images were then analysed to extract Troi and Tmax. Troi and Tmax changed almost in parallel during the nonsteady state. At a closer inspection, it was found that during the nonsteady state the bias between the two methods slightly increased (from 0.7 to 1.1 °C) and the degree of association between them slightly decreased (from Pearson's r = 0.96 to 0.83). Troi and Tmax had different relationships with the skin temperature histogram. Whereas Troi was the mean, which could be interpreted as the centre of gravity of the histogram, Tmax was related to the extreme upper tail of the histogram. During the nonsteady state, the histogram increased its spread and became slightly more asymmetric. As a result, Troi deviated a little from the 50th percentile, while Tmax remained constantly higher than the 95th percentile. Despite their differences, Troi and Tmax showed a substantial agreement in assessing the changes in skin temperature following physical exercise. Further studies are needed to clarify the relationship existing among Tmax, Troi and cutaneous blood flow during physical exercise.
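
    The two summary measures can be sketched directly from a temperature array: Troi is the mean over all ROI pixels, while Tmax averages only the warmest pixels selected by software; the top-5% selection rule, ROI and temperatures below are assumptions for illustration, not the study's exact settings.

    ```python
    import numpy as np

    def troi_and_tmax(temps, roi_mask, top_fraction=0.05):
        """Troi: mean temperature of all ROI pixels. Tmax: mean of the warmest
        pixels in the ROI (here the top 5%, as one plausible selection rule)."""
        roi = temps[roi_mask]
        troi = roi.mean()
        k = max(1, int(round(top_fraction * roi.size)))
        tmax = np.sort(roi)[-k:].mean()
        return troi, tmax

    rng = np.random.default_rng(0)
    thermal = rng.normal(31.0, 0.6, size=(240, 320))   # degrees Celsius
    mask = np.zeros_like(thermal, dtype=bool)
    mask[80:160, 100:220] = True                        # quadriceps ROI
    print([round(v, 2) for v in troi_and_tmax(thermal, mask)])
    ```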

  2. High correlation of double Debye model parameters in skin cancer detection.

    PubMed

    Truong, Bao C Q; Tuan, H D; Fitzgerald, Anthony J; Wallace, Vincent P; Nguyen, H T

    2014-01-01

    The double Debye model can be used to capture the dielectric response of human skin in the terahertz regime owing to the high water content of the tissue. An increased water proportion is widely considered a biomarker of carcinogenesis, which motivates the use of this model in skin cancer detection. The goal of this paper is therefore to provide a specific analysis of the double Debye parameters for non-melanoma skin cancer classification. Pearson correlation is applied to investigate the sensitivity of these parameters, and of their combinations, to the variation in tumor percentage of skin samples. The most sensitive parameters are then assessed using receiver operating characteristic (ROC) plots to confirm their potential for distinguishing tumor from normal skin. Our positive outcomes support further steps toward the clinical application of terahertz imaging in skin cancer delineation.
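
    The double Debye model expresses the complex relative permittivity as two first-order relaxation terms, ε(ω) = ε∞ + (εs − ε2)/(1 + iωτ1) + (ε2 − ε∞)/(1 + iωτ2). The sketch below evaluates this expression over a terahertz frequency range; the parameter values are illustrative placeholders, not the fitted values reported for normal or cancerous skin.

    ```python
    # Hypothetical sketch of the double Debye dielectric model.
    import numpy as np

    def double_debye(freq_hz, eps_inf, eps_s, eps_2, tau1_s, tau2_s):
        """Complex permittivity: eps_inf + (eps_s-eps_2)/(1+jwt1) + (eps_2-eps_inf)/(1+jwt2)."""
        w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
        return (eps_inf
                + (eps_s - eps_2) / (1.0 + 1j * w * tau1_s)
                + (eps_2 - eps_inf) / (1.0 + 1j * w * tau2_s))

    freqs = np.linspace(0.1e12, 2.0e12, 5)                # 0.1-2 THz
    eps = double_debye(freqs, eps_inf=3.0, eps_s=60.0, eps_2=4.5,
                       tau1_s=10e-12, tau2_s=0.2e-12)      # placeholder parameters
    print(np.round(eps, 3))
    ```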

  3. Latest Laser and Light-Based Advances for Ethnic Skin Rejuvenation

    PubMed Central

    Elsaie, Mohamed Lotfy; Lloyd, Heather Woolery

    2008-01-01

    Background: Advances in nonablative skin rejuvenation technologies have sparked a renewed interest in the cosmetic treatment of aging skin. More options exist now than ever before to reverse cutaneous changes caused by long-term exposure to sunlight. Although Caucasian skin is more prone to ultraviolet light injury, ethnic skin (typically classified as types IV to VI) also exhibits characteristic photoaging changes. The widespread belief that inevitable or irreversible textural changes or dyspigmentation occur following laser- or light-based treatments has been challenged in recent years by new classes of devices capable of protecting the epidermis from injury during treatment. Objective: The purpose of this article is to review recent clinical advances in the treatment of photoaging changes in ethnic skin. This article provides a basis for the classification of current advances in the nonablative management of ethnic skin. PMID:19881986

  4. Promoting new concepts of skincare via skinomics and systems biology-From traditional skincare and efficacy-based skincare to precision skincare.

    PubMed

    Jiang, Biao; Jia, Yan; He, Congfen

    2018-05-11

    Traditional skincare involves the subjective classification of skin into 4 categories (oily, dry, mixed, and neutral) prior to skin treatment. Following the development of noninvasive skin measurement methods and skin imaging technology, scientists have developed efficacy-based skincare products based on the physiological characteristics of skin under different conditions. Currently, the emergence of skinomics and systems biology has facilitated the development of precision skincare. In this article, the evolution of skincare based on the physiological states of the skin (from traditional skincare and efficacy-based skincare to precision skincare) is described. In doing so, we highlight skinomics and systems biology, with particular emphasis on the importance of skin lipidomics and microbiomes in precision skincare. Emerging trends in precision skincare are also anticipated. © 2018 Wiley Periodicals, Inc.

  5. Validation study of the in vitro skin irritation test with the LabCyte EPI-MODEL24.

    PubMed

    Kojima, Hajime; Ando, Yoko; Idehara, Kenji; Katoh, Masakazu; Kosaka, Tadashi; Miyaoka, Etsuyoshi; Shinoda, Shinsuke; Suzuki, Tamie; Yamaguchi, Yoshihiro; Yoshimura, Isao; Yuasa, Atsuko; Watanabe, Yukihiko; Omori, Takashi

    2012-03-01

    A validation study of an in vitro skin irritation assay was performed with the reconstructed human epidermis (RhE) LabCyte EPI-MODEL24, developed by Japan Tissue Engineering Co. Ltd (Gamagori, Japan). The protocol followed in the current study was an optimised version of the EpiSkin protocol (LabCyte assay). According to the United Nations Globally Harmonised System (UN GHS) of classification for assessing the skin irritation potential of a chemical, 12 irritants and 13 non-irritants were evaluated by a minimum of six laboratories from the Japanese Society for Alternatives to Animal Experiments (JSAAE) skin irritation assay validation study management team (VMT). The 25 chemicals were listed in the European Centre for the Validation of Alternative Methods (ECVAM) performance standards. The reconstructed tissues were exposed to the chemicals for 15 minutes and incubated for 42 hours in fresh culture medium. Subsequently, the level of interleukin-1 alpha (IL-1α) present in the conditioned medium was measured, and tissue viability was assessed by using the MTT assay. The results of the MTT assay obtained with the LabCyte EPI-MODEL24 (LabCyte MTT assay) demonstrated high within-laboratory and between-laboratory reproducibility, as well as high accuracy for use as a stand-alone assay to distinguish skin irritants from non-irritants. In addition, the IL-1α release measurements in the LabCyte assay proved unnecessary for the successful classification of chemicals for skin irritation potential with this model. © 2012 FRAME.

  6. Analysis of hyperspectral fluorescence images for poultry skin tumor inspection

    NASA Astrophysics Data System (ADS)

    Kong, Seong G.; Chen, Yud-Ren; Kim, Intaek; Kim, Moon S.

    2004-02-01

    We present a hyperspectral fluorescence imaging system with a fuzzy inference scheme for detecting skin tumors on poultry carcasses. Hyperspectral images reveal spatial and spectral information useful for finding pathological lesions or contaminants on agricultural products. Skin tumors are not obvious because the visual signature appears as a shape distortion rather than a discoloration. Fluorescence imaging allows the visualization of poultry skin tumors more easily than reflectance. The hyperspectral image samples obtained for this poultry tumor inspection contain 65 spectral bands of fluorescence in the visible region of the spectrum at wavelengths ranging from 425 to 711 nm. The large amount of hyperspectral image data is compressed by use of a discrete wavelet transform in the spatial domain. Principal-component analysis provides an effective compressed representation of the spectral signal of each pixel in the spectral domain. A small number of significant features are extracted from two major spectral peaks of relative fluorescence intensity that have been identified as meaningful spectral bands for detecting tumors. A fuzzy inference scheme that uses a small number of fuzzy rules and Gaussian membership functions successfully detects skin tumors on poultry carcasses. Spatial-filtering techniques are used to significantly reduce false positives.
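
    A minimal sketch of the two analysis stages named above, spectral compression followed by fuzzy scoring, is given below. It uses PCA (rather than the paper's wavelet-plus-PCA pipeline) on synthetic data, and the feature centres, widths, and fuzzy rule are illustrative assumptions rather than the rules of the cited system.

    ```python
    # Hypothetical sketch: PCA compression of per-pixel spectra plus a Gaussian
    # fuzzy membership score for "tumor-like" pixels.
    import numpy as np

    rng = np.random.default_rng(1)
    cube = rng.random((64, 64, 65))            # synthetic cube: 65 bands, 425-711 nm

    # PCA in the spectral domain (pixels as samples, bands as variables).
    pixels = cube.reshape(-1, cube.shape[-1])
    mean = pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(pixels - mean, full_matrices=False)
    scores = (pixels - mean) @ vt[:2].T        # keep the two leading components

    def gaussian_membership(x, center, sigma):
        """Fuzzy membership degree in [0, 1]."""
        return np.exp(-0.5 * ((x - center) / sigma) ** 2)

    # Combine memberships of both features with a fuzzy AND (minimum).
    m1 = gaussian_membership(scores[:, 0], center=0.5, sigma=0.3)
    m2 = gaussian_membership(scores[:, 1], center=-0.2, sigma=0.3)
    tumor_score = np.minimum(m1, m2).reshape(64, 64)
    print(tumor_score.max())
    ```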

  7. Ag/alginate nanofiber membrane for flexible electronic skin

    NASA Astrophysics Data System (ADS)

    Hu, Wei-Peng; Zhang, Bin; Zhang, Jun; Luo, Wei-Ling; Guo, Ya; Chen, Shao-Juan; Yun, Mao-Jin; Ramakrishna, Seeram; Long, Yun-Ze

    2017-11-01

    Flexible electronic skin has attracted significant interest owing to its widespread applications in human-machine interaction, smart robots and health monitoring. For most pressure sensors used as elements of electronic skin, the fabrication processes combining nanomaterials and PDMS films are redundant, expensive and complicated, and their unknown biological toxicity limits wide use in electronic skin. Hence, we report a novel, cost-effective and antibacterial approach to immobilizing silver nanoparticles in electrospun Na-alginate nanofibers. Owing to the unique role of the carboxyl and hydroxyl groups in Na-alginate, silver nanoparticles about 30 nm in diameter were uniformly distributed inside and outside the alginate nanofibers, and the resulting pressure sensor shows a stable response, including an ultralow detection limit (1 Pa) and high durability (>1000 cycles). Notably, the pressure sensor fabricated from these Ag/alginate nanofibers can not only follow human respiration but also accurately distinguish words such as ‘Nano’ and ‘Perfect’ spoken by a tester. Interestingly, pixelated sensor arrays based on these Ag/alginate nanofibers can monitor the distribution of objects and reflect their weight by measuring the different current values. Moreover, these Ag/alginate nanofibers exhibit strong antibacterial activity, implying great potential for application in artificial electronic skin.

  8. Fully printed flexible fingerprint-like three-axis tactile and slip force and temperature sensors for artificial skin.

    PubMed

    Harada, Shingo; Kanao, Kenichiro; Yamamoto, Yuki; Arie, Takayuki; Akita, Seiji; Takei, Kuniharu

    2014-12-23

    A three-axis tactile force sensor that determines the touch and slip/friction force may advance artificial skin and robotic applications by fully imitating human skin. The ability to detect slip/friction and tactile forces simultaneously allows unknown objects to be held in robotic applications. However, the functionalities of flexible devices have been limited to a tactile force in one direction due to difficulties fabricating devices on flexible substrates. Here we demonstrate a fully printed fingerprint-like three-axis tactile force and temperature sensor for artificial skin applications. To achieve economic macroscale devices, these sensors are fabricated and integrated using only printing methods. Strain engineering enables the strain distribution to be detected upon applying a slip/friction force. By reading the strain difference at four integrated force sensors for a pixel, both the tactile and slip/friction forces can be analyzed simultaneously. As a proof of concept, the high sensitivity and selectivity for both force and temperature are demonstrated using a 3×3 array artificial skin that senses tactile, slip/friction, and temperature. Multifunctional sensing components for a flexible device are important advances for both practical applications and basic research in flexible electronics.
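
    The read-out principle described here, inferring both the normal (tactile) force and the in-plane (slip/friction) force from the strain differences across the four sensors surrounding one pixel, can be sketched as follows. The sum/difference mapping and the calibration constants are illustrative assumptions, not the calibration of the printed sensor itself.

    ```python
    # Hypothetical sketch: decomposing one pixel's four strain readings into
    # a normal force and two shear (slip/friction) components.
    from dataclasses import dataclass

    @dataclass
    class PixelReadout:
        north: float  # strain readings of the four sensors around one pixel
        south: float
        east: float
        west: float

    K_NORMAL = 0.25   # hypothetical calibration constants (force per unit strain)
    K_SHEAR = 0.50

    def decompose(r: PixelReadout):
        """Return (normal, shear_x, shear_y) forces for one pixel."""
        normal = K_NORMAL * (r.north + r.south + r.east + r.west)
        shear_x = K_SHEAR * (r.east - r.west)    # friction/slip along x
        shear_y = K_SHEAR * (r.north - r.south)  # friction/slip along y
        return normal, shear_x, shear_y

    print(decompose(PixelReadout(north=1.2, south=1.0, east=1.4, west=0.9)))
    ```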

  9. An algorithm for improving the quality of structural images of turbid media in endoscopic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Potlov, A. Yu.; Frolov, S. V.; Proskurin, S. G.

    2018-04-01

    A high-quality structural-image reconstruction algorithm for endoscopic optical coherence tomography of biological tissue is described. The key features of the presented algorithm are: (1) raster scanning with averaging of adjacent A-scans and pixels; (2) minimization of the speckle level. The described algorithm can be used in gastroenterology, urology, gynecology and otorhinolaryngology for diagnostics of mucous membranes and skin in vivo and in situ.
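
    A minimal sketch of the first key feature, averaging adjacent A-scans to suppress speckle, is shown below; the boxcar window width is an illustrative assumption, and the speckle-minimization step of the actual algorithm is not reproduced.

    ```python
    # Hypothetical sketch: moving average over adjacent A-scans in a B-scan.
    import numpy as np

    def average_adjacent_ascans(bscan, window=3):
        """bscan: 2-D array (depth x lateral A-scan index). Moving average along
        the lateral axis with a boxcar of `window` A-scans."""
        pad = window // 2
        padded = np.pad(bscan, ((0, 0), (pad, pad)), mode="edge")
        kernel = np.ones(window) / window
        return np.apply_along_axis(lambda a: np.convolve(a, kernel, mode="valid"),
                                   axis=1, arr=padded)

    rng = np.random.default_rng(2)
    bscan = rng.random((512, 1000))                 # synthetic noisy B-scan
    smoothed = average_adjacent_ascans(bscan, window=5)
    print(bscan.std(), smoothed.std())              # speckle level drops
    ```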

  10. UNMANNED AERIAL VEHICLE (UAV) HYPERSPECTRAL REMOTE SENSING FOR DRYLAND VEGETATION MONITORING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nancy F. Glenn; Jessica J. Mitchell; Matthew O. Anderson

    2012-06-01

    UAV-based hyperspectral remote sensing capabilities developed by the Idaho National Lab and Idaho State University, Boise Center Aerospace Lab, were recently tested via demonstration flights that explored the influence of altitude on geometric error, image mosaicking, and dryland vegetation classification. The test flights successfully acquired usable flightline data capable of supporting classifiable composite images. Unsupervised classification results support vegetation management objectives that rely on mapping shrub cover and distribution patterns. Overall, supervised classifications performed poorly despite spectral separability in the image-derived endmember pixels. Future mapping efforts that leverage ground reference data, ultra-high spatial resolution photos and time series analysis should be able to effectively distinguish native grasses such as Sandberg bluegrass (Poa secunda) from invasives such as burr buttercup (Ranunculus testiculatus) and cheatgrass (Bromus tectorum).

  11. Automatic differentiation of melanoma and clark nevus skin lesions

    NASA Astrophysics Data System (ADS)

    LeAnder, R. W.; Kasture, A.; Pandey, A.; Umbaugh, S. E.

    2007-03-01

    Skin cancer is the most common form of cancer in the United States. Although melanoma accounts for just 11% of all types of skin cancer, it is responsible for most of the deaths, claiming more than 7910 lives annually. Melanoma is visually difficult for clinicians to differentiate from Clark nevus lesions, which are benign. The application of pattern recognition techniques to these lesions may be useful as an educational tool for teaching physicians to differentiate lesions, as well as for contributing information about the essential optical characteristics that identify them. Purpose: This study sought to find the most effective features to extract from melanoma, melanoma in situ and Clark nevus lesions, and to find the most effective pattern-classification criteria and algorithms for differentiating those lesions, using the Computer Vision and Image Processing Tools (CVIPtools) software package. Methods: Due to changes in ambient lighting during the photographic process, color differences between images can occur. These differences were minimized by capturing dermoscopic images instead of photographic images. Differences in skin color between patients were minimized via image color normalization, by converting original color images to relative-color images. Relative-color images also helped minimize changes in color that occur due to changes in the photographic and digitization processes. Tumors in the relative-color images were segmented and morphologically filtered. Filtered, relative-color tumor features were then extracted and various pattern-classification schemes were applied. Results: Experimentation resulted in four useful pattern-classification methods, the best of which achieved an overall classification rate of 100% for melanoma and melanoma in situ (grouped) and 60% for Clark nevus. Conclusion: Melanoma and melanoma in situ have feature parameters and feature values that are similar enough to be considered one class of tumor that significantly differs from Clark nevus. Consequently, grouping melanoma and melanoma in situ together achieves the best results in classifying and automatically differentiating melanoma from Clark nevus lesions.
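
    One common way to form a relative-color image is to express each pixel's RGB value relative to the average color of the surrounding normal skin, which removes much of the patient-to-patient and lighting-to-lighting variation. The sketch below uses that particular definition as an illustrative assumption; the exact relative-color formulation implemented in CVIPtools is not reproduced here.

    ```python
    # Hypothetical sketch: relative-color conversion by subtracting the mean
    # background-skin color from every pixel.
    import numpy as np

    def to_relative_color(image, lesion_mask):
        """image: H x W x 3 float array; lesion_mask: boolean H x W (True = lesion).
        Returns the image with the mean background-skin color subtracted."""
        skin_mean = image[~lesion_mask].mean(axis=0)       # average normal-skin RGB
        return image - skin_mean                           # relative color per pixel

    rng = np.random.default_rng(3)
    img = rng.random((128, 128, 3))
    mask = np.zeros((128, 128), dtype=bool)
    mask[40:90, 40:90] = True                              # crude lesion region
    rel = to_relative_color(img, mask)
    print(rel[~mask].mean(axis=0))                         # ~0: background centred
    ```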

  12. Single-Dose Oritavancin Treatment of Acute Bacterial Skin and Skin Structure Infections: SOLO Trial Efficacy by Eron Severity and Management Setting.

    PubMed

    Deck, Daniel H; Jordan, Jennifer M; Holland, Thomas L; Fan, Weihong; Wikler, Matthew A; Sulham, Katherine A; Ralph Corey, G

    2016-09-01

    Introduction of new antibiotics enabling single-dose administration, such as oritavancin, may significantly impact site-of-care decisions for patients with acute bacterial skin and skin structure infections (ABSSSI). This analysis compared the efficacy of single-dose oritavancin with multiple-dose vancomycin in patients categorized according to disease severity via a modified Eron classification and management setting. SOLO I and II were phase 3 studies evaluating single-dose oritavancin versus 7-10 days of vancomycin for treatment of ABSSSI. Patient characteristics were collected at baseline and retrospectively analyzed. Study protocols were amended, allowing outpatient management at the discretion of investigators. In this post hoc analysis, patients were categorized according to a modified Eron severity classification and management setting (outpatient vs. inpatient) and the efficacy compared. Overall, 1910 patients in the SOLO trials were categorized into Class I (520, 26.5%), II (790, 40.3%), and III (600, 30.6%). Of the 767 patients (40%) in the SOLO trials who were managed entirely in the outpatient setting, 40.3% were categorized as Class II and 30.6% as Class III. Clinical efficacy was similar between the oritavancin and vancomycin treatment groups, regardless of severity classification and across inpatient and outpatient settings. Class III patients had lower response rates (oritavancin 73.3%, vancomycin 76.6%) at the early clinical evaluation when compared to patients in Class I (82.6%) or II (86.1%); however, clinical cure rates at the post-therapy evaluation were similar for Class III patients (oritavancin 79.8%, vancomycin 79.9%) when compared to Class I and II patients (79.1-85.7%). Single-dose oritavancin therapy results in efficacy comparable to multiple-dose vancomycin in patients categorized according to the modified Eron disease severity classification, regardless of whether management occurred in the inpatient or outpatient setting. The Medicines Company, Parsippany, NJ, USA. ClinicalTrials.gov identifiers: NCT01252719 (SOLO I) and NCT01252732 (SOLO II).

  13. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies

    NASA Astrophysics Data System (ADS)

    Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad

    2018-06-01

    Unwanted contrast in high-resolution satellite images, such as shadow areas, directly affects the results of further processing of urban remote sensing images. Detecting and precisely locating shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method, a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index that resolves the ambiguity between shadows and water bodies. The pixel-based results are then further processed in an object-based majority analysis to detect the final shadow objects. Four different high-resolution satellite images are used to validate this new approach. The results show the superiority of the proposed method over some state-of-the-art shadow detection methods, with an average F-measure of 96%.
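
    The second stage, the object-based majority analysis, can be sketched as follows: given a pixel-level shadow mask and an image segmentation, each object is kept as shadow only if the majority of its pixels are flagged. The per-pixel score and the block segmentation used here are placeholders, and the C4 index itself is not reproduced.

    ```python
    # Hypothetical sketch: object-based majority voting on a pixel-level shadow mask.
    import numpy as np

    def object_majority(shadow_mask, segments, threshold=0.5):
        """shadow_mask: boolean H x W; segments: integer H x W object labels.
        Returns a boolean H x W mask of shadow objects."""
        out = np.zeros_like(shadow_mask)
        for label in np.unique(segments):
            pixels = segments == label
            if shadow_mask[pixels].mean() > threshold:   # majority vote per object
                out |= pixels
        return out

    rng = np.random.default_rng(4)
    score = rng.random((60, 60))                 # placeholder per-pixel shadow score
    pixel_mask = score > 0.7                     # thresholded pixel-level mask
    segments = (np.arange(60)[:, None] // 20) * 3 + np.arange(60)[None, :] // 20
    print(object_majority(pixel_mask, segments).sum())
    ```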

  14. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded by a low-resolution hyperspectral remote sensor for a given pixel, leaving aside the effects of complex terrain, is a mixture of signals from several substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is a frontier research topic in remote sensing. Geometry-based unmixing algorithms have become popular because hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most algorithms rely on the pure-pixel assumption, and since the nonlinear mixing model is complex, it is hard to obtain optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By taking the abundance fractions into account, the pure endmember set and the corresponding abundance fractions can be obtained jointly, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data, and the results indicate that the proposed method can obtain the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
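
    Underlying all of these geometric approaches is the linear mixing model, y ≈ Ea with a ≥ 0 and Σa = 1, where E holds the endmember spectra and a the abundance fractions for one pixel. The sketch below inverts that model with a sum-to-one augmented least squares followed by clipping; this is a crude stand-in for illustration only, not the MVSAC algorithm, and all data are synthetic.

    ```python
    # Hypothetical sketch: abundance estimation under the linear mixing model.
    import numpy as np

    def estimate_abundances(y, endmembers, delta=1e3):
        """y: (bands,) pixel spectrum; endmembers: (bands, p) matrix.
        Returns an approximate abundance vector of length p."""
        bands, p = endmembers.shape
        # Augment with a heavily weighted row enforcing the sum-to-one constraint.
        e_aug = np.vstack([endmembers, delta * np.ones((1, p))])
        y_aug = np.append(y, delta)
        a, *_ = np.linalg.lstsq(e_aug, y_aug, rcond=None)
        a = np.clip(a, 0.0, None)               # crude nonnegativity fix
        return a / a.sum()

    rng = np.random.default_rng(5)
    E = rng.random((50, 3))                      # 3 synthetic endmembers, 50 bands
    a_true = np.array([0.6, 0.3, 0.1])
    y = E @ a_true + rng.normal(0, 0.005, 50)    # mixed pixel with noise
    print(np.round(estimate_abundances(y, E), 3), a_true)
    ```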

  15. Change detection from synthetic aperture radar images based on neighborhood-based ratio and extreme learning machine

    NASA Astrophysics Data System (ADS)

    Gao, Feng; Dong, Junyu; Li, Bo; Xu, Qizhi; Xie, Cui

    2016-10-01

    Change detection is of high practical value to hazard assessment, crop growth monitoring, and urban sprawl detection. A synthetic aperture radar (SAR) image is an ideal information source for performing change detection since it is independent of atmospheric and sunlight conditions. Existing SAR image change detection methods usually generate a difference image (DI) first and use clustering methods to classify the pixels of the DI into changed and unchanged classes. Some useful information may be lost in the DI generation process. This paper proposes an SAR image change detection method based on the neighborhood-based ratio (NR) and the extreme learning machine (ELM). The NR operator is used to obtain pixels of interest that have a high probability of being changed or unchanged. Then, image patches centered at these pixels are generated, and an ELM is employed to train a model using these patches. Finally, pixels in both original SAR images are classified by the pretrained ELM model. The preclassification result and the ELM classification result are combined to form the final change map. The experimental results obtained on three real SAR image datasets and one simulated dataset show that the proposed method is robust to speckle noise and is effective for detecting change information among multitemporal SAR images.
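
    A neighborhood-based difference image can be sketched with a log-ratio of local window means over the two co-registered intensity images, which damps the effect of speckle compared with a per-pixel ratio. This generic stand-in is given below on synthetic data; the exact NR operator defined in the paper is not reproduced.

    ```python
    # Hypothetical sketch: neighborhood log-ratio difference image for two SAR scenes.
    import numpy as np

    def local_mean(img, radius=1):
        """Box-filter mean over a (2r+1) x (2r+1) neighborhood via edge padding."""
        size = 2 * radius + 1
        padded = np.pad(img, radius, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(size):
            for dx in range(size):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (size * size)

    def neighborhood_log_ratio(img1, img2, radius=1, eps=1e-6):
        """Difference image robust to speckle: log-ratio of neighborhood means."""
        m1 = local_mean(img1, radius) + eps
        m2 = local_mean(img2, radius) + eps
        return np.abs(np.log(m1 / m2))

    rng = np.random.default_rng(6)
    t1 = rng.gamma(4.0, 25.0, size=(100, 100))      # synthetic speckled scenes
    t2 = t1.copy()
    t2[40:60, 40:60] *= 3.0                         # simulated change region
    di = neighborhood_log_ratio(t1, t2)
    print((di > 0.5).sum())                          # pixels flagged as changed
    ```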

  16. Novel approach for image skeleton and distance transformation parallel algorithms

    NASA Astrophysics Data System (ADS)

    Qing, Kent P.; Means, Robert W.

    1994-05-01

    Image Understanding is more important in medical imaging than ever, particularly where real-time automatic inspection, screening and classification systems are installed. Skeleton and distance transformations are among the common operations that extract useful information from binary images and aid in Image Understanding. The distance transformation describes the objects in an image by labeling every pixel in each object with the distance to its nearest boundary. The skeleton algorithm starts from the distance transformation and finds the set of pixels that have a locally maximum label. The distance algorithm has to scan the entire image several times, depending on the object width. For each pixel, the algorithm must access the neighboring pixels and find the maximum distance from the nearest boundary. It is a computationally and memory-access-intensive procedure. In this paper, we propose a novel parallel approach to the distance transform and skeleton algorithms using the latest VLSI high-speed convolutional chips such as HNC's ViP. The algorithm speed depends on the object's width and takes (k + [(k-1)/3]) * 7 milliseconds for a 512 × 512 image, with k being the maximum distance of the largest object. All objects in the image are skeletonized at the same time in parallel.
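
    For reference, the sequential (textbook) versions of the two operations are sketched below: a two-pass city-block distance transform over a binary image, and a skeleton taken as the pixels whose distance label is a local maximum. This is a minimal serial sketch, not the parallel convolutional-chip formulation proposed in the paper.

    ```python
    # Hypothetical sketch: two-pass L1 distance transform and local-maxima skeleton.
    import numpy as np

    def distance_transform(binary):
        """binary: 2-D array, nonzero = object. Returns city-block distance to the
        nearest background pixel."""
        h, w = binary.shape
        inf = h + w
        d = np.where(binary > 0, inf, 0).astype(int)
        for y in range(h):                      # forward raster pass
            for x in range(w):
                if y > 0:
                    d[y, x] = min(d[y, x], d[y - 1, x] + 1)
                if x > 0:
                    d[y, x] = min(d[y, x], d[y, x - 1] + 1)
        for y in range(h - 1, -1, -1):          # backward raster pass
            for x in range(w - 1, -1, -1):
                if y < h - 1:
                    d[y, x] = min(d[y, x], d[y + 1, x] + 1)
                if x < w - 1:
                    d[y, x] = min(d[y, x], d[y, x + 1] + 1)
        return d

    def skeleton(d):
        """Pixels whose distance label is a local maximum among 4-neighbors."""
        padded = np.pad(d, 1, mode="constant")
        neighbors = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                              padded[1:-1, :-2], padded[1:-1, 2:]])
        return (d > 0) & (d >= neighbors.max(axis=0))

    img = np.zeros((9, 9), dtype=int)
    img[2:7, 2:7] = 1                           # a 5x5 square object
    d = distance_transform(img)
    print(d[4, 4], skeleton(d).sum())           # center carries the maximum label
    ```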

  17. A comparison of all-weather land surface temperature products

    NASA Astrophysics Data System (ADS)

    Martins, Joao; Trigo, Isabel F.; Ghilain, Nicolas; Goettche, Frank-M.; Ermida, Sofia; Olesen, Folke-S.; Gellens-Meulenberghs, Françoise; Arboleda, Alirio

    2017-04-01

    The Satellite Application Facility on Land Surface Analysis (LSA-SAF, http://landsaf.ipma.pt) has been providing land surface temperature (LST) estimates using SEVIRI/MSG on an operational basis since 2006. The LSA-SAF service has since been extended to provide a wide range of satellite-based quantities over land surfaces, such as emissivity, albedo, radiative fluxes, vegetation state, evapotranspiration, and fire-related variables. Being based on infra-red measurements, the SEVIRI/MSG LST product is limited to clear-sky pixels only. Several all-weather LST products have been proposed by the scientific community, either based on microwave observations or using Soil-Vegetation-Atmosphere Transfer (SVAT) models to fill the gaps caused by clouds. The goal of this work is to provide a nearly gap-free operational all-weather LST product and to compare these approaches. In order to estimate evapotranspiration and turbulent energy fluxes, the LSA-SAF solves the surface energy budget for each SEVIRI pixel, taking into account the physical and physiological processes occurring in vegetation canopies. This task is accomplished with an adapted SVAT model, which adopts some formulations and parameters of the Tiled ECMWF Scheme for Surface Exchanges over Land (TESSEL) model operated at the European Centre for Medium-Range Weather Forecasts (ECMWF), and uses: 1) radiative inputs also derived by the LSA-SAF, which include surface albedo, down-welling fluxes and fire radiative power; 2) a land-surface characterization obtained by combining the ECOCLIMAP database with both LSA-SAF vegetation products and the H(ydrology)-SAF snow mask; 3) meteorological fields from ECMWF forecasts interpolated to SEVIRI pixels; and 4) soil moisture derived by the H-SAF and LST from the LSA-SAF. A byproduct of the SVAT model is the surface skin temperature, which is needed to close the surface energy balance. The model skin temperature corresponds to the radiative temperature of the interface between soil and atmosphere, which is assumed to have no heat storage. The modelled skin temperatures are in fair agreement with LST directly estimated from SEVIRI observations. However, in contrast to LST retrievals from SEVIRI/MSG (or other infrared sensors), the SVAT model solves the energy budget equation under all-sky conditions. The SVAT surface skin temperature is then used to fill gaps in LST fields caused by clouds. Since under cloudy conditions the direct incoming solar radiation is greatly reduced, thermal balance at the surface is more easily achieved and directional effects are also less important. Therefore, a better performance of the model skin temperature may be expected. In contrast, under clear skies the satellite LST proved to be more reliable, since the SVAT model shows biases in the daily amplitude of the skin temperature. In the context of the GlobTemperature project (http://www.globtemperature.info/), all-weather LST datasets using AMSR-E microwave radiances were produced, which are compared here to the SVAT-based LST. Both products were validated against in situ data, particularly from Gobabeb and Farm Heimat (Namibia) and Évora (Portugal), showing that under cloudy conditions the agreement between in situ LST and modelled skin temperature is acceptable. Compared to the SVAT-based LST, the AMSR-E LST is closer to satellite observations (level 2 product); the complementarity of the two approaches is assessed.

  18. Evaluating an ensemble classification approach for crop diversity verification in Danish greening subsidy control

    NASA Astrophysics Data System (ADS)

    Chellasamy, Menaka; Ferré, Ty Paul Andrew; Greve, Mogens Humlekrog

    2016-07-01

    Beginning in 2015, Danish farmers are obliged to meet specific crop diversification rules, based on total land area and the number of crops cultivated, to be eligible for new greening subsidies. Hence, there is a need for the Danish government to extend their subsidy control system to verify farmers' declarations to warrant greening payments under the new crop diversification rules. Remote Sensing (RS) technology has been used since 1992 to control farmers' subsidies in Denmark. However, a proper RS-based approach is yet to be finalised to validate the new crop diversity requirements designed for assessing compliance under the recent subsidy scheme (2014-2020). This study uses an ensemble classification approach (proposed by the authors in previous studies) for validating the crop diversity requirements of the new rules. The approach uses a neural network ensemble classification system with bi-temporal (spring and early summer) WorldView-2 imagery (WV2) and includes the following steps: (1) automatic computation of pixel-based prediction probabilities using multiple neural networks; (2) quantification of the classification uncertainty using Endorsement Theory (ET); (3) discrimination of crop pixels and validation of the crop diversification rules at farm level; and (4) identification of farmers who are violating the requirements for greening subsidies. The prediction probabilities are computed by a neural network ensemble supplied with training samples selected automatically from the farmers' declared parcels (field vectors containing crop information and the field boundary of each crop); a sketch of the ensemble combination step is given after this abstract. Crop discrimination is performed by considering a set of conclusions derived from the individual neural networks based on ET. Verification of the diversification rules is performed by incorporating pixel-based classification uncertainty, or confidence intervals, with the class labels at the farmer level. The proposed approach was tested with WV2 imagery acquired in 2011 for a study area in Vennebjerg, Denmark, containing 132 farmers, 1258 fields, and 18 crops. The classification results obtained show an overall accuracy of 90.2%. The RS-based results suggest that 36 farmers did not follow the crop diversification rules that would qualify for the greening subsidies. When compared to the farmers' reported crop mixes, irrespective of the rule, the RS results indicate that false crop declarations were made by 8 farmers, covering 15 fields. If the farmers' reports had been submitted for the new greening subsidies, 3 farmers would have made a false claim, while the remaining 5 farmers obeyed the required crop-proportion rules even though they submitted a false crop code, owing to their small holding size. The RS results would have supported 96 farmers for greening subsidy claims, with no instances of suggesting a greening subsidy for a holding that the farmer did not report as meeting the required conditions. These results suggest that the proposed RS-based method shows great promise for validating the new greening subsidies in Denmark.
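
    The sketch below illustrates the general idea of combining per-pixel class probabilities from an ensemble of classifiers and attaching a simple uncertainty measure. The Endorsement Theory combination used in the study is not reproduced; a mean of member probabilities with their disagreement stands in for it, and all data are synthetic.

    ```python
    # Hypothetical sketch: ensemble probability fusion with a disagreement measure.
    import numpy as np

    def ensemble_decision(member_probs):
        """member_probs: (n_members, n_pixels, n_classes) probabilities.
        Returns (labels, confidence, disagreement) per pixel."""
        mean_probs = member_probs.mean(axis=0)
        labels = mean_probs.argmax(axis=1)
        confidence = mean_probs.max(axis=1)
        # Disagreement: fraction of members whose own argmax differs from the ensemble.
        member_votes = member_probs.argmax(axis=2)
        disagreement = (member_votes != labels).mean(axis=0)
        return labels, confidence, disagreement

    rng = np.random.default_rng(7)
    probs = rng.dirichlet(np.ones(18), size=(5, 1000))   # 5 nets, 1000 pixels, 18 crops
    labels, conf, dis = ensemble_decision(probs)
    print(labels[:5], conf[:5].round(2), dis[:5].round(2))
    ```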

  19. Classification of Informal Settlements Through the Integration of 2D and 3D Features Extracted from UAV Data

    NASA Astrophysics Data System (ADS)

    Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  20. "Relative CIR": an image enhancement and visualization technique

    USGS Publications Warehouse

    Fleming, Michael D.

    1993-01-01

    Many techniques exist to spectrally and spatially enhance digital multispectral scanner data. One technique enhances an image while keeping the colors as they would appear in a color-infrared (CIR) image. This "relative CIR" technique generates an image that is both spectrally and spatially enhanced, while displaying a maximum range of colors. The technique enables an interpreter to visualize either spectral or land cover classes by their relative CIR characteristics. A relative CIR image is generated by developing spectral statistics for each class in the classification and then, using a nonparametric approach for spectral enhancement, ranking the class means for each band. A 3 by 3 pixel smoothing filter is applied to the classification for spatial enhancement, and the classes are mapped to the representative rank for each band. Practical applications of the technique include displaying an image classification product as a CIR image that was not derived directly from a spectral image, visualizing how a land cover classification would look as a CIR image, and displaying a spectral classification or intermediate product that will be used to label spectral classes.
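
    The core mapping step, ranking each class's mean value per band and displaying the ranks in place of the class labels, can be sketched as follows; the band ordering and the 0-255 scaling are illustrative assumptions, and the 3 by 3 spatial smoothing step is omitted.

    ```python
    # Hypothetical sketch of the "relative CIR" mapping from class labels to
    # per-band rank colors.
    import numpy as np

    def relative_cir(class_map, class_means):
        """class_map: H x W integer class labels (0..K-1).
        class_means: K x 3 mean values per class for the NIR, red, green bands.
        Returns an H x W x 3 image of per-band ranks scaled to 0-255."""
        k = class_means.shape[0]
        ranks = class_means.argsort(axis=0).argsort(axis=0)   # rank of each class per band
        scaled = (255.0 * ranks / max(k - 1, 1)).astype(np.uint8)
        return scaled[class_map]                               # map labels to rank colors

    rng = np.random.default_rng(8)
    class_map = rng.integers(0, 6, size=(50, 50))              # 6 spectral classes
    class_means = rng.random((6, 3))                           # per-class band means
    print(relative_cir(class_map, class_means).shape)          # (50, 50, 3)
    ```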
