Sample records for phenotype image classification

  1. Hierarchical classification strategy for Phenotype extraction from epidermal growth factor receptor endocytosis screening.

    PubMed

    Cao, Lu; Graauw, Marjo de; Yan, Kuan; Winkel, Leah; Verbeek, Fons J

    2016-05-03

    Endocytosis is regarded as a mechanism of attenuating epidermal growth factor receptor (EGFR) signalling and of receptor degradation. Increasing evidence shows that breast cancer progression is associated with a defect in EGFR endocytosis. In order to find related ribonucleic acid (RNA) regulators in this process, high-throughput imaging with fluorescent markers is used to visualize the complex EGFR endocytosis process. Subsequently, a dedicated automatic image and data analysis system is developed and applied to extract phenotype measurements and to distinguish different developmental episodes from the large number of images acquired through high-throughput imaging. In the image analysis, a phenotype measurement quantifies the important image information into distinct features or measurements. The manner in which prominent measurements are chosen to represent the dynamics of the EGFR process is therefore a crucial step in identifying the phenotype. In the subsequent data analysis, classification is used to categorize each observation using all prominent measurements obtained from image analysis. A well-constructed classification strategy therefore raises the performance of the image and data analysis system. In this paper, we illustrate an integrated analysis method for EGFR signalling through image analysis of microscopy images. Sophisticated wavelet-based texture measurements are used to obtain a good description of the characteristic stages in EGFR signalling. A hierarchical classification strategy is designed to improve the recognition of phenotypic episodes of EGFR during endocytosis. Different strategies for normalization, feature selection and classification are evaluated.
The results of the performance assessment clearly demonstrate that our hierarchical classification scheme, combined with a selected set of features, provides a notable improvement in the temporal analysis of EGFR endocytosis. Moreover, the addition of the wavelet-based texture features is shown to contribute to this improvement. Our workflow can be applied in drug discovery to analyze defective EGFR endocytosis processes.
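
    The two-stage idea behind such a hierarchical strategy can be sketched as follows. This is a minimal nearest-centroid illustration on invented toy features and labels, not the authors' pipeline (their stages, features and per-stage classifiers differ): a first classifier picks the coarse episode, and a second classifier refines the call within that episode.

```python
# Minimal two-stage hierarchical classifier sketch (hypothetical features and
# labels, not the paper's implementation). Stage 1 separates coarse episodes;
# stage 2 refines within the chosen episode, both via nearest centroids.

def centroid(rows):
    # Component-wise mean of a list of feature vectors.
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest(x, centroids):
    # Label whose centroid is closest in squared Euclidean distance.
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, centroids[lab])))

def train_hierarchy(samples):
    # samples: list of (features, coarse_label, fine_label).
    coarse, fine = {}, {}
    for feats, c, f in samples:
        coarse.setdefault(c, []).append(feats)
        fine.setdefault(c, {}).setdefault(f, []).append(feats)
    coarse_centroids = {c: centroid(rows) for c, rows in coarse.items()}
    fine_centroids = {c: {f: centroid(rows) for f, rows in d.items()}
                      for c, d in fine.items()}
    return coarse_centroids, fine_centroids

def predict(x, coarse_centroids, fine_centroids):
    c = nearest(x, coarse_centroids)            # stage 1: coarse episode
    return c, nearest(x, fine_centroids[c])     # stage 2: refine within it

# Toy data: two coarse episodes ("early"/"late"), each with two fine phenotypes.
data = [
    ([0.0, 0.1], "early", "e1"), ([0.1, 0.0], "early", "e1"),
    ([0.0, 1.0], "early", "e2"), ([0.1, 0.9], "early", "e2"),
    ([1.0, 0.0], "late", "l1"), ([0.9, 0.1], "late", "l1"),
    ([1.0, 1.0], "late", "l2"), ([0.9, 0.9], "late", "l2"),
]
cc, fc = train_hierarchy(data)
print(predict([0.05, 0.95], cc, fc))  # ('early', 'e2')
```

    Splitting the decision this way lets each stage use only the features that discriminate at its level, which is the usual motivation for hierarchical schemes.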

  2. Justification of Fuzzy Declustering Vector Quantization Modeling in Classification of Genotype-Image Phenotypes

    NASA Astrophysics Data System (ADS)

    Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo

    2010-01-01

    With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a technique that allows a large reduction in data storage and computational effort. One recent VQ technique that handles the poor estimation of vector centroids caused by biased, undersampled data is fuzzy declustering-based vector quantization (FDVQ). In this paper, we therefore propose a justification of an FDVQ-based hidden Markov model (HMM) by investigating its effectiveness and efficiency in the classification of genotype-image phenotypes. We evaluate and compare the recognition accuracy of the proposed FDVQ-based HMM (FDVQ-HMM) and the well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM). The experimental results show that the performances of FDVQ-HMM and LBG-HMM are similar. Finally, we justify the competitiveness of FDVQ-HMM in classifying a cellular phenotype image database using a hypothesis t-test. As a result, we validate that the FDVQ algorithm is a robust and efficient classification technique for RNAi genome-wide screening image data.
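
    For orientation, the LBG baseline mentioned above trains its codebook by repeatedly splitting centroids and re-optimizing with k-means; the following is a toy one-dimensional sketch (not the authors' code, and FDVQ additionally weights samples by fuzzy memberships, which is omitted here):

```python
# Classic LBG codebook training in miniature: start from the global centroid,
# split each centroid into a perturbed +/- pair, then re-optimize by k-means
# until the target codebook size is reached. Toy 1-D data, illustrative only.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def kmeans(data, codebook, iters=20):
    for _ in range(iters):
        buckets = [[] for _ in codebook]
        for x in data:
            i = min(range(len(codebook)), key=lambda j: dist2(x, codebook[j]))
            buckets[i].append(x)
        # Keep a centroid unchanged if its bucket came up empty.
        codebook = [mean(b) if b else c for b, c in zip(buckets, codebook)]
    return codebook

def lbg(data, size, eps=1e-3):
    codebook = [mean(data)]                       # start from global centroid
    while len(codebook) < size:
        codebook = [[v + s * eps for v in c] for c in codebook for s in (+1, -1)]
        codebook = kmeans(data, codebook)
    return codebook

data = [[0.0], [0.1], [0.9], [1.0]]
cb = sorted(v[0] for v in lbg(data, 2))
print(cb)  # two centroids near 0.05 and 0.95
```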

  3. A real-time phenotyping framework using machine learning for plant stress severity rating in soybean.

    PubMed

    Naik, Hsiang Sing; Zhang, Jiaoping; Lofquist, Alec; Assefa, Teshale; Sarkar, Soumik; Ackerman, David; Singh, Arti; Singh, Asheesh K; Ganapathysubramanian, Baskar

    2017-01-01

    Phenotyping is a critical component of plant research. Accurate and precise trait collection, when integrated with genetic tools, can greatly accelerate the rate of genetic gain in crop improvement. However, efficient and automatic phenotyping of traits across large populations is a challenge, which is further exacerbated by the necessity of sampling multiple environments and growing replicated trials. A promising approach is to leverage current advances in imaging technology, data analytics and machine learning to enable automated and fast phenotyping and subsequent decision support. In this context, the workflow for phenotyping (image capture → data storage and curation → trait extraction → machine learning/classification → models/apps for decision support) has to be carefully designed and efficiently executed to minimize resource usage and maximize utility. We illustrate such an end-to-end phenotyping workflow for the case of plant stress severity phenotyping in soybean, with a specific focus on the rapid and automatic assessment of iron deficiency chlorosis (IDC) severity on thousands of field plots. We showcase this analytics framework by extracting IDC features from a set of ~4500 unique canopies representing a diverse germplasm base with different levels of IDC, and subsequently training a variety of classification models to predict plant stress severity. We investigated 10 different classification approaches, the best being a hierarchical classifier with a mean per-class accuracy of ~96%; this classifier is then deployed as a smartphone app for rapid, real-time severity rating in the field. We construct a phenotypically meaningful 'population canopy graph', connecting the automatically extracted canopy trait features with plant stress severity rating.
We incorporated this image capture → image processing → classification workflow into a smartphone app that enables automated real-time evaluation of IDC scores using digital images of the canopy. We expect this high-throughput framework to help increase the rate of genetic gain by providing a robust extendable framework for other abiotic and biotic stresses. We further envision this workflow embedded onto a high throughput phenotyping ground vehicle and unmanned aerial system that will allow real-time, automated stress trait detection and quantification for plant research, breeding and stress scouting applications.
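
    The mean per-class accuracy reported above is the average of per-class recalls, a metric that is robust to class imbalance; a minimal sketch on hypothetical severity ratings (the paper's rating scale and data are not reproduced here):

```python
# Mean per-class accuracy: average the accuracy within each true class so
# that a dominant class cannot mask errors on rare classes.
from collections import defaultdict

def mean_per_class_accuracy(y_true, y_pred):
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += (t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# Toy severity ratings 1-5 (hypothetical, not the paper's data).
y_true = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
y_pred = [1, 1, 2, 3, 3, 3, 4, 4, 5, 4]
print(mean_per_class_accuracy(y_true, y_pred))  # 0.8
```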

  4. Super-Resolution of Plant Disease Images for the Acceleration of Image-based Phenotyping and Vigor Diagnosis in Agriculture.

    PubMed

    Yamamoto, Kyosuke; Togami, Takashi; Yamaguchi, Norio

    2017-11-06

    Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture, in cooperation with image processing technologies, for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis.
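
    Super-resolution results of this kind are commonly scored with peak signal-to-noise ratio (PSNR) against the high-resolution ground truth; a toy sketch comparing a naive nearest-neighbor upscaling baseline on an invented patch (illustrative values, not the paper's evaluation code):

```python
# PSNR between a ground-truth patch and a reconstruction: higher is better,
# infinite for a perfect match. Nearest-neighbor upscaling is the kind of
# conventional scaling baseline super-resolution methods are compared against.
import math

def psnr(ref, test, peak=255.0):
    # Mean squared error over two equally sized grayscale images.
    mse = sum((a - b) ** 2 for ra, rb in zip(ref, test) for a, b in zip(ra, rb))
    mse /= len(ref) * len(ref[0])
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def upscale_nearest(img, factor):
    # Replicate each pixel factor x factor times.
    return [[v for v in row for _ in range(factor)] for row in img for _ in range(factor)]

hi = [[10, 20], [30, 40]]
lo = [[25]]                      # 2x downsampled (mean of the 4 pixels)
rec = upscale_nearest(lo, 2)     # [[25, 25], [25, 25]]
print(round(psnr(hi, rec), 2))   # 27.16
```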

  5. Super-Resolution of Plant Disease Images for the Acceleration of Image-based Phenotyping and Vigor Diagnosis in Agriculture

    PubMed Central

    Togami, Takashi; Yamaguchi, Norio

    2017-01-01

    Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture—in cooperation with image processing technologies—for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis. PMID:29113104

  6. A multi-scale convolutional neural network for phenotyping high-content cellular images.

    PubMed

    Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian

    2017-07-01

    Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes, and show that these probability values correlate with chemical treatment concentrations. This finding further validates our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1, and supplementary data are available at Bioinformatics online.
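
    The "multi-scale" idea can be shown in miniature without any deep-learning framework (this is not the paper's M-CNN, just the underlying principle): the same image is pooled at several scales and the pooled responses are concatenated, so a single downstream classifier sees both fine and coarse structure.

```python
# Multi-scale feature extraction sketch: average-pool one image at several
# scales and concatenate everything into a single feature vector.

def avg_pool(img, k):
    # Non-overlapping k x k average pooling (image sides assumed divisible by k).
    rows, cols = len(img) // k, len(img[0]) // k
    return [[sum(img[r * k + i][c * k + j] for i in range(k) for j in range(k)) / (k * k)
             for c in range(cols)] for r in range(rows)]

def multi_scale_features(img, scales=(1, 2, 4)):
    feats = []
    for k in scales:
        feats.extend(v for row in avg_pool(img, k) for v in row)
    return feats

img = [[0, 0, 8, 8],
       [0, 0, 8, 8],
       [8, 8, 0, 0],
       [8, 8, 0, 0]]
f = multi_scale_features(img)
print(len(f))  # 16 + 4 + 1 = 21 values
```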

  7. Phenotypic characterization of glioblastoma identified through shape descriptors

    NASA Astrophysics Data System (ADS)

    Chaddad, Ahmad; Desrosiers, Christian; Toews, Matthew

    2016-03-01

    This paper proposes quantitatively describing the shape of glioblastoma (GBM) tissue phenotypes as a set of shape features derived from segmentations, for the purposes of discriminating between GBM phenotypes and monitoring tumor progression. GBM patients were identified from the Cancer Genome Atlas, and quantitative MR imaging data were obtained from the Cancer Imaging Archive. Three GBM tissue phenotypes are considered: necrosis, active tumor and edema/invasion. Volumetric tissue segmentations are obtained from registered T1-weighted (T1-WI) post-contrast and fluid-attenuated inversion recovery (FLAIR) MRI modalities. Shape features are computed from the respective tissue phenotype segmentations, and a Kruskal-Wallis test is employed to select features capable of classification at a significance level of p < 0.05. Several classifier models are employed to distinguish phenotypes, using leave-one-out cross-validation. Eight features were found statistically significant (p < 0.05) for classifying GBM phenotypes; orientation was uninformative. Quantitative evaluations show that the SVM yields the highest classification accuracy of 87.50%, with a sensitivity of 94.59% and a specificity of 92.77%. In summary, the shape descriptors proposed in this work show high performance in predicting GBM tissue phenotypes. They are thus closely linked to morphological characteristics of GBM phenotypes and could potentially be used in a computer-assisted labeling system.
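
    The Kruskal-Wallis test used for feature selection above compares a feature's rank distribution across the three phenotype groups; a tie-free toy sketch of the H statistic follows (real data needs a tie correction, and the p-value comes from a chi-square approximation, both omitted here):

```python
# Kruskal-Wallis H statistic: rank the pooled samples, sum ranks per group,
# then H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1). Assumes no tied values.

def kruskal_h(*groups):
    pooled = sorted((v, g) for g, grp in enumerate(groups) for v in grp)
    rank_sums = [0.0] * len(groups)
    for rank, (_, g) in enumerate(pooled, start=1):
        rank_sums[g] += rank
    n = len(pooled)
    h = 12.0 / (n * (n + 1)) * sum(r * r / len(grp) for r, grp in zip(rank_sums, groups))
    return h - 3 * (n + 1)

# Three hypothetical shape-feature samples (one per tissue phenotype).
print(round(kruskal_h([1, 2, 3], [4, 5, 6], [7, 8, 9]), 3))  # 7.2
```

    A large H means the groups' ranks separate cleanly, so the feature is worth keeping for classification.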

  8. GBM heterogeneity characterization by radiomic analysis of phenotype anatomical planes

    NASA Astrophysics Data System (ADS)

    Chaddad, Ahmad; Desrosiers, Christian; Toews, Matthew

    2016-03-01

    Glioblastoma multiforme (GBM) is the most common malignant primary tumor of the central nervous system, characterized among other traits by rapid metastasis. Three tissue phenotypes closely associated with GBMs, namely, necrosis (N), contrast enhancement (CE), and edema/invasion (E), exhibit characteristic patterns of texture heterogeneity in magnetic resonance images (MRI). In this study, we propose a novel model to characterize GBM tissue phenotypes using gray level co-occurrence matrices (GLCM) in three anatomical planes. The GLCM encodes local image patches in terms of informative, orientation-invariant texture descriptors, which are used here to sub-classify GBM tissue phenotypes. Experiments demonstrate the model on MRI data of 41 GBM patients, obtained from the cancer genome atlas (TCGA). Intensity-based automatic image registration is applied to align corresponding pairs of fixed T1-weighted (T1-WI) post-contrast and fluid attenuated inversion recovery (FLAIR) images. GBM tissue regions are then segmented using the 3D Slicer tool. Texture features are computed from 12 quantifier functions operating on GLCM descriptors, which are generated from MRI intensities within segmented GBM tissue regions. Various classifier models are used to evaluate the effectiveness of texture features for discriminating between GBM phenotypes. Results based on T1-WI scans showed a phenotype classification accuracy of over 88.14%, a sensitivity of 85.37% and a specificity of 96.1%, using the linear discriminant analysis (LDA) classifier. This model has the potential to provide important characteristics of tumors, which can be used for the sub-classification of GBM phenotypes.
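
    A GLCM simply counts how often pairs of gray levels co-occur at a fixed offset; quantifier functions such as contrast and energy then condense the matrix into scalar texture features. A tiny sketch for one horizontal offset on a quantized image (illustrative only; the paper uses 12 quantifiers over three anatomical planes):

```python
# Gray-level co-occurrence matrix for one (dr, dc) offset, normalized to
# probabilities, plus two standard quantifier functions.

def glcm(img, dr, dc, levels):
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r][c]][img[r2][c2]] += 1
    total = sum(map(sum, m))
    return [[v / total for v in row] for row in m]

def contrast(p):
    # Weighted by squared gray-level difference: high for abrupt transitions.
    return sum(p[i][j] * (i - j) ** 2 for i in range(len(p)) for j in range(len(p)))

def energy(p):
    # Sum of squared probabilities: high for uniform, orderly texture.
    return sum(v * v for row in p for v in row)

img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
p = glcm(img, 0, 1, levels=3)   # horizontal neighbor offset
print(round(contrast(p), 3), round(energy(p), 3))  # 0.333 0.278
```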

  9. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions

    PubMed Central

    Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner

    2016-01-01

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle comprising a camera system, a real-time-kinematic GPS system for positioning, and hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images are used within multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered using the classification results, yielding the quantity of grape bunches and berries and the berry diameter. PMID:27983669

  10. Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions.

    PubMed

    Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner

    2016-12-15

    In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle comprising a camera system, a real-time-kinematic GPS system for positioning, and hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images are used within multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered using the classification results, yielding the quantity of grape bunches and berries and the berry diameter.
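
    The second step, labeling points as grape bunch versus canopy, can be caricatured with a simple per-point color rule on a toy colored point cloud. The rule and thresholds below are invented for illustration; the paper trains a proper classifier on richer geometric and color features:

```python
# Toy point-cloud labeling: each point is (xyz, rgb); a color-dominance rule
# assigns it to "bunch" (blue-dominant dark grapes) or "canopy" (green leaves).

def label_point(rgb):
    r, g, b = rgb
    return "bunch" if b > g and b > r else "canopy"

def summarize(points):
    counts = {"bunch": 0, "canopy": 0}
    for xyz, rgb in points:
        counts[label_point(rgb)] += 1
    return counts

cloud = [
    ((0.1, 0.2, 1.1), (40, 50, 120)),   # dark blue -> bunch
    ((0.2, 0.2, 1.0), (35, 45, 110)),
    ((0.0, 0.5, 1.8), (60, 140, 50)),   # green -> canopy
    ((0.1, 0.6, 1.9), (70, 150, 60)),
]
print(summarize(cloud))  # {'bunch': 2, 'canopy': 2}
```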

  11. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  12. Machine learning and computer vision approaches for phenotypic profiling

    PubMed Central

    Morris, Quaid

    2017-01-01

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. PMID:27940887

  13. Automated phenotype pattern recognition of zebrafish for high-throughput screening.

    PubMed

    Schutera, Mark; Dickmeis, Thomas; Mione, Marina; Peravali, Ravindra; Marcato, Daniel; Reischl, Markus; Mikut, Ralf; Pylatiuk, Christian

    2016-07-03

    Over the last few years, the zebrafish (Danio rerio) has become a key model organism in genetic and chemical screenings. A growing number of experiments and an expanding interest in zebrafish research make it increasingly essential to automate the distribution of embryos and larvae into standard microtiter plates or other sample holders for screening, often according to phenotypical features. Until now, such sorting processes have been carried out by manual handling of the larvae and manual feature detection. Here, a prototype platform for image acquisition, together with classification software, is presented. Zebrafish embryos and larvae, and their features such as pigmentation, are detected automatically from the image. Zebrafish of 4 different phenotypes can be classified through pattern recognition at 72 h post fertilization (hpf), allowing the software to assign an embryo to one of 2 distinct phenotypic classes: wild-type versus variant. The zebrafish phenotypes are classified with an accuracy of 79-99% without any user interaction. A description of the prototype platform and of the algorithms for image processing and pattern recognition is presented.

  14. Computer vision and machine learning for robust phenotyping in genome-wide studies

    PubMed Central

    Zhang, Jiaoping; Naik, Hsiang Sing; Assefa, Teshale; Sarkar, Soumik; Reddy, R. V. Chowda; Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh K.

    2017-01-01

    Traditional evaluation of crop biotic and abiotic stresses is time-consuming and labor-intensive, limiting the ability to dissect the genetic basis of quantitative traits. A machine learning (ML)-enabled image-phenotyping pipeline for genetic studies of the abiotic stress iron deficiency chlorosis (IDC) in soybean is reported. IDC classification and severity for an association panel of 461 diverse plant-introduction accessions was evaluated using an end-to-end phenotyping workflow. The workflow consisted of a multi-stage procedure including: (1) optimized protocols for consistent image capture across plant canopies, (2) canopy identification and registration from cluttered backgrounds, (3) extraction of domain-expert-informed features from the processed images to accurately represent IDC expression, and (4) supervised ML-based classifiers that linked the automatically extracted features with expert-rating-equivalent IDC scores. ML-generated phenotypic data were subsequently utilized for the genome-wide association study and genomic prediction. The results illustrate the reliability and advantage of the ML-enabled image-phenotyping pipeline by identifying a previously reported locus and a novel locus harboring a gene homolog involved in iron acquisition. This study demonstrates a promising path for integrating the phenotyping pipeline into genomic prediction, and provides a systematic framework enabling robust and quicker phenotyping through ground-based systems. PMID:28272456

  15. Etiologic Ischemic Stroke Phenotypes in the NINDS Stroke Genetics Network

    PubMed Central

    Ay, Hakan; Arsava, Ethem Murat; Andsberg, Gunnar; Benner, Thomas; Brown, Robert D.; Chapman, Sherita N.; Cole, John W.; Delavaran, Hossein; Dichgans, Martin; Engström, Gunnar; Giralt-Steinhauer, Eva; Grewal, Raji P.; Gwinn, Katrina; Jern, Christina; Jimenez-Conde, Jordi; Jood, Katarina; Katsnelson, Michael; Kissela, Brett; Kittner, Steven J.; Kleindorfer, Dawn O.; Labovitz, Daniel L.; Lanfranconi, Silvia; Lee, Jin-Moo; Lehm, Manuel; Lemmens, Robin; Levi, Chris; Li, Linxin; Lindgren, Arne; Markus, Hugh S.; McArdle, Patrick F.; Melander, Olle; Norrving, Bo; Peddareddygari, Leema Reddy; Pedersén, Annie; Pera, Joanna; Rannikmäe, Kristiina; Rexrode, Kathryn M.; Rhodes, David; Rich, Stephen S.; Roquer, Jaume; Rosand, Jonathan; Rothwell, Peter M.; Rundek, Tatjana; Sacco, Ralph L.; Schmidt, Reinhold; Schürks, Markus; Seiler, Stephan; Sharma, Pankaj; Slowik, Agnieszka; Sudlow, Cathie; Thijs, Vincent; Woodfield, Rebecca; Worrall, Bradford B.; Meschia, James F.

    2014-01-01

    Background and Purpose: The NINDS Stroke Genetics Network (SiGN) is an international consortium of ischemic stroke studies that aims to generate high-quality phenotype data to identify the genetic basis of etiologic stroke subtypes. This analysis characterizes the etiopathogenetic basis of ischemic stroke and the reliability of stroke classification in the consortium. Methods: Fifty-two trained and certified adjudicators determined both phenotypic (abnormal test findings categorized in major etiologic groups without weighting towards the most likely cause) and causative ischemic stroke subtypes in 16,954 subjects with imaging-confirmed ischemic stroke from 12 US studies and 11 studies from 8 European countries, using the web-based Causative Classification of Stroke System. Classification reliability was assessed with blinded re-adjudication of 1509 randomly selected cases. Results: The distribution of etiologic categories varied by study, age, sex, and race (p<0.001 for each). Overall, only 40% to 54% of cases with a given major ischemic stroke etiology (phenotypic subtype) were classified into the same final causative category with high confidence. There was good agreement for both causative (kappa 0.72, 95%CI:0.69-0.75) and phenotypic classifications (kappa 0.73, 95%CI:0.70-0.75). Conclusions: This study demonstrates that etiologic subtypes can be determined with good reliability in studies that include investigators with different expertise and backgrounds, institutions with different stroke evaluation protocols and geographic locations, and patient populations with different epidemiological characteristics. The discordance between phenotypic and causative stroke subtypes highlights the fact that the presence of an abnormality in a stroke patient does not necessarily mean that it is the cause of the stroke. PMID:25378430
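
    The kappa values reported above are Cohen's kappa, which discounts the agreement two raters would reach by chance; a minimal sketch on hypothetical subtype labels from two adjudicators:

```python
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance
# agreement), where chance agreement comes from each rater's label frequencies.

def cohens_kappa(a, b):
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n                      # observed
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)   # by chance
    return (po - pe) / (1 - pe)

# Hypothetical adjudications for four subjects (not SiGN data).
rater1 = ["cardioembolic", "large-artery", "cardioembolic", "small-vessel"]
rater2 = ["cardioembolic", "large-artery", "large-artery", "small-vessel"]
print(round(cohens_kappa(rater1, rater2), 3))  # 0.636
```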

  16. Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning

    PubMed Central

    Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Nie, Feiping; Munsell, Brent

    2018-01-01

    Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer’s disease and Parkinson’s disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets. PMID:28551556
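
    The conventional GTL step the paper builds on, propagating known labels over a fixed similarity graph, can be sketched as a simple fixed-point iteration (this is the standard label-propagation scheme, not the paper's progressive variant, and the graph here is a toy chain):

```python
# Label propagation on a fixed graph: iterate F <- a*S*F + (1-a)*Y, where S is
# the row-normalized affinity matrix, Y holds the known labels (one-hot rows),
# and unlabeled rows of Y are zero. F converges to soft label scores.

def propagate(W, Y, a=0.5, iters=50):
    n = len(W)
    S = [[w / (sum(row) or 1) for w in row] for row in W]  # row-normalize W
    F = [row[:] for row in Y]
    for _ in range(iters):
        F = [[a * sum(S[i][k] * F[k][j] for k in range(n)) + (1 - a) * Y[i][j]
              for j in range(len(Y[0]))] for i in range(n)]
    return F

# 4 subjects in a chain graph 0-1-2-3; subjects 0 and 3 are labeled (classes 0
# and 1), subjects 1 and 2 are test subjects.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
Y = [[1, 0], [0, 0], [0, 0], [0, 1]]
F = propagate(W, Y)
pred = [max(range(2), key=lambda j: row[j]) for row in F]
print(pred)  # [0, 0, 1, 1]: each test subject follows its labeled neighbor
```

    The paper's criticism is precisely that W above is fixed and built purely from imaging features; pGTL instead refines it iteratively against the label domain.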

  17. Phenotype classification of single cells using SRS microscopy, RNA sequencing, and microfluidics (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Streets, Aaron M.; Cao, Chen; Zhang, Xiannian; Huang, Yanyi

    2016-03-01

    Phenotype classification of single cells reveals biological variation that is masked in ensemble measurement. This heterogeneity is found in gene and protein expression as well as in cell morphology. Many techniques are available to probe phenotypic heterogeneity at the single cell level, for example quantitative imaging and single-cell RNA sequencing, but it is difficult to perform multiple assays on the same single cell. In order to directly track correlation between morphology and gene expression at the single cell level, we developed a microfluidic platform for quantitative coherent Raman imaging and immediate RNA sequencing (RNA-Seq) of single cells. With this device we actively sort and trap cells for analysis with stimulated Raman scattering microscopy (SRS). The cells are then processed in parallel pipelines for lysis, and preparation of cDNA for high-throughput transcriptome sequencing. SRS microscopy offers three-dimensional imaging with chemical specificity for quantitative analysis of protein and lipid distribution in single cells. Meanwhile, the microfluidic platform facilitates single-cell manipulation, minimizes contamination, and furthermore, provides improved RNA-Seq detection sensitivity and measurement precision, which is necessary for differentiating biological variability from technical noise. By combining coherent Raman microscopy with RNA sequencing, we can better understand the relationship between cellular morphology and gene expression at the single-cell level.

  18. Automated cell analysis tool for a genome-wide RNAi screen with support vector machine based supervised learning

    NASA Astrophysics Data System (ADS)

    Remmele, Steffen; Ritzerfeld, Julia; Nickel, Walter; Hesser, Jürgen

    2011-03-01

    RNAi-based high-throughput microscopy screens have become an important tool in the biological sciences for deciphering the mostly unknown biological functions of human genes. However, manual analysis is impossible for such screens, since they often comprise hundreds of thousands of image data sets. Reliable automated tools are thus required to analyse fluorescence microscopy image data sets usually containing two or more reaction channels. The image analysis tool presented here is designed to analyse an RNAi screen investigating the intracellular trafficking and targeting of acylated Src kinases. In this specific screen, a data set consists of three reaction channels and the investigated cells can appear in different phenotypes. The main issue of the image processing task is an automatic cell segmentation that has to be robust and accurate for all phenotypes, followed by phenotype classification. The cell segmentation is done in two steps: the cell nuclei are segmented first, and a classifier-enhanced region growing then segments the cells on the basis of the nuclei. The classification of the cells is realized by a support vector machine, which has to be trained manually using supervised learning. Furthermore, the tool is brightness invariant, allowing different staining quality, and it provides a quality control that copes with typical defects during preparation and acquisition. A first version of the tool has already been successfully applied to an RNAi screen containing three hundred thousand image data sets, and the SVM-extended version is designed for additional screens.
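
    The two-step segmentation can be caricatured with plain seeded region growing: detected nuclei act as seeds, and cell regions grow outward over foreground pixels. This is a bare BFS sketch on a toy intensity image; the tool described above additionally puts a trained classifier into the growth decision:

```python
# Seeded region growing: starting from one seed per nucleus, flood-fill
# 4-connected pixels whose intensity clears a foreground threshold.
from collections import deque

def region_grow(img, seeds, thresh):
    rows, cols = len(img), len(img[0])
    label = [[0] * cols for _ in range(rows)]
    for lab, seed in enumerate(seeds, start=1):
        q = deque([seed])
        while q:
            r, c = q.popleft()
            if 0 <= r < rows and 0 <= c < cols and label[r][c] == 0 and img[r][c] >= thresh:
                label[r][c] = lab
                q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return label

img = [[9, 9, 0, 7, 7],
       [9, 9, 0, 7, 7],
       [0, 0, 0, 0, 0]]
labels = region_grow(img, seeds=[(0, 0), (0, 3)], thresh=5)
print(labels[0])  # [1, 1, 0, 2, 2]: two cells separated by background
```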

  19. Co-occurrence of Local Anisotropic Gradient Orientations (CoLlAGe): A new radiomics descriptor.

    PubMed

    Prasanna, Prateek; Tiwari, Pallavi; Madabhushi, Anant

    2016-11-22

    In this paper, we introduce a new radiomic descriptor, Co-occurrence of Local Anisotropic Gradient Orientations (CoLlAGe), for capturing subtle differences between benign and pathologic phenotypes that may be visually indistinguishable on routine anatomic imaging. CoLlAGe seeks to capture and exploit local anisotropic differences in voxel-level gradient orientations to distinguish similar-appearing phenotypes. CoLlAGe involves assigning every image voxel an entropy value associated with the co-occurrence matrix of gradient orientations computed around that voxel. The hypothesis behind CoLlAGe is that benign and pathologic phenotypes, even though they may appear similar on anatomic imaging, will differ in their local entropy patterns, in turn reflecting subtle local differences in tissue microarchitecture. We demonstrate CoLlAGe's utility in three clinically challenging classification problems: distinguishing (1) radiation necrosis, a benign yet confounding effect of radiation treatment, from recurrent tumors on T1-w MRI in 42 brain tumor patients, (2) different molecular sub-types of breast cancer on DCE-MRI in 65 studies, and (3) non-small cell lung cancer (adenocarcinomas) from benign fungal infection (granulomas) on 120 non-contrast CT studies. For each of these classification problems, CoLlAGe in conjunction with a random forest classifier outperformed state-of-the-art radiomic descriptors (Haralick, Gabor, Histogram of Gradient Orientations).
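
    A toy version of the descriptor's core idea can be sketched as follows: gradient orientations are quantized into bins, co-occurrences of adjacent orientations are counted, and the Shannon entropy of the resulting matrix is returned. The real CoLlAGe assigns an entropy value to every voxel from a window around it; this simplified sketch returns a single entropy for a whole 2-D patch, with invented test patches.

```python
import math

def orientation_entropy(patch, n_bins=8):
    """Quantise gradient orientations (central differences) in a patch,
    count co-occurrences of horizontally adjacent orientation bins, and
    return the Shannon entropy of that co-occurrence matrix."""
    h, w = len(patch), len(patch[0])
    ori = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = patch[r][c + 1] - patch[r][c - 1]
            gy = patch[r + 1][c] - patch[r - 1][c]
            theta = math.atan2(gy, gx) % (2 * math.pi)
            ori[r][c] = min(int(theta / (2 * math.pi) * n_bins), n_bins - 1)
    counts = {}
    for r in range(1, h - 1):
        for c in range(1, w - 2):
            pair = (ori[r][c], ori[r][c + 1])
            counts[pair] = counts.get(pair, 0) + 1
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A linear ramp has one orientation everywhere -> entropy 0;
# a bump with gradients in many directions -> entropy > 0.
ramp = [[c for c in range(5)] for _ in range(5)]
bump = [[0, 0, 0, 0, 0],
        [0, 1, 2, 1, 0],
        [0, 2, 4, 2, 0],
        [0, 1, 2, 1, 0],
        [0, 0, 0, 0, 0]]
```

    Low entropy marks coherently oriented tissue; disordered microarchitecture yields higher values, which is the contrast the descriptor exploits.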

  20. Automatic classification of cardioembolic and arteriosclerotic ischemic strokes from apparent diffusion coefficient datasets using texture analysis and deep learning

    NASA Astrophysics Data System (ADS)

    Villafruela, Javier; Crites, Sebastian; Cheng, Bastian; Knaack, Christian; Thomalla, Götz; Menon, Bijoy K.; Forkert, Nils D.

    2017-03-01

    Stroke is a leading cause of death and disability in the western hemisphere. Acute ischemic strokes can be broadly classified by underlying cause into atherosclerotic strokes, cardioembolic strokes, small vessel disease, and strokes with other causes. The ability to determine the exact origin of an acute ischemic stroke is highly relevant for optimal treatment decisions and the prevention of recurrent events. However, the differentiation of atherosclerotic and cardioembolic phenotypes can be especially challenging due to their similar appearance and symptoms. The aim of this study was to develop and evaluate the feasibility of an image-based machine learning approach for discriminating between arteriosclerotic and cardioembolic acute ischemic strokes using 56 apparent diffusion coefficient (ADC) datasets from acute stroke patients. For this purpose, acute infarct lesions were semi-automatically segmented and 30,981 geometric and texture image features were extracted for each stroke volume. To improve performance and accuracy, a categorical Pearson's χ2 test was used to select the most informative features while removing redundant attributes. As a result, only 289 features were finally included for training of a deep multilayer feed-forward neural network without bootstrapping. The proposed method was evaluated using a leave-one-out cross-validation scheme. It achieved an average area under the receiver operating characteristic curve of 0.93 and a classification accuracy of 94.64%. These first results suggest that the proposed image-based classification framework can support neurologists in clinical routine in differentiating between atherosclerotic and cardioembolic phenotypes.
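
    The χ² feature-ranking step can be sketched for binary features against a binary stroke label using the 2×2 contingency-table form of Pearson's test; the feature and label values below are invented for illustration.

```python
def chi2_score(feature, labels):
    """Pearson chi-squared statistic for one binary feature against a
    binary class label (2x2 contingency table, no continuity correction)."""
    obs = [[0, 0], [0, 0]]
    for f, y in zip(feature, labels):
        obs[f][y] += 1
    n = len(labels)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            exp = sum(obs[i]) * (obs[0][j] + obs[1][j]) / n
            if exp > 0:
                chi2 += (obs[i][j] - exp) ** 2 / exp
    return chi2

def select_features(feature_matrix, labels, k):
    """Keep the indices of the k highest-scoring feature columns."""
    scores = [chi2_score([row[j] for row in feature_matrix], labels)
              for j in range(len(feature_matrix[0]))]
    return sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)[:k]

# Toy data: one feature perfectly tracks the label, one is noise.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
informative = [0, 0, 0, 0, 1, 1, 1, 1]
noise = [0, 1, 0, 1, 0, 1, 0, 1]
features = [[n, i] for n, i in zip(noise, informative)]  # rows = patients
best = select_features(features, labels, k=1)
```

    Ranking by this score and keeping the top k is how 30,981 candidate features can be reduced to a few hundred before network training; the real study's features are continuous and would first be discretized.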

  1. Time series modeling of live-cell shape dynamics for image-based phenotypic profiling.

    PubMed

    Gordonov, Simon; Hwang, Mun Kyung; Wells, Alan; Gertler, Frank B; Lauffenburger, Douglas A; Bathe, Mark

    2016-01-01

    Live-cell imaging can be used to capture spatio-temporal aspects of cellular responses that are not accessible to fixed-cell imaging. As the use of live-cell imaging continues to increase, new computational procedures are needed to characterize and classify the temporal dynamics of individual cells. For this purpose, here we present the general experimental-computational framework SAPHIRE (Stochastic Annotation of Phenotypic Individual-cell Responses) to characterize phenotypic cellular responses from time series imaging datasets. Hidden Markov modeling is used to infer and annotate morphological state and state-switching properties from image-derived cell shape measurements. Time series modeling is performed on each cell individually, making the approach broadly useful for analyzing asynchronous cell populations. Two-color fluorescent cells simultaneously expressing actin and nuclear reporters enabled us to profile temporal changes in cell shape following pharmacological inhibition of cytoskeleton-regulatory signaling pathways. Results are compared with existing approaches conventionally applied to fixed-cell imaging datasets, and indicate that time series modeling captures heterogeneous dynamic cellular responses that can improve drug classification and offer additional important insight into mechanisms of drug action. The software is available at http://saphire-hcs.org.
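
    The hidden Markov annotation step can be illustrated with a discrete-emission toy model; the state names, transition, and emission probabilities below are invented, and the real framework fits HMMs to continuous image-derived shape measurements rather than two symbols.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence (log-space Viterbi) for a
    discrete-emission HMM; a stand-in for annotating morphological
    states from a per-cell shape time series."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Hypothetical two-state model: 'spread' vs 'round' cells emitting a
# discretised shape score ('lo' or 'hi') per frame.
states = ('spread', 'round')
start = {'spread': 0.5, 'round': 0.5}
trans = {'spread': {'spread': 0.9, 'round': 0.1},
         'round': {'spread': 0.1, 'round': 0.9}}
emit = {'spread': {'lo': 0.3, 'hi': 0.7},
        'round': {'lo': 0.8, 'hi': 0.2}}
seq = viterbi(['hi', 'lo', 'hi'], states, start, trans, emit)
# sticky transitions smooth the lone 'lo' frame into the 'spread' run
```

    The sticky self-transitions are what suppress frame-to-frame classification noise, which is the benefit the paper attributes to time series modeling over per-frame classification.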

  2. Changes in Crohn's disease phenotype over time in the Chinese population: validation of the Montreal classification system.

    PubMed

    Chow, Dorothy K L; Leong, Rupert W L; Lai, Larry H; Wong, Grace L H; Leung, Wai-Keung; Chan, Francis K L; Sung, Joseph J Y

    2008-04-01

    Phenotypic evolution of Crohn's disease occurs in whites but has never been described in other populations. The Montreal classification may describe phenotypes more precisely. The aim of this study was to validate the Montreal classification through a longitudinal sensitivity analysis in detecting phenotypic variation compared to the Vienna classification. This was a retrospective longitudinal study of consecutive Chinese Crohn's disease patients. All cases were classified by the Montreal classification and the Vienna classification for behavior and location. The evolution of these characteristics and the need for surgery were evaluated. A total of 109 patients were recruited (median follow-up: 4 years, range: 6 months-18 years). Crohn's disease behavior changed 3 years after diagnosis (P = 0.025), with an increase in stricturing and penetrating phenotypes, as determined by the Montreal classification, but was only detected by the Vienna classification after 5 years (P = 0.015). Disease location remained stable on follow-up in both classifications. Thirty-four patients (31%) underwent major surgery during the follow-up period with the stricturing [P = 0.002; hazard ratio (HR): 3.3; 95% CI: 1.5-7.0] and penetrating (P = 0.03; HR: 5.8; 95% CI: 1.2-28.2) phenotypes according to the Montreal classification associated with the need for major surgery. In contrast, colonic disease was protective against a major operation (P = 0.02; HR: 0.3; 95% CI: 0.08-0.8). This is the first study demonstrating phenotypic evolution of Crohn's disease in a nonwhite population. The Montreal classification is more sensitive to behavior phenotypic changes than is the Vienna classification after excluding perianal disease from the penetrating disease category and was useful in predicting course and the need for surgery.

  3. Machine Learning methods for Quantitative Radiomic Biomarkers.

    PubMed

    Parmar, Chintan; Grossmann, Patrick; Bussink, Johan; Lambin, Philippe; Aerts, Hugo J W L

    2015-08-17

    Radiomics extracts and mines a large number of medical imaging features that quantify tumor phenotypic characteristics. Highly accurate and reliable machine-learning approaches can drive the success of radiomic applications in clinical care. In this radiomic study, fourteen feature selection methods and twelve classification methods were examined in terms of their performance and stability for predicting overall survival. A total of 440 radiomic features were extracted from pre-treatment computed tomography (CT) images of 464 lung cancer patients. To ensure unbiased evaluation of the different machine-learning methods, publicly available implementations along with reported parameter configurations were used. Furthermore, we used two independent radiomic cohorts for training (n = 310 patients) and validation (n = 154 patients). We identified that the Wilcoxon-test-based feature selection method WLCX (stability = 0.84 ± 0.05, AUC = 0.65 ± 0.02) and the random forest classification method RF (RSD = 3.52%, AUC = 0.66 ± 0.03) had the highest prognostic performance with high stability against data perturbation. Our variability analysis indicated that the choice of classification method is the most dominant source of performance variation (34.21% of total variance). Identification of optimal machine-learning methods for radiomic applications is a crucial step towards stable and clinically relevant radiomic biomarkers, providing a non-invasive way of quantifying and monitoring tumor-phenotypic characteristics in clinical practice.
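
    The Wilcoxon rank-sum (Mann-Whitney) statistic underlying a WLCX-style selector can be sketched as follows; in practice each of the 440 features would be ranked by this statistic computed between the two survival groups, with the most separating features retained.

```python
def rank_sum_u(group_a, group_b):
    """Mann-Whitney U statistic (equivalent to the Wilcoxon rank-sum
    test) for one feature's values split by outcome group; the further
    U is from n_a*n_b/2, the stronger the separation."""
    pooled = sorted(group_a + group_b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j + 1) / 2  # midrank for ties
        i = j
    r_a = sum(ranks[x] for x in group_a)
    return r_a - len(group_a) * (len(group_a) + 1) / 2
```

    U equal to 0 or to n_a·n_b indicates complete separation of the groups; U near n_a·n_b/2 indicates none.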

  4. Phenotype analysis of early risk factors from electronic medical records improves image-derived diagnostic classifiers for optic nerve pathology

    NASA Astrophysics Data System (ADS)

    Chaganti, Shikha; Nabar, Kunal P.; Nelson, Katrina M.; Mawn, Louise A.; Landman, Bennett A.

    2017-03-01

    We examine imaging and electronic medical records (EMR) of 588 subjects over five major disease groups that affect optic nerve function. An objective evaluation of the role of imaging and EMR data in diagnosis of these conditions would improve understanding of these diseases and help in early intervention. We developed an automated image processing pipeline that identifies the orbital structures within the human eyes from computed tomography (CT) scans, calculates structural size, and performs volume measurements. We customized the EMR-based phenome-wide association study (PheWAS) to derive diagnostic EMR phenotypes that occur at least two years prior to the onset of the conditions of interest from a separate cohort of 28,411 ophthalmology patients. We used random forest classifiers to evaluate the predictive power of image-derived markers, EMR phenotypes, and clinical visual assessments in identifying disease cohorts from a control group of 763 patients without optic nerve disease. Image-derived markers showed more predictive power than clinical visual assessments or EMR phenotypes. However, the addition of EMR phenotypes to the imaging markers improves the classification accuracy against controls: the AUC improved from 0.67 to 0.88 for glaucoma, 0.73 to 0.78 for intrinsic optic nerve disease, 0.72 to 0.76 for optic nerve edema, 0.72 to 0.77 for orbital inflammation, and 0.81 to 0.85 for thyroid eye disease. This study illustrates the importance of diagnostic context for interpretation of image-derived markers and the proposed PheWAS technique provides a flexible approach for learning salient features of patient history and incorporating these data into traditional machine learning analyses.
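
    The AUC values reported above can be computed directly from classifier scores via the Mann-Whitney formulation: the probability that a randomly chosen diseased subject outscores a randomly chosen control. A minimal sketch, with invented scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the Mann-Whitney probability that a
    random positive outscores a random negative (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation of disease cohort from controls.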

  5. CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging.

    PubMed

    Held, Michael; Schmitz, Michael H A; Fischer, Bernd; Walter, Thomas; Neumann, Beate; Olma, Michael H; Peter, Matthias; Ellenberg, Jan; Gerlich, Daniel W

    2010-09-01

    Fluorescence time-lapse imaging has become a powerful tool to investigate complex dynamic processes such as cell division or intracellular trafficking. Automated microscopes generate time-resolved imaging data at high throughput, yet tools for quantification of large-scale movie data are largely missing. Here we present CellCognition, a computational framework to annotate complex cellular dynamics. We developed a machine-learning method that combines state-of-the-art classification with hidden Markov modeling for annotation of the progression through morphologically distinct biological states. Incorporation of time information into the annotation scheme was essential to suppress classification noise at state transitions and confusion between different functional states with similar morphology. We demonstrate generic applicability in different assays and perturbation conditions, including a candidate-based RNA interference screen for regulators of mitotic exit in human cells. CellCognition is published as open source software, enabling live-cell imaging-based screening with assays that directly score cellular dynamics.

  6. Noninvasive Detection and Imaging of Molecular Markers in Live Cardiomyocytes Derived from Human Embryonic Stem Cells

    PubMed Central

    Pascut, Flavius C.; Goh, Huey T.; Welch, Nathan; Buttery, Lee D.; Denning, Chris; Notingher, Ioan

    2011-01-01

    Raman microspectroscopy (RMS) was used to detect and image molecular markers specific to cardiomyocytes (CMs) derived from human embryonic stem cells (hESCs). This technique is noninvasive and thus can be used to discriminate individual live CMs within highly heterogeneous cell populations. Principal component analysis (PCA) of the Raman spectra was used to build a classification model for identification of individual CMs. Retrospective immunostaining imaging was used as the gold standard for phenotypic identification of each cell. We were able to discriminate CMs from other phenotypes with >97% specificity and >96% sensitivity, as calculated with the use of cross-validation algorithms (target 100% specificity). A comparison between Raman spectral images corresponding to selected Raman bands identified by the PCA model and immunostaining of the same cells allowed assignment of the Raman spectral markers. We conclude that glycogen is responsible for the discrimination of CMs, whereas myofibril proteins have a lesser contribution. This study demonstrates the potential of RMS for allowing the noninvasive phenotypic identification of hESC progeny. With further development, such label-free optical techniques may enable the separation of high-purity cell populations with mature phenotypes, and provide repeated measurements to monitor time-dependent molecular changes in live hESCs during differentiation in vitro. PMID:21190678
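
    The cross-validated sensitivity/specificity estimates can be illustrated with a leave-one-out scheme. A nearest-centroid rule stands in here for the paper's PCA-based classification model, and the two-dimensional "spectra" are invented.

```python
def nearest_centroid_loocv(samples, labels, positive):
    """Leave-one-out cross-validation of a nearest-centroid classifier;
    returns (sensitivity, specificity) for the `positive` class."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def centroid(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]

    tp = tn = fp = fn = 0
    for i in range(len(samples)):
        train = [(s, l) for j, (s, l) in enumerate(zip(samples, labels)) if j != i]
        classes = sorted({l for _, l in train})
        cents = {cl: centroid([s for s, l in train if l == cl]) for cl in classes}
        pred = min(classes, key=lambda cl: dist2(samples[i], cents[cl]))
        if labels[i] == positive:
            tp += pred == positive
            fn += pred != positive
        else:
            tn += pred != positive
            fp += pred == positive
    return tp / (tp + fn), tn / (tn + fp)

# Toy 2-D feature vectors for cardiomyocytes vs other phenotypes.
cm = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]]
other = [[5.0, 5.0], [5.1, 5.0], [5.0, 5.1], [5.1, 5.1]]
sens, spec = nearest_centroid_loocv(cm + other, ['CM'] * 4 + ['other'] * 4,
                                    positive='CM')
```

    Holding out each cell in turn, exactly as in the paper's cross-validation, avoids testing the classifier on cells it was trained on.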

  7. Characterization and classification of zebrafish brain morphology mutants

    PubMed Central

    Lowery, Laura Anne; De Rienzo, Gianluca; Gutzman, Jennifer H.; Sive, Hazel

    2010-01-01

    The mechanisms by which the vertebrate brain achieves its three-dimensional structure are clearly complex, requiring the functions of many genes. Using the zebrafish as a model, we have begun to define genes required for brain morphogenesis, including brain ventricle formation, by studying 16 mutants previously identified as having embryonic brain morphology defects. We report the phenotypic characterization of these mutants at several time-points, using brain ventricle dye injection, imaging, and immunohistochemistry with neuronal markers. Most of these mutants display early phenotypes, affecting initial brain shaping, while others show later phenotypes, affecting brain ventricle expansion. In the early phenotype group, we further define four phenotypic classes and corresponding functions required for brain morphogenesis. Although we did not use known genotypes for this classification, basing it solely on phenotypes, many mutants with defects in functionally related genes clustered in a single class. In particular, class 1 mutants show midline separation defects, corresponding to epithelial junction defects; class 2 mutants show reduced brain ventricle size; class 3 mutants show midbrain-hindbrain abnormalities, corresponding to basement membrane defects; and class 4 mutants show absence of ventricle lumen inflation, corresponding to defective ion pumping. Later brain ventricle expansion requires the extracellular matrix, cardiovascular circulation, and transcription/splicing-dependent events. We suggest that these mutants define processes likely to be used during brain morphogenesis throughout the vertebrates. PMID:19051268

  8. Whole Organism High-Content Screening by Label-Free, Image-Based Bayesian Classification for Parasitic Diseases

    PubMed Central

    Paveley, Ross A.; Mansour, Nuha R.; Hallyburton, Irene; Bleicher, Leo S.; Benn, Alex E.; Mikic, Ivana; Guidi, Alessandra; Gilbert, Ian H.; Hopkins, Andrew L.; Bickle, Quentin D.

    2012-01-01

    Sole reliance on one drug, Praziquantel, for treatment and control of schistosomiasis raises concerns about development of widespread resistance, prompting renewed interest in the discovery of new anthelmintics. To discover new leads we designed an automated label-free, high content-based, high throughput screen (HTS) to assess drug-induced effects on in vitro cultured larvae (schistosomula) using bright-field imaging. Automatic image analysis and Bayesian prediction models define morphological damage, hit/non-hit prediction and larval phenotype characterization. Motility was also assessed from time-lapse images. In screening a 10,041 compound library the HTS correctly detected 99.8% of the hits scored visually. A proportion of these larval hits were also active in an adult worm ex-vivo screen and are the subject of ongoing studies. The method allows, for the first time, screening of large compound collections against schistosomes and the methods are adaptable to other whole organism and cell-based screening by morphology and motility phenotyping. PMID:22860151
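
    The Bayesian hit/non-hit prediction can be sketched with a one-feature Gaussian naive Bayes model; the morphological "damage score" values below are invented, and the published models use many image-derived features rather than one.

```python
import math

def fit_gaussian_nb(values, labels):
    """Per-class mean, variance, and prior for a single morphology score."""
    model = {}
    for cl in set(labels):
        xs = [v for v, l in zip(values, labels) if l == cl]
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs) + 1e-9
        model[cl] = (mu, var, len(xs) / len(values))
    return model

def predict_nb(model, v):
    """Class with the highest Gaussian log-posterior for score v."""
    def log_post(cl):
        mu, var, prior = model[cl]
        return (math.log(prior) - 0.5 * math.log(2 * math.pi * var)
                - (v - mu) ** 2 / (2 * var))
    return max(model, key=log_post)

# Hypothetical damage scores from image analysis of schistosomula.
scores = [0.10, 0.15, 0.20, 0.85, 0.90, 0.95]
labels = ['non-hit'] * 3 + ['hit'] * 3
model = fit_gaussian_nb(scores, labels)
```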

  9. Quantitative radiomic profiling of glioblastoma represents transcriptomic expression.

    PubMed

    Kong, Doo-Sik; Kim, Junhyung; Ryu, Gyuha; You, Hye-Jin; Sung, Joon Kyung; Han, Yong Hee; Shin, Hye-Mi; Lee, In-Hee; Kim, Sung-Tae; Park, Chul-Kee; Choi, Seung Hong; Choi, Jeong Won; Seol, Ho Jun; Lee, Jung-Il; Nam, Do-Hyun

    2018-01-19

    Quantitative imaging biomarkers have increasingly emerged in research utilizing available imaging modalities. We aimed to identify good surrogate radiomic features that can represent genetic changes of tumors, thereby establishing a noninvasive means of predicting treatment outcome. From May 2012 to June 2014, we retrospectively identified 65 patients with treatment-naïve glioblastoma with available clinical information from the Samsung Medical Center data registry. Preoperative MR imaging data were obtained for all 65 patients with primary glioblastoma. A total of 82 imaging features, including first-order statistics, volume, and size features, were semi-automatically extracted from structural and physiologic images such as apparent diffusion coefficient and perfusion images. Using commercially available software, NordicICE, we performed quantitative imaging analysis and assembled a dataset of radiophenotypic parameters. Unsupervised clustering methods revealed that the radiophenotypic dataset was composed of three clusters. Each cluster represented a distinct molecular classification of glioblastoma: classical type, proneural and neural types, and mesenchymal type. These clusters also reflected differential clinical outcomes. We found that the extracted imaging signatures did not represent copy number variations or somatic mutations. Quantitative radiomic features provide potential evidence for predicting molecular phenotype and treatment outcome, and radiomic profiles closely represent transcriptomic phenotypes.
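
    The unsupervised clustering step can be illustrated with plain Lloyd's k-means on toy radiophenotypic feature vectors; the abstract does not specify which clustering algorithm was used, so k-means here is only a representative choice.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means: alternate assigning points to the nearest
    centre and recomputing centres as cluster means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[j].append(p)
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return [min(range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            for p in points]

# Two well-separated toy "radiophenotypic" profiles.
profiles = [[0.0, 0.0], [0.2, 0.0], [10.0, 10.0], [10.2, 10.0]]
assign = kmeans(profiles, k=2)
```

    In the study, the cluster memberships (rather than the toy assignments here) were then compared against the molecular subtype labels.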

  10. Classification of High Intensity Zones of the Lumbar Spine and Their Association with Other Spinal MRI Phenotypes: The Wakayama Spine Study.

    PubMed

    Teraguchi, Masatoshi; Samartzis, Dino; Hashizume, Hiroshi; Yamada, Hiroshi; Muraki, Shigeyuki; Oka, Hiroyuki; Cheung, Jason Pui Yin; Kagotani, Ryohei; Iwahashi, Hiroki; Tanaka, Sakae; Kawaguchi, Hiroshi; Nakamura, Kozo; Akune, Toru; Cheung, Kenneth Man-Chee; Yoshimura, Noriko; Yoshida, Munehito

    2016-01-01

    High intensity zones (HIZ) of the lumbar spine are a phenotype of the intervertebral disc noted on MRI whose clinical relevance has been debated. Traditionally, T2-weighted (T2W) magnetic resonance imaging (MRI) has been utilized to identify HIZ of lumbar discs. However, controversy exists with regard to HIZ morphology, topography, and association with other MRI spinal phenotypes. Moreover, classification of HIZ has not been thoroughly defined in the past, and the use of additional imaging parameters (e.g. T1W MRI) to assist in defining this phenotype has not been addressed. A cross-sectional study of 814 subjects (69.8% female) with a mean age of 63.6 years from a homogeneous Japanese population was performed. T2W and T1W sagittal 1.5T MRI was obtained on all subjects to assess HIZ from L1-S1. We created a morphological and topographical HIZ classification based on disc level, shape type (round, fissure, vertical, rim, and enlarged), location within the disc (posterior, anterior), and signal type on T1W MRI (low, high, and iso intensity) in comparison to the typical high intensity on T2W MRI. HIZ was noted in 38.0% of subjects. Of these, the prevalence of posterior, anterior, and both posterior/anterior HIZ in the overall lumbar spine was 47.3%, 42.4%, and 10.4%, respectively. Posterior HIZ was most common at L4/5 (32.5%) and L5/S1 (47.0%), whereas anterior HIZ was most common at L3/4 (41.8%). The T1W iso-intensity type of HIZ was most prevalent (71.8%), followed by the T1W high-intensity (21.4%) and T1W low-intensity (6.8%) types. Of all discs, round types were most prevalent (anterior: 3.6%, posterior: 3.7%), followed by the vertical type (posterior: 1.6%). At all affected levels, there was a significant association between HIZ and disc degeneration, disc bulge/protrusion, and Modic type II changes (p<0.01). Posterior HIZ and the T1W high-intensity type of HIZ were significantly associated with disc bulge/protrusion and disc degeneration (p<0.01). In addition, posterior HIZ was significantly associated with Modic types II and III, and the T1W low-intensity type of HIZ was significantly associated with Modic type II. This is the first large-scale study reporting a novel classification scheme for HIZ of the lumbar spine, and the first to utilize both T2W and T1W MRI in differentiating HIZ sub-phenotypes. Specific HIZ sub-phenotypes were found to be more associated with specific MRI degenerative changes. With a more detailed description of the HIZ phenotype, this scheme can be standardized for future clinical and research initiatives.

  11. HCS-Neurons: identifying phenotypic changes in multi-neuron images upon drug treatments of high-content screening.

    PubMed

    Charoenkwan, Phasit; Hwang, Eric; Cutler, Robert W; Lee, Hua-Chin; Ko, Li-Wei; Huang, Hui-Ling; Ho, Shinn-Ying

    2013-01-01

    High-content screening (HCS) has become a powerful tool for drug discovery. However, the discovery of drugs targeting neurons is still hampered by the inability to accurately identify and quantify the phenotypic changes of multiple neurons in a single image (a multi-neuron image) of a high-content screen. It is therefore desirable to develop an automated image analysis method for analyzing multi-neuron images. We propose an automated analysis method with novel descriptors of neuromorphology features for analyzing HCS-based multi-neuron images, called HCS-neurons. To observe multiple phenotypic changes of neurons, we propose two kinds of descriptors: a neuron feature descriptor (NFD) of 13 neuromorphology features, e.g., neurite length, and generic feature descriptors (GFDs), e.g., Haralick texture. HCS-neurons can 1) automatically extract all quantitative phenotype features in both NFD and GFDs, 2) identify statistically significant phenotypic changes upon drug treatment using ANOVA and regression analysis, and 3) generate an accurate classifier to group neurons treated with different drug concentrations using a support vector machine and an intelligent feature selection method. To evaluate HCS-neurons, we treated P19 neurons with nocodazole (a microtubule-depolymerizing drug shown to impair neurite development) at six concentrations ranging from 0 to 1000 ng/mL. The experimental results show that all 13 NFD features differ significantly across the various levels of nocodazole drug concentration (NDC), and the phenotypic changes of neurites were consistent with the known effect of nocodazole in promoting neurite retraction. Three identified features, total neurite length, average neurite length, and average neurite area, achieved an independent test accuracy of 90.28% for the six-dosage classification problem. The NFD module and neuron image datasets are provided as a freely downloadable MATLAB project at http://iclab.life.nctu.edu.tw/HCS-Neurons. Few automatic methods focus on analyzing multi-neuron images collected from HCS for drug discovery. We provide an automatic HCS-based method for generating accurate classifiers to classify neurons based on their phenotypic changes upon drug treatment. The proposed HCS-neurons method is helpful for identifying and classifying chemical or biological molecules that alter the morphology of a group of neurons in HCS.
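
    The ANOVA test applied to each neuromorphology feature across dose groups reduces to the one-way F statistic: between-group variance over within-group variance. A minimal sketch (the dose-group measurements in the test are invented):

```python
def anova_f(groups):
    """One-way ANOVA F statistic for one feature measured over several
    dose groups (lists of per-neuron values)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

    A large F for a feature such as total neurite length indicates that its mean shifts systematically with drug concentration, which is what flags the feature as dose-responsive.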

  12. Associations between pituitary imaging abnormalities and clinical and biochemical phenotypes in children with congenital growth hormone deficiency: data from an international observational study.

    PubMed

    Deal, Cheri; Hasselmann, Caroline; Pfäffle, Roland W; Zimmermann, Alan G; Quigley, Charmian A; Child, Christopher J; Shavrikova, Elena P; Cutler, Gordon B; Blum, Werner F

    2013-01-01

    Magnetic resonance imaging (MRI) is used to investigate the etiology of growth hormone deficiency (GHD). This study examined relationships between MRI findings and clinical/hormonal phenotypes in children with GHD in the observational Genetics and Neuroendocrinology of Short Stature International Study, GeNeSIS. Clinical presentation, hormonal status and first-year GH response were compared between patients with pituitary imaging abnormalities (n = 1,071), patients with mutations in genes involved in pituitary development/GH secretion (n = 120) and patients with idiopathic GHD (n = 7,039). Patients with hypothalamic-pituitary abnormalities had more severe phenotypes than patients with idiopathic GHD. Additional hormonal deficiencies were found in 35% of patients with structural abnormalities (thyroid-stimulating hormone > adrenocorticotropic hormone > luteinizing hormone/follicle-stimulating hormone > antidiuretic hormone), most frequently in patients with septo-optic dysplasia (SOD). Patients with the triad [ectopic posterior pituitary (EPP), pituitary aplasia/hypoplasia and stalk defects] had a more severe phenotype and better response to GH treatment than patients with isolated abnormalities. The sex ratio was approximately equal for patients with SOD, but there was a significantly higher proportion of males (approximately 70%) in the EPP, pituitary hypoplasia, stalk defects, and triad categories. This large, international database demonstrates the value of classification of GH-deficient patients by the presence and type of hypothalamic-pituitary imaging abnormalities. This information may assist family counseling and patient management. Copyright © 2013 S. Karger AG, Basel.

  13. Deep Learning in Label-free Cell Classification

    PubMed Central

    Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K.; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram

    2016-01-01

    Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells. PMID:26975219
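
    Of the learners compared, logistic regression is the simplest to sketch; below is a per-sample gradient-descent version on two invented biophysical features (the real system trains on many features extracted from time-stretch images).

```python
import math

def train_logreg(X, y, lr=0.5, epochs=500):
    """Logistic regression trained by per-sample gradient descent on
    the log loss; returns weights and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log loss w.r.t. z
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, x):
    """Class 1 if the linear score is positive, else class 0."""
    return 1 if b + sum(wj * xj for wj, xj in zip(w, x)) > 0 else 0

# Toy separable data: the first feature determines the class.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]
w, b = train_logreg(X, y)
```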

  14. Deep Learning in Label-free Cell Classification

    NASA Astrophysics Data System (ADS)

    Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K.; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram

    2016-03-01

    Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.

  15. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    PubMed

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets relies either on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the measured objects, and a loss of information intrinsic to band selection and the use of spectral indices. In this paper, we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch, or airborne conidia on the berries, that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach especially improved the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
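
    The two-step pipeline described above (LDA to derive a few descriptive image bands, then Random Forest classification) can be sketched as follows. All data here are synthetic stand-ins for hyperspectral pixels; the integral-image texture extraction from the paper is omitted, and names and shapes are illustrative assumptions.

```python
# Sketch of the spatial-spectral pipeline: compress the spectral
# dimension with LDA, then classify the derived "image bands" with a
# Random Forest. Synthetic pixels stand in for hyperspectral data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 600, 120, 3
X = rng.normal(size=(n_pixels, n_bands))          # reflectance per band
y = rng.integers(0, n_classes, size=n_pixels)     # infection severity class
X += y[:, None] * 0.5                             # make classes separable

# Step 1: LDA reduces the spectrum to at most n_classes - 1 bands
lda = LinearDiscriminantAnalysis(n_components=n_classes - 1)
X_bands = lda.fit_transform(X, y)                 # shape (n_pixels, 2)

# Step 2: Random Forest classifies pixels in the reduced space
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_bands, y)
accuracy = rf.score(X_bands, y)
print(f"training accuracy: {accuracy:.2f}")
```

    Reducing the dimensionality first means texture features need only be computed for a handful of derived bands rather than for every spectral band, which is the efficiency argument made in the abstract.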

  16. A complete system for 3D reconstruction of roots for phenotypic analysis.

    PubMed

    Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J

    2015-01-01

    Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated as it is self-calibrating. The system starts with the detection of root tips in root images from an image sequence generated by a turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points on the root boundary and the Bayes classification rule. The detected root tips are tracked through the image sequence using a multi-target tracking algorithm. Conics are fitted to the root tip trajectories using a novel ellipse fitting algorithm that weighs the data points by their eccentricity. The conics projected from the circular trajectories have complex conjugate intersections, which are the images of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the image of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are used to reconstruct a 3D voxel model of the roots. We show results of real 3D reconstructions of roots that are detailed and realistic enough for phenotypic analysis.
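
    The conic-fitting step above can be illustrated with a minimal least-squares sketch: a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 is fit to tracked 2D points by taking the null space of the design matrix via SVD. The eccentricity-weighted variant from the paper is not reproduced here; this is plain unweighted least squares on synthetic points.

```python
# Fit a general conic to 2D points via the SVD null space of the
# design matrix. Exact for noise-free points on a conic.
import numpy as np

def fit_conic(points):
    x, y = points[:, 0], points[:, 1]
    # One row [x^2, x*y, y^2, x, y, 1] per point
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The conic coefficient vector spans the null space of D
    _, _, vt = np.linalg.svd(D)
    return vt[-1]  # (a, b, c, d, e, f), defined up to scale

# Points sampled from the unit circle x^2 + y^2 - 1 = 0
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
coef = fit_conic(pts)
coef /= coef[0]  # normalize so that a = 1
# Recovers approximately (1, 0, 1, 0, 0, -1), i.e. the circle
```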

  17. Hepatocellular Adenoma: Evaluation with Contrast-Enhanced Ultrasound and MRI and Correlation with Pathologic and Phenotypic Classification in 26 Lesions

    PubMed Central

    Manichon, Anne-Frédérique; Bancel, Brigitte; Durieux-Millon, Marion; Ducerf, Christian; Mabrut, Jean-Yves; Lepogam, Marie-Annick; Rode, Agnès

    2012-01-01

    Purpose. To review the contrast-enhanced ultrasonographic (CEUS) and magnetic resonance (MR) imaging findings in 25 patients with 26 hepatocellular adenomas (HCAs) and to compare imaging features with histopathologic results from resected specimens, considering the new immunophenotypical classification. Material and Methods. Two abdominal radiologists retrospectively reviewed CEUS cineloops and MR images in 26 HCAs. All pathological specimens were reviewed and classified into four subgroups (steatotic or HNF1α-mutated, inflammatory, atypical or β-catenin-mutated, and unspecified). Inflammatory infiltrates were scored, and steatosis and telangiectasia were semiquantitatively evaluated. Results. CEUS and MRI features were well correlated: among the 16 inflammatory HCAs, 7/16 presented typical imaging features: hypersignal on T2, strong arterial enhancement with centripetal filling, persistent on the delayed phase. Six HCAs were classified as steatotic, with typical imaging features: a signal drop-out, slight arterial enhancement, vanishing on the late phase. Four HCAs were classified as atypical, with an HCC developed in one. Five lesions displayed marked steatosis (>50%) without belonging to the HNF1α group. Conclusion. In half of the cases, inflammatory HCAs have specific imaging features well correlated with the amount of telangiectasia and inflammatory infiltrates. An HCA with a large amount of steatosis noticed on chemical shift images does not always belong to the HNF1α group. PMID:22811588

  18. Grouping patients for masseter muscle genotype-phenotype studies.

    PubMed

    Moawad, Hadwah Abdelmatloub; Sinanan, Andrea C M; Lewis, Mark P; Hunt, Nigel P

    2012-03-01

    To use various facial classifications, including either/both vertical and horizontal facial criteria, to assess their effects on the interpretation of masseter muscle (MM) gene expression. Fresh MM biopsies were obtained from 29 patients (age, 16-36 years) with various facial phenotypes. Based on clinical and cephalometric analysis, patients were grouped using three different classifications: (1) basic vertical, (2) basic horizontal, and (3) combined vertical and horizontal. Gene expression levels of the myosin heavy chain genes MYH1, MYH2, MYH3, MYH6, MYH7, and MYH8 were recorded using quantitative reverse transcriptase polymerase chain reaction (RT-PCR) and were related to the various classifications. The significance level for statistical analysis was set at P ≤ .05. Using classification 1, none of the MYH genes were found to be significantly different between long face (LF) patients and the average vertical group. Using classification 2, MYH3, MYH6, and MYH7 genes were found to be significantly upregulated in retrognathic patients compared with prognathic and average horizontal groups. Using classification 3, only the MYH7 gene was found to be significantly upregulated in retrognathic LF compared with prognathic LF, prognathic average vertical faces, and average vertical and horizontal groups. The use of basic vertical or basic horizontal facial classifications may not be sufficient for genetics-based studies of facial phenotypes. Prognathic and retrognathic facial phenotypes have different MM gene expressions; therefore, it is not recommended to combine them into one single group, even though they may have a similar vertical facial phenotype.

  19. IRIS COLOUR CLASSIFICATION SCALES – THEN AND NOW

    PubMed Central

    Grigore, Mariana; Avram, Alina

    2015-01-01

    Eye colour is one of the most obvious phenotypic traits of an individual. Since the first documented classification scale, developed in 1843, there have been numerous attempts to classify iris colour. In past centuries, iris colour classification scales have had various colour categories and mostly relied on comparison of an individual’s eye with painted glass eyes. Once photography techniques were refined, standard iris photographs replaced painted eyes, but this did not solve the problem of painted/printed colour variability over time. Early clinical scales were easy to use, but lacked objectivity and were not standardised or statistically tested for reproducibility. The era of automated iris colour classification systems came with technological development. Spectrophotometry, digital analysis of high-resolution iris images, hyperspectral analysis of the real human iris, and dedicated iris colour analysis software have all accomplished objective, accurate iris colour classification, but are quite expensive and limited in use to research environments. Iris colour classification systems have evolved continuously due to their use in a wide range of studies, especially in the fields of anthropology, epidemiology and genetics. Despite the wide range of existing scales, up to the present there has been no generally accepted iris colour classification scale. PMID:27373112

  1. Baseline Gray- and White Matter Volume Predict Successful Weight Loss in the Elderly

    PubMed Central

    Mokhtari, Fatemeh; Paolini, Brielle M.; Burdette, Jonathan H.; Marsh, Anthony P.; Rejeski, W. Jack; Laurienti, Paul J.

    2016-01-01

    Objective The purpose of this study is to investigate whether structural brain phenotypes can be used to predict weight loss success following behavioral interventions in older adults who are overweight or obese and have cardiometabolic dysfunction. Methods A support vector machine (SVM) with a repeated random subsampling validation approach was used to classify participants into the upper and lower halves of the weight loss distribution following 18 months of a weight loss intervention. Predictions were based on baseline brain gray matter (GM) and white matter (WM) volume from 52 individuals who completed the intervention and a magnetic resonance imaging session. Results The SVM resulted in an average classification accuracy of 72.62% based on GM and WM volume. A receiver operating characteristic analysis indicated that classification performance was robust, based on an area under the curve of 0.82. Conclusions Our findings suggest that baseline brain structure is able to predict weight loss success following 18 months of treatment. The identification of brain structure as a predictor of successful weight loss is an innovative approach to identifying phenotypes for responsiveness to intensive lifestyle interventions. This phenotype could prove useful in future research focusing on the tailoring of treatment for weight loss. PMID:27804273
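
    The validation scheme described above (an SVM evaluated by repeated random subsampling, classifying participants into the upper versus lower half of the outcome distribution) can be sketched as follows. Brain-volume features are simulated; the sample size matches the abstract, but everything else is an illustrative assumption.

```python
# SVM with repeated random subsampling validation on simulated
# "brain volume" features; labels split the outcome at its median.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, p = 52, 20                        # 52 participants, p volume features
X = rng.normal(size=(n, p))
weight_loss = X[:, 0] + rng.normal(scale=0.5, size=n)
y = (weight_loss > np.median(weight_loss)).astype(int)  # upper vs lower half

accs = []
for seed in range(100):              # repeated random subsampling
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=seed)
    accs.append(SVC(kernel="linear").fit(Xtr, ytr).score(Xte, yte))
mean_acc = float(np.mean(accs))
print(f"mean classification accuracy over 100 splits: {mean_acc:.2f}")
```

    Averaging over many random splits gives a more stable accuracy estimate than a single hold-out when only 52 samples are available.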

  3. Deep Learning in Label-free Cell Classification

    DOE PAGES

    Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; ...

    2016-03-15

    Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development as it avoids adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely only on a single feature and lack sufficient differentiation. Also, the sample size analyzed by these assays is limited due to their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms including artificial neural network, support vector machine, logistic regression, and a novel deep learning pipeline, which adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we show classification of white blood T-cells against colon cancer cells, as well as lipid accumulating algal strains for biofuel production. In conclusion, this system opens up a new path to data-driven phenotypic diagnosis and better understanding of the heterogeneous gene expressions in cells.
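
    The classifier comparison mentioned above (neural network, support vector machine, logistic regression on a biophysical feature matrix) can be sketched as below. Features are synthetic; the photonic time-stretch imaging and the custom ROC-optimized deep pipeline from the paper are not modeled.

```python
# Compare several supervised classifiers on one feature matrix using
# 5-fold cross-validation; synthetic data stand in for the extracted
# biophysical features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=16, n_informative=8,
                           random_state=0)
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                    random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: {s:.2f}")
```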

  4. Robust Classification of Small-Molecule Mechanism of Action Using a Minimalist High-Content Microscopy Screen and Multidimensional Phenotypic Trajectory Analysis

    PubMed Central

    Twarog, Nathaniel R.; Low, Jonathan A.; Currier, Duane G.; Miller, Greg; Chen, Taosheng; Shelat, Anang A.

    2016-01-01

    Phenotypic screening through high-content automated microscopy is a powerful tool for evaluating the mechanism of action of candidate therapeutics. Despite more than a decade of development, however, high-content assays have yielded mixed results, identifying robust phenotypes in only a small subset of compound classes. This has led to a combinatorial explosion of assay techniques, analyzing cellular phenotypes across dozens of assays with hundreds of measurements. Here, using a minimalist three-stain assay and only 23 basic cellular measurements, we developed an analytical approach that leverages informative dimensions extracted by linear discriminant analysis to evaluate similarity between the phenotypic trajectories of different compounds in response to a range of doses. This method enabled us to visualize biologically interpretable phenotypic tracks populated by compounds of similar mechanism of action, cluster compounds according to phenotypic similarity, and classify novel compounds by comparing them to phenotypically active exemplars. Hierarchical clustering applied to 154 compounds from over a dozen different mechanistic classes demonstrated tight agreement with published compound mechanism classification. Using 11 phenotypically active mechanism classes, classification was performed on all 154 compounds: 78% were correctly identified as belonging to one of the 11 exemplar classes or to a different unspecified class, with accuracy increasing to 89% when less phenotypically active compounds were excluded. Importantly, several apparent clustering and classification failures, including rigosertib and 5-fluoro-2’-deoxycytidine, instead revealed more complex mechanisms or off-target effects verified by more recent publications. These results show that a simple, easily replicated, minimalist high-content assay can reveal subtle variations in the cellular phenotype induced by compounds and can correctly predict mechanism of action, as long as the appropriate analytical tools are used. PMID:26886014
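
    The analysis flow above can be sketched in miniature: project per-cell measurements with LDA, summarize each compound by its mean position in the discriminant space, then hierarchically cluster the compounds. All data are synthetic, compound and mechanism counts are illustrative, and the dose-trajectory comparison from the paper is not reproduced.

```python
# LDA projection of per-cell features followed by hierarchical
# clustering of per-compound mean profiles (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
n_features = 23                            # basic cellular measurements
compounds = range(12)
mech_of = {c: c % 3 for c in compounds}    # 3 hypothetical mechanism classes
X, labels, comp_ids = [], [], []
for c in compounds:
    cells = rng.normal(size=(60, n_features)) + mech_of[c] * 1.5
    X.append(cells)
    labels += [mech_of[c]] * 60
    comp_ids += [c] * 60
X = np.vstack(X); labels = np.array(labels); comp_ids = np.array(comp_ids)

# Informative dimensions via LDA, as in the abstract
Z = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, labels)

# Mean discriminant-space profile per compound, then clustering
profiles = np.array([Z[comp_ids == c].mean(axis=0) for c in compounds])
tree = linkage(profiles, method="average")
clusters = fcluster(tree, t=3, criterion="maxclust")
# Compounds sharing a mechanism should land in the same cluster
```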

  6. Effect of phenotype on health care costs in Crohn's disease: A European study using the Montreal classification.

    PubMed

    Odes, Selwyn; Vardi, Hillel; Friger, Michael; Wolters, Frank; Hoie, Ole; Moum, Bjørn; Bernklev, Tomm; Yona, Hagit; Russel, Maurice; Munkholm, Pia; Langholz, Ebbe; Riis, Lene; Politi, Patrizia; Bondini, Paolo; Tsianos, Epameinondas; Katsanos, Kostas; Clofent, Juan; Vermeire, Severine; Freitas, João; Mouzas, Iannis; Limonard, Charles; O'Morain, Colm; Monteiro, Estela; Fornaciari, Giovanni; Vatn, Morten; Stockbrugger, Reinhold

    2007-12-01

    Crohn's disease (CD) is a chronic inflammation of the gastrointestinal tract associated with life-long high health care costs. We aimed to determine the effect of disease phenotype on cost. Clinical and economic data of a community-based CD cohort with 10-year follow-up were analyzed retrospectively in relation to Montreal classification phenotypes. In 418 patients, mean total costs of health care for the behavior phenotypes were: nonstricturing-nonpenetrating 1690, stricturing 2081, penetrating 3133 and penetrating-with-perianal-fistula 3356 €/patient-phenotype-year (P<0.001), and mean costs of surgical hospitalization 215, 751, 1293 and 1275 €/patient-phenotype-year respectively (P<0.001). Penetrating-with-perianal-fistula patients incurred significantly greater expenses than penetrating patients for total care, diagnosis and drugs, but not surgical hospitalization. Total costs were similar in the location phenotypes: ileum 1893, colon 1748, ileo-colonic 2010 and upper gastrointestinal tract 1758 €/patient-phenotype-year, but surgical hospitalization costs differed significantly, 558, 209, 492 and 542 €/patient-phenotype-year respectively (P<0.001). By multivariate analysis, the behavior phenotype significantly impacted total, medical and surgical hospitalization costs, whereas the location phenotype affected only surgical costs. Younger age at diagnosis predicted greater surgical expenses. Behavior is the dominant phenotype driving health care cost. Use of the Montreal classification permits detection of cost differences caused by perianal fistula.

  7. Detection by hyperspectral imaging of shiga toxin-producing Escherichia coli serogroups O26, O45, O103, O111, O121, and O145 on rainbow agar.

    PubMed

    Windham, William R; Yoon, Seung-Chul; Ladely, Scott R; Haley, Jennifer A; Heitschmidt, Jerry W; Lawrence, Kurt C; Park, Bosoon; Narrang, Neelam; Cray, William C

    2013-07-01

    The U.S. Department of Agriculture, Food Safety Inspection Service has determined that six non-O157 Shiga toxin-producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) are adulterants in raw beef. Isolation and phenotypic discrimination of non-O157 STEC are problematic due to the lack of suitable agar media. The lack of distinct phenotypic color variation among non-O157 serogroups cultured on chromogenic agar poses a challenge in selecting colonies for confirmation. In this study, visible and near-infrared hyperspectral imaging and chemometrics were used to detect and classify non-O157 STEC serogroups grown on Rainbow agar O157. The method was first developed by building spectral libraries for each serogroup obtained from ground-truth regions of interest representing the true identity of each pixel, and thus each pure-culture colony, in the hyperspectral agar-plate image. The spectral library for the pure-culture non-O157 STEC consisted of 2,171 colonies, with spectra derived from 124,347 pixels. The classification models for each serogroup were developed with a k-nearest-neighbor classifier. The overall classification training accuracy at the colony level was 99%. The classifier was validated with ground beef enrichments artificially inoculated with 10, 50, and 100 CFU/ml STEC. The validation ground-truth regions of interest of the STEC target colonies consisted of 606 colonies, with 3,030 pixels of spectra. The overall classification accuracy was 98%. The average specificity of the method was 98%, due to the low false-positive rate of 1.2%. The sensitivity ranged from 78 to 100% due to false-negative rates of 22, 7, and 8% for O145, O45, and O26, respectively. This study showed the potential of visible and near-infrared hyperspectral imaging for detecting and classifying colonies of the six non-O157 STEC serogroups. The technique needs to be validated with bacterial cultures directly extracted from meat products and positive identification of colonies using confirmatory tests such as latex agglutination tests or PCR.
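
    The pixel-level classification step above can be sketched as a k-nearest-neighbor classifier trained on a library of per-pixel spectra, with a colony-level call made by majority vote over that colony's pixels. Spectra here are synthetic; only the serogroup names come from the abstract.

```python
# k-NN on a synthetic per-pixel spectral library, with colony-level
# classification by majority vote over pixel predictions.
import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
serogroups = ["O26", "O45", "O103", "O111", "O121", "O145"]
n_bands = 30

# Spectral library: a mean spectrum per serogroup plus pixel noise
means = {s: rng.normal(size=n_bands) for s in serogroups}
X, y = [], []
for s in serogroups:
    X.append(means[s] + rng.normal(scale=0.3, size=(200, n_bands)))
    y += [s] * 200
knn = KNeighborsClassifier(n_neighbors=5).fit(np.vstack(X), y)

def classify_colony(pixel_spectra):
    """Majority vote over per-pixel k-NN predictions."""
    votes = knn.predict(pixel_spectra)
    return Counter(votes).most_common(1)[0][0]

colony = means["O103"] + rng.normal(scale=0.3, size=(40, n_bands))
print(classify_colony(colony))
```

    Voting over all pixels of a colony makes the colony-level call much more robust than any single pixel's spectrum.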

  8. Harmonizing the Classification of Age-related Macular Degeneration in the Three Continent AMD Consortium

    PubMed Central

    Klein, Ronald; Meuer, Stacy M.; Myers, Chelsea E.; Buitendijk, Gabriëlle H. S.; Rochtchina, Elena; Choudhury, Farzana; de Jong, Paulus T. V. M.; McKean-Cowdin, Roberta; Iyengar, Sudha K.; Gao, Xiaoyi; Lee, Kristine E.; Vingerling, Johannes R.; Mitchell, Paul; Klaver, Caroline C. W.; Wang, Jie Jin; Klein, Barbara E. K.

    2014-01-01

    Purpose To describe methods to harmonize the classification of age-related macular degeneration (AMD) phenotypes across four population-based cohort studies: the Beaver Dam Eye Study (BDES), Blue Mountains Eye Study (BMES), Los Angeles Latino Eye Study (LALES), and Rotterdam Study (RS). Methods AMD grading protocols, definitions of categories, and grading forms from each study were compared to determine whether there were systematic differences in AMD severity definitions and lesion categorization among the three grading centers. Each center graded the same set of 60 images using their respective systems to determine presence and severity of AMD lesions. A common five-step AMD severity scale and definitions of lesion measurement cutpoints and early and late AMD were developed from this exercise. Results Applying this severity scale changed the age-sex adjusted prevalence of early AMD from 18.7% to 20.3% in BDES, from 4.7% to 14.4% in BMES, from 14.1% to 15.8% in LALES, and from 7.5% to 17.1% in RS. Age-sex adjusted prevalences of late AMD remained unchanged. Comparison of each center’s grades of the 60 images converted to the consortium scale showed that exact agreement of AMD severity among centers varied from 61.0% to 81.4%, and one-step agreement varied from 84.7% to 98.3%. Conclusion Harmonization of AMD classification reduced categorical differences in phenotypic definitions across the studies, resulted in a new 5-step AMD severity scale, and enhanced similarity of AMD prevalence among four cohorts. Despite harmonization it may still be difficult to remove systematic differences in grading, if present. PMID:24467558
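
    The agreement statistics reported above (exact and one-step agreement between centers on a common five-step severity scale) reduce to simple arithmetic, sketched here on synthetic grades for 60 images.

```python
# Exact and one-step agreement between two centers' grades on a
# five-step severity scale (synthetic illustration).
import numpy as np

rng = np.random.default_rng(7)
grades_a = rng.integers(1, 6, size=60)   # center A, severity steps 1-5
offsets = rng.integers(-2, 3, size=60)   # center B deviates by -2..+2 steps
grades_b = np.clip(grades_a + offsets, 1, 5)

exact = float(np.mean(grades_a == grades_b))
one_step = float(np.mean(np.abs(grades_a - grades_b) <= 1))
print(f"exact agreement: {exact:.1%}, one-step agreement: {one_step:.1%}")
```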

  9. Dynamic species classification of microorganisms across time, abiotic and biotic environments—A sliding window approach

    PubMed Central

    Griffiths, Jason I.; Fronhofer, Emanuel A.; Garnier, Aurélie; Seymour, Mathew; Altermatt, Florian; Petchey, Owen L.

    2017-01-01

    The development of video-based monitoring methods allows for rapid, dynamic and accurate monitoring of individuals or communities compared to slower traditional methods, with far-reaching ecological and evolutionary applications. Large amounts of data are generated using video-based methods, which can be effectively processed using machine learning (ML) algorithms into meaningful ecological information. ML uses user-defined classes (e.g. species), derived from a subset (i.e. training data) of video-observed quantitative features (e.g. phenotypic variation), to infer classes in subsequent observations. However, phenotypic variation often changes due to environmental conditions, which may lead to poor classification if environmentally induced variation in phenotypes is not accounted for. Here we describe a framework for classifying species under changing environmental conditions based on random forest classification. A sliding window approach was developed that restricts the temporal and environmental conditions used for training, to improve the classification. We tested our approach by applying the classification framework to experimental data. The experiment used a set of six ciliate species to monitor changes in community structure and behavior over hundreds of generations, in dozens of species combinations and across a temperature gradient. Differences in biotic and abiotic conditions caused simplistic classification approaches to be unsuccessful. In contrast, the sliding window approach allowed classification to be highly successful, as phenotypic differences driven by environmental change could be captured by the classifier. Importantly, classification using the random forest algorithm showed comparable success when validated against traditional, slower, manual identification. Our framework allows for reliable classification in dynamic environments, and may help to improve strategies for long-term monitoring of species in changing environments. Our classification pipeline can be applied in fields assessing species community dynamics, such as eco-toxicology, ecology and evolutionary ecology. PMID:28472193
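
    The sliding-window idea above can be sketched as follows: instead of one global model, a random forest is trained only on observations whose time and temperature fall near the sample being classified, so environmentally induced phenotype drift stays inside the window. Data are synthetic; the window sizes and the single drifting trait are illustrative assumptions.

```python
# Sliding-window random forest: each sample is classified by a forest
# trained only on its temporal/environmental neighborhood.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n = 2000
time = rng.uniform(0, 100, n)           # e.g. days of the experiment
temp = rng.uniform(15, 25, n)           # temperature gradient
species = rng.integers(0, 2, n)
# Phenotype drifts with time and temperature on top of the species signal
trait = species * 2.0 + 0.03 * time + 0.1 * temp + rng.normal(scale=0.5, size=n)
X = np.column_stack([trait])

def sliding_window_predict(i, t_win=10.0, temp_win=2.0):
    """Classify observation i with a forest trained on its neighborhood."""
    mask = (np.abs(time - time[i]) < t_win) & (np.abs(temp - temp[i]) < temp_win)
    mask[i] = False                     # exclude the sample itself
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    rf.fit(X[mask], species[mask])
    return rf.predict(X[i:i + 1])[0]

preds = [sliding_window_predict(i) for i in range(50)]
acc = float(np.mean([p == species[i] for i, p in enumerate(preds)]))
print(f"sliding-window accuracy on 50 samples: {acc:.2f}")
```

    Within a narrow window the environmental drift is small relative to the species signal, which is why the restricted training set classifies better than a single model fit across all conditions.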

  10. Biomedical visual data analysis to build an intelligent diagnostic decision support system in medical genetics.

    PubMed

    Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba

    2014-10-01

    In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, a visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases, i.e., 92 cases in total. An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p<0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Our results show that the accurate classification of syndromes is feasible using ML techniques. Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with diagnostic DSSs similar to that described in the present study, i.e., the visual diagnostic DSS, thereby demonstrating the benefits of using hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.
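
    The statistical core described above (images described as principal component features, classified under leave-one-out cross-validation) can be sketched as below. Flattened "images" are synthetic, the classifier choice is an assumption, and the face preprocessing pipeline of the paper is not modeled; only the case and syndrome counts follow the abstract.

```python
# PCA features + nearest-neighbor classification with leave-one-out
# cross-validation on synthetic "face image" vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n_cases, n_pixels, n_syndromes = 92, 256, 15
y = rng.integers(0, n_syndromes, size=n_cases)
X = rng.normal(size=(n_cases, n_pixels)) + y[:, None] * 0.4

clf = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```

    Leave-one-out is a natural choice here because with only 5-9 cases per syndrome, holding out a larger validation set would leave too few training examples per class.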

  11. Advances in Management of Esophageal Motility Disorders.

    PubMed

    Kahrilas, Peter J; Bredenoord, Albert J; Carlson, Dustin A; Pandolfino, John E

    2018-04-24

    The widespread adoption of high-resolution manometry (HRM) has led to a restructuring of esophageal motility disorder classification, summarized in the Chicago Classification, currently in version 3.0. It has become apparent that the cardinal feature of achalasia, impaired lower esophageal sphincter relaxation, can occur in several disease phenotypes: without peristalsis, with premature (spastic) distal esophageal contractions, with panesophageal pressurization, or even with preserved peristalsis. Furthermore, despite these advances in diagnostics, no single manometric pattern is perfectly sensitive or specific for idiopathic achalasia, and complementary assessments with provocative maneuvers during HRM, or interrogation of the esophagogastric junction with the functional luminal imaging probe during endoscopy, can be useful in clarifying equivocal or inexplicable HRM findings. Using these tools, we have come to conceptualize esophageal motility disorders as characterized by obstructive physiology at the esophagogastric junction, in the smooth muscle esophagus, or both. Recognizing obstructive physiology as a primary target of therapy has become particularly relevant with the development of a minimally invasive technique for performing a calibrated myotomy of the esophageal circular muscle, the POEM procedure. Now and going forward, optimal management is to render treatment in a phenotype-specific manner: e.g., POEM calibrated to patient-specific physiology for spastic achalasia and spastic disorders of the smooth muscle esophagus, and more conservative strategies (pneumatic dilation) for disorders limited to the sphincter. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.

  12. Grape colour phenotyping: development of a method based on the reflectance spectrum.

    PubMed

    Rustioni, Laura; Basilico, Roberto; Fiori, Simone; Leoni, Alessandra; Maghradze, David; Failla, Osvaldo

    2013-01-01

The colour of fruit is an important quality factor for cultivar classification and phenotyping techniques. Besides subjective visual evaluation, new instruments and techniques can be used. This work aims at developing an objective, fast, easy and non-destructive method as a useful support for evaluating grape colour under different cultural and environmental conditions, as well as for breeding and germplasm evaluation, supporting plant characterization and biodiversity preservation. Colours of 120 grape varieties were studied using reflectance spectra. The classification was realized using cluster and discriminant analysis. Reflectance of the whole berry surface was also compared with absorption properties of single skin extracts. A phenotyping method based on the reflectance spectrum was developed, producing reliable colour classifications. A cultivar-independent index for pigment content evaluation was also obtained. This work allowed the classification of berry colour using an objective method. Copyright © 2013 John Wiley & Sons, Ltd.
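The cluster-analysis step can be sketched on synthetic reflectance spectra. Everything below (wavelength grid, Gaussian-bump spectra, noise level) is invented for illustration; only the cluster-whole-spectra workflow mirrors the method described.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 31)  # visible range, nm

def reflectance(peak_nm):
    # Hypothetical berry reflectance: a Gaussian bump plus measurement noise.
    return (np.exp(-((wavelengths - peak_nm) / 60.0) ** 2)
            + rng.normal(0, 0.02, wavelengths.size))

# Ten berries reflecting around 550 nm ("green") and ten around 650 nm ("red").
spectra = np.array([reflectance(550) for _ in range(10)]
                   + [reflectance(650) for _ in range(10)])

# Hierarchical clustering of the whole-spectrum vectors into two colour groups.
groups = fcluster(linkage(spectra, method="ward"), t=2, criterion="maxclust")
```

With well-separated colours the two clusters coincide with the two berry groups; a discriminant model could then be fit on the resulting labels.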

  13. Leaf epidermis images for robust identification of plants

    PubMed Central

    da Silva, Núbia Rosa; Oliveira, Marcos William da Silva; Filho, Humberto Antunes de Almeida; Pinheiro, Luiz Felipe Souza; Rossatto, Davi Rodrigo; Kolb, Rosana Marta; Bruno, Odemir Martinez

    2016-01-01

This paper proposes a methodology for plant analysis and identification based on extracting texture features from microscopic images of leaf epidermis. All the experiments were carried out using 32 plant species with 309 epidermal samples captured by an optical microscope coupled to a digital camera. The results of the computational methods using texture features were compared to the conventional approach, in which quantitative measurements of stomatal traits (density, length and width) were obtained manually. Epidermis image classification using texture achieved a success rate of over 96%, while the success rate was around 60% for the manual quantitative measurements. Furthermore, we verified the robustness of our method with respect to the natural phenotypic plasticity of stomata, analysing samples from the same species grown in different environments. Texture methods remained robust even when phenotypic plasticity of stomatal traits was considered, with a decrease of 20% in the success rate, whereas the quantitative measurements proved highly sensitive to it, with a decrease of 77%. The comparison between the computational approach and the conventional quantitative measurements shows that computational systems are advantageous and promising for solving problems in botany, such as species identification. PMID:27217018
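As a toy version of texture-based classification (the paper's descriptors and classifier are different and richer), the sketch below separates two synthetic "epidermis" classes using simple intensity and gradient-energy features with a nearest-centroid rule; all parameters and class names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def texture_features(img):
    # Illustrative texture descriptors: intensity statistics plus
    # horizontal and vertical gradient energy.
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.array([img.mean(), img.std(), (gx ** 2).mean(), (gy ** 2).mean()])

def make_sample(kind):
    # Hypothetical stand-ins for two species' epidermis patches:
    # low-noise "smooth" texture vs heavily textured "rough" patches.
    patch = rng.normal(0.5, 0.05, (32, 32))
    if kind == "rough":
        patch = patch + rng.normal(0, 0.2, (32, 32))
    return patch

# Nearest-centroid training: average feature vector per class.
centroids = {k: np.mean([texture_features(make_sample(k)) for _ in range(20)],
                        axis=0)
             for k in ("smooth", "rough")}

def classify(img):
    f = texture_features(img)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

# Accuracy on fresh samples.
accuracy = np.mean([classify(make_sample(k)) == k
                    for k in ("smooth", "rough") for _ in range(10)])
```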

  14. Multiparametric MRI characterization and prediction in autism spectrum disorder using graph theory and machine learning.

    PubMed

    Zhou, Yongxia; Yu, Fang; Duong, Timothy

    2014-01-01

This study employed graph theory and machine learning analysis of multiparametric MRI data to improve characterization and prediction in autism spectrum disorders (ASD). Data from 127 children with ASD (13.5±6.0 years) and 153 age- and gender-matched typically developing children (14.5±5.7 years) were selected from the multi-center Functional Connectome Project. Regional gray matter volume and cortical thickness increased, whereas white matter volume decreased in ASD compared to controls. Small-world network analysis of quantitative MRI data demonstrated decreased global efficiency based on gray matter cortical thickness but not with functional connectivity MRI (fcMRI) or volumetry. An integrative model of 22 quantitative imaging features was used for classification and prediction of phenotypic features that included the autism diagnostic observation schedule, the revised autism diagnostic interview, and intelligence quotient scores. Among the 22 imaging features, four (including caudate volume, caudate-cortical functional connectivity and inferior frontal gyrus functional connectivity) were found to be highly informative, markedly improving classification and prediction accuracy compared with single imaging features. This approach could potentially serve as a biomarker in prognosis, diagnosis, and monitoring disease progression.
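Global efficiency, the small-world metric reported here, is the mean inverse shortest-path length over all node pairs. A minimal pure-Python version for an unweighted adjacency matrix (a sketch, not the authors' pipeline) is:

```python
from collections import deque

def global_efficiency(adj):
    """Mean of 1/d(i, j) over all ordered node pairs of an unweighted graph
    given as a 0/1 adjacency matrix; unreachable pairs contribute 0."""
    n = len(adj)
    total = 0.0
    for s in range(n):
        # Breadth-first search for shortest path lengths from s.
        dist = [-1] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist if d > 0)
    return total / (n * (n - 1))

# 4-node ring: each node reaches two neighbours at distance 1, one at 2.
ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
```

In the study's setting the adjacency matrix would come from thresholding a cortical-thickness covariance or fcMRI connectivity matrix.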

  15. Quantitative chromatin pattern description in Feulgen-stained nuclei as a diagnostic tool to characterize the oligodendroglial and astroglial components in mixed oligo-astrocytomas.

    PubMed

    Decaestecker, C; Lopes, B S; Gordower, L; Camby, I; Cras, P; Martin, J J; Kiss, R; VandenBerg, S R; Salmon, I

    1997-04-01

The oligoastrocytoma, as a mixed glioma, represents a nosologic dilemma with respect to precisely defining the oligodendroglial and astroglial phenotypes that constitute the neoplastic cell lineages of these tumors. In this study, cell image analysis with Feulgen-stained nuclei was used to distinguish between oligodendroglial and astrocytic phenotypes in oligodendrogliomas and astrocytomas and then applied to mixed oligoastrocytomas. Quantitative features with respect to chromatin pattern (30 variables) and DNA ploidy (8 variables) were evaluated on Feulgen-stained nuclei in a series of 71 gliomas using computer-assisted microscopy. These included 32 oligodendrogliomas (OLG group: 24 grade II and 8 grade III tumors according to the WHO classification), 32 astrocytomas (AST group: 13 grade II and 19 grade III tumors), and 7 oligoastrocytomas (OLGAST group). Initially, image analysis with multivariate statistical analyses (Discriminant Analysis) could identify each glial tumor group. Highly significant statistical differences were obtained distinguishing the morphonuclear features of oligodendrogliomas from those of astrocytomas, regardless of their histological grade. Of the 7 mixed oligoastrocytomas under study, 5 exhibited DNA ploidy and chromatin pattern characteristics similar to grade II oligodendrogliomas, 1 to grade III oligodendrogliomas, and 1 to grade II astrocytomas. Using multifactorial statistical analyses (Discriminant Analysis combined with Principal Component Analysis), it was possible to quantify the proportion of "typical" glial cell phenotypes that compose grade II and III oligodendrogliomas and grade II and III astrocytomas in each mixed glioma. Cytometric image analysis may be an important adjunct to routine histopathology for the reproducible identification of neoplasms containing a mixture of oligodendroglial and astrocytic phenotypes.

  16. Stratification of pseudoprogression and true progression of glioblastoma multiform based on longitudinal diffusion tensor imaging without segmentation

    PubMed Central

    Qian, Xiaohua; Tan, Hua; Zhang, Jian; Zhao, Weilin; Chan, Michael D.; Zhou, Xiaobo

    2016-01-01

Purpose: Pseudoprogression (PsP) can mimic true tumor progression (TTP) on magnetic resonance imaging in patients with glioblastoma multiform (GBM). The phenotypical similarity between PsP and TTP makes it a challenging task for physicians to distinguish these entities. So far, no approved biomarkers or computer-aided diagnosis systems have been used clinically for this purpose. Methods: To address this challenge, the authors developed an objective classification system for PsP and TTP based on longitudinal diffusion tensor imaging. A novel spatio-temporal discriminative dictionary learning scheme was proposed to differentiate PsP and TTP, thereby avoiding segmentation of the region of interest. The authors constructed a novel discriminative sparse matrix with the classification-oriented dictionary learning approach by excluding the shared features of the two categories, so that the pooled features captured the subtle differences between PsP and TTP. The most discriminating features were then identified from the pooled features by their feature scoring system. Finally, the authors stratified patients with GBM into PsP and TTP by a support vector machine approach. Tenfold cross-validation (CV) and the area under the receiver operating characteristic curve (AUC) were used to assess the robustness of the developed system. Results: The average accuracy and AUC values after ten rounds of tenfold CV were 0.867 and 0.92, respectively. The authors also assessed the effects of different methods and factors (such as data types, pooling techniques, and dimensionality reduction approaches) on the performance of their classification system, and the reported configuration obtained the best performance. Conclusions: The proposed objective classification system without segmentation achieved a desirable and reliable performance in differentiating PsP from TTP. Thus, the developed approach is expected to advance the clinical research and diagnosis of PsP and TTP. PMID:27806598
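The evaluation protocol (support vector machine, tenfold CV, AUC) can be reproduced in miniature with scikit-learn; the synthetic features below are stand-ins for the paper's pooled DTI features, not its data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic two-class data standing in for PsP/TTP feature vectors.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           class_sep=2.0, random_state=0)

# Tenfold cross-validated AUC of an RBF-kernel SVM.
auc_scores = cross_val_score(SVC(), X, y, cv=10, scoring="roc_auc")
mean_auc = auc_scores.mean()
```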

  17. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.

    PubMed

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.
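The contour-refinement step rests on a classic dynamic-programming idea: find a minimum-cost path through a cost array (e.g., inverse edge strength around the nucleus), moving at most one row per column. A toy version of that idea, not BleND's actual implementation:

```python
import numpy as np

def min_cost_path(cost):
    """Minimum-cost left-to-right path through a 2D cost array, where the
    path may move at most one row up or down per column."""
    n, m = cost.shape
    acc = cost.astype(float).copy()     # accumulated path cost
    back = np.zeros((n, m), dtype=int)  # backpointers to the previous row
    for j in range(1, m):
        for i in range(n):
            lo, hi = max(0, i - 1), min(n, i + 2)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    row = int(np.argmin(acc[:, -1]))
    total = float(acc[row, -1])
    path = [row]
    for j in range(m - 1, 0, -1):       # walk the backpointers
        path.append(int(back[path[-1], j]))
    path.reverse()
    return path, total

# Cheap cells on the diagonal force a diagonal path.
path, total = min_cost_path(np.array([[1., 9., 9.],
                                      [9., 1., 9.],
                                      [9., 9., 1.]]))
```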

  19. Muscle segmentation in time series images of Drosophila metamorphosis.

    PubMed

    Yadav, Kuleesha; Lin, Feng; Wasser, Martin

    2015-01-01

In order to study genes associated with muscular disorders, we characterize the phenotypic changes in Drosophila muscle cells during metamorphosis caused by genetic perturbations. We collect in vivo images of muscle fibers during remodeling of larval to adult muscles. In this paper, we focus on the new image processing pipeline designed to quantify the changes in shape and size of muscles. We propose a new two-step approach to muscle segmentation in time series images. First, we implement a watershed algorithm to divide the image into edge-preserving regions, and then we classify these regions into muscle and non-muscle classes on the basis of shape and intensity. The advantage of our method is twofold: first, better results are obtained because classification of regions is constrained by the shape of the muscle cell from the previous time point; and second, minimal user intervention results in faster processing time. The segmentation results are used to compare the changes in cell size between controls and reduction of the autophagy-related gene Atg9 during Drosophila metamorphosis.
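The second step, keeping only regions whose shape and intensity look muscle-like, can be sketched with connected-component labelling standing in for the watershed partition; the thresholds and test image below are illustrative, not from the paper.

```python
import numpy as np
from scipy import ndimage

def classify_regions(img, thresh=0.5, min_area=20, min_intensity=0.6):
    # Partition the image into candidate regions, then keep regions whose
    # area (a crude shape proxy) and mean intensity pass muscle-like criteria.
    labels, n = ndimage.label(img > thresh)
    muscle = np.zeros_like(labels)
    for i in range(1, n + 1):
        mask = labels == i
        if mask.sum() >= min_area and img[mask].mean() >= min_intensity:
            muscle[mask] = 1
    return muscle

img = np.zeros((40, 40))
img[5:15, 5:30] = 0.9    # large bright region: accepted as muscle
img[25:28, 25:28] = 0.9  # small speck: rejected by the area criterion
muscle = classify_regions(img)
```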

  20. Spectral imaging perspective on cytomics.

    PubMed

    Levenson, Richard M

    2006-07-01

Cytomics involves the analysis of cellular morphology and molecular phenotypes, with reference to tissue architecture and to additional metadata. To this end, a variety of imaging and nonimaging technologies need to be integrated. Spectral imaging is proposed as a tool that can simplify and enrich the extraction of morphological and molecular information. Simple-to-use instrumentation is available that mounts on standard microscopes and can generate spectral image datasets with excellent spatial and spectral resolution; these can be exploited by sophisticated analysis tools. This report focuses on brightfield microscopy-based approaches. Cytological and histological samples were stained using nonspecific standard stains (Giemsa; hematoxylin and eosin (H&E)) or immunohistochemical (IHC) techniques employing three chromogens plus a hematoxylin counterstain. The samples were imaged using the Nuance system, a commercially available, liquid-crystal tunable-filter-based multispectral imaging platform. The resulting data sets were analyzed using spectral unmixing algorithms and/or learn-by-example classification tools. Spectral unmixing of Giemsa-stained guinea-pig blood films readily classified the major blood elements. Machine-learning classifiers were also successful at the same task, as well as in distinguishing normal from malignant regions in a colon-cancer example, and in delineating regions of inflammation in an H&E-stained kidney sample. In an example of a multiplexed IHC sample, brown, red, and blue chromogens were isolated into separate images without crosstalk or interference from the (also blue) hematoxylin counterstain. Cytomics requires both accurate architectural segmentation as well as multiplexed molecular imaging to associate molecular phenotypes with relevant cellular and tissue compartments. Multispectral imaging can assist in both these tasks, and conveys new utility to brightfield-based microscopy approaches.
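At its simplest, this kind of spectral unmixing is a non-negative least-squares fit of known endmember spectra to each pixel's spectrum. The six-band, three-chromogen endmember matrix below is invented for illustration:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (rows = 6 spectral bands,
# columns = brown, red, blue chromogens); the values are made up.
E = np.array([[0.9, 0.1, 0.2],
              [0.8, 0.2, 0.1],
              [0.3, 0.9, 0.2],
              [0.2, 0.8, 0.3],
              [0.1, 0.2, 0.9],
              [0.2, 0.1, 0.8]])

true_abundances = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abundances            # a noiseless mixed-pixel spectrum

# Recover per-chromogen abundances; non-negativity keeps them physical.
abundances, residual = nnls(E, pixel)
```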
Copyright 2006 International Society for Analytical Cytology.

  1. Diagnostic index: an open-source tool to classify TMJ OA condyles

    NASA Astrophysics Data System (ADS)

    Paniagua, Beatriz; Pascal, Laura; Prieto, Juan; Vimort, Jean Baptiste; Gomes, Liliane; Yatabe, Marilia; Ruellas, Antonio Carlos; Budin, Francois; Pieper, Steve; Styner, Martin; Benavides, Erika; Cevidanes, Lucia

    2017-03-01

Osteoarthritis (OA) of the temporomandibular joints (TMJ) occurs in about 40% of the patients who present with TMJ disorders. Despite its prevalence, OA diagnosis and treatment remain controversial, since there are no clear symptoms of the disease, especially in early stages. Quantitative tools based on 3D imaging of the TMJ condyle have the potential to help characterize TMJ OA changes. The goal of the tools proposed in this study is to ultimately develop robust imaging markers for diagnosis and assessment of treatment efficacy. This work proposes to identify differences among asymptomatic controls and different clinical phenotypes of TMJ OA by means of Statistical Shape Modeling (SSM), with group assignments obtained via clinical expert consensus. From three different grouping schemes (with 3, 5 and 7 groups), our best results reveal that the majority (74.5%) of the classifications occur in agreement with the groups assigned by consensus between our clinical experts. Our findings suggest the existence of different disease-based phenotypic morphologies in TMJ OA. Our preliminary findings with statistical-shape-modeling-based biomarkers may provide a quantitative staging of the disease. The methodology used in this study is included in an open source image analysis toolbox, to ensure reproducibility and appropriate distribution and dissemination of the proposed solution.

  2. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is its most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In this method, image segmentation first extracts regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally the image is classified using the region feature vectors and a suitable classifier such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion first divides all bands of the image into different groups and extracts features from every group according to the properties of each group; three levels of information fusion (data level, feature level and decision level) are then applied to HRS image classification. Artificial neural networks can perform well in RS image classification; to promote their use for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied here.
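The BPNN approach can be sketched with scikit-learn's multilayer perceptron, which is trained by back-propagation; the network size and synthetic "band feature" data below are illustrative stand-ins for per-pixel spectra, not the OMIS data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic "pixels": 30 band-derived features, 4 land-cover classes.
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=4, n_clusters_per_class=1,
                           class_sep=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A small back-propagation-trained network with one hidden layer.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)
acc = net.score(X_te, y_te)
```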

  3. Immunological classification of high grade non-Hodgkin's lymphomas (NHL) in children.

    PubMed

    Pituch-Noworolska, A; Miezyński, W

    1994-01-01

The immunological classification of 28 high grade non-Hodgkin's lymphomas (NHL) in children is presented. The morphological classification was based on the Working Formulation, and the immunological classification on acute lymphoblastic leukemia subtypes. The phenotypes were assayed cytofluorometrically with monoclonal antibodies and compared to ontogenic stages in B and T cell development. Small non-cleaved cell lymphoma (Burkitt's type) was seen in 13 patients, lymphoblastic lymphoma in 12 patients, and undifferentiated lymphoma in 3 patients. Immunological classification showed B-lymphocyte origin of blast cells in 15 patients, including 11 small non-cleaved Burkitt's lymphomas (mature B and cALL phenotype), 3 undifferentiated cases (pro-B and mature B cell) and 1 case of lymphoblastic lymphoma (cALL type). T-cell origin of blast cells was demonstrated in 13 patients. The immunological classification used routinely was helpful in selecting patients with an unfavourable prognosis. The more precise description of blast cells was valuable for better adjustment of therapy and better prognosis.

  4. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    NASA Astrophysics Data System (ADS)

    Yuan, L.; Zhu, G.

    2018-04-01

Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missed pixels, so the final classification accuracy is not high. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. Three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) are compared, and the best fused image is selected for the classification experiment. In the classification process, four image classification algorithms (minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) are used in a contrast experiment. Overall classification precision and the Kappa coefficient serve as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
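The two evaluation criteria, overall precision (accuracy) and the Kappa coefficient, are computed from the confusion matrix; the matrix below is invented for illustration.

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    p_observed = np.trace(cm) / total
    # Chance agreement: sum of products of matching row and column marginals.
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (p_observed - p_expected) / (1.0 - p_expected)
    return p_observed, kappa

cm = [[45, 5], [5, 45]]               # hypothetical two-class result
acc, kappa = accuracy_and_kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside raw precision.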

  5. Phenotyping: Using Machine Learning for Improved Pairwise Genotype Classification Based on Root Traits

    PubMed Central

    Zhao, Jiangsan; Bodner, Gernot; Rewald, Boris

    2016-01-01

Phenotyping local crop cultivars is becoming more and more important, as they are an important genetic source for breeding, especially in regard to inherent root system architectures. Machine learning algorithms are promising tools to assist in the analysis of complex data sets; novel approaches are needed to apply them to root phenotyping data of mature plants. A greenhouse experiment was conducted in large, sand-filled columns to differentiate 16 European Pisum sativum cultivars based on 36 manually derived root traits. By combining random forest and support vector machine models, machine learning algorithms were successfully used for unbiased identification of the most distinguishing root traits and subsequent pairwise cultivar differentiation. Up to 86% of pea cultivar pairs could be distinguished based on the top five important root traits (Timp5); Timp5 differed widely between cultivar pairs. Selecting top important root traits (Timp) provided significantly improved classification compared to using all available traits or randomly selected trait sets. The most frequent Timp of mature pea cultivars was the total surface area of lateral roots originating from tap root segments at 0-5 cm depth. The high classification rate implies that culturing did not lead to a major loss of variability in root system architecture in the studied pea cultivars. Our results illustrate the potential of machine learning approaches for unbiased (root) trait selection and cultivar classification based on rather small, complex phenotypic data sets derived from pot experiments. Powerful statistical approaches are essential to make use of the increasing amount of (root) phenotyping information, integrating the complex trait sets describing crop cultivars. PMID:27999587
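The trait-selection step, ranking traits by random-forest feature importance and keeping the top set, can be sketched on synthetic data; here the first three columns carry signal, standing in for informative root traits, and all parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 300 "plants", 10 "root traits"; with shuffle=False the 3 informative
# traits occupy columns 0-2 and the remaining columns are noise.
X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, class_sep=2.0,
                           random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
top5 = ranking[:5]                     # analogue of the paper's Timp5 set
```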

  6. Guidelines on severity assessment and classification of genetically altered mouse and rat lines.

    PubMed

    Zintzsch, Anne; Noe, Elena; Reißmann, Monika; Ullmann, Kristina; Krämer, Stephanie; Jerchow, Boris; Kluge, Reinhart; Gösele, Claudia; Nickles, Hannah; Puppe, Astrid; Rülicke, Thomas

    2017-12-01

Genetic alterations can unpredictably compromise the wellbeing of animals. Thus, more or less harmful phenotypes might appear in the animals used in research projects even when they are not subjected to experimental treatments. The severity classification of suffering has become an important issue since the implementation of Directive 2010/63/EU on the protection of animals used for scientific purposes. Accordingly, the breeding and maintenance of genetically altered (GA) animals which are likely to develop a harmful phenotype has to be authorized. However, determining the degree of severity is rather challenging due to the large variety of phenotypes. Here, the Working Group of Berlin Animal Welfare Officers (WG Berlin AWO) provides field-tested guidelines on severity assessment and classification of GA rodents. With a focus on basic welfare assessment and severity classification we provide a list of symptoms that have been classified as non-harmful, mild, moderate or severe burdens. Corresponding monitoring and refinement strategies as well as specific housing requirements have been compiled and are strongly recommended to improve hitherto applied breeding procedures and conditions. The document serves as a guide to determine the degree of severity for an observed phenotype. The aim is to support scientists, animal caretakers, animal welfare bodies and competent authorities with this task, and thereby make an important contribution to a European harmonization of severity assessments for the continually increasing number of GA rodents.

  7. A Review of Imaging Techniques for Plant Phenotyping

    PubMed Central

    Li, Lei; Zhang, Qin; Huang, Danfeng

    2014-01-01

Given the rapid development of plant genomic technologies, a lack of access to plant phenotyping capabilities limits our ability to dissect the genetics of quantitative traits. Effective, high-throughput phenotyping platforms have recently been developed to solve this problem. In high-throughput phenotyping platforms, a variety of imaging methodologies are being used to collect data for quantitative studies of complex traits related to the growth, yield and adaptation to biotic or abiotic stress (disease, insects, drought and salinity). These imaging techniques include visible imaging (machine vision), imaging spectroscopy (multispectral and hyperspectral remote sensing), thermal infrared imaging, fluorescence imaging, 3D imaging and tomographic imaging (MRI, PET and CT). This paper presents a brief review on these imaging techniques and their applications in plant phenotyping. The features used to apply these imaging techniques to plant phenotyping are described and discussed in this review. PMID:25347588

  8. Cell of origin associated classification of B-cell malignancies by gene signatures of the normal B-cell hierarchy.

    PubMed

    Johnsen, Hans Erik; Bergkvist, Kim Steve; Schmitz, Alexander; Kjeldsen, Malene Krag; Hansen, Steen Møller; Gaihede, Michael; Nørgaard, Martin Agge; Bæch, John; Grønholdt, Marie-Louise; Jensen, Frank Svendsen; Johansen, Preben; Bødker, Julie Støve; Bøgsted, Martin; Dybkær, Karen

    2014-06-01

    Recent findings have suggested biological classification of B-cell malignancies as exemplified by the "activated B-cell-like" (ABC), the "germinal-center B-cell-like" (GCB) and primary mediastinal B-cell lymphoma (PMBL) subtypes of diffuse large B-cell lymphoma and "recurrent translocation and cyclin D" (TC) classification of multiple myeloma. Biological classification of B-cell derived cancers may be refined by a direct and systematic strategy where identification and characterization of normal B-cell differentiation subsets are used to define the cancer cell of origin phenotype. Here we propose a strategy combining multiparametric flow cytometry, global gene expression profiling and biostatistical modeling to generate B-cell subset specific gene signatures from sorted normal human immature, naive, germinal centrocytes and centroblasts, post-germinal memory B-cells, plasmablasts and plasma cells from available lymphoid tissues including lymph nodes, tonsils, thymus, peripheral blood and bone marrow. This strategy will provide an accurate image of the stage of differentiation, which prospectively can be used to classify any B-cell malignancy and eventually purify tumor cells. This report briefly describes the current models of the normal B-cell subset differentiation in multiple tissues and the pathogenesis of malignancies originating from the normal germinal B-cell hierarchy.

  9. WND-CHARM: Multi-purpose image classification using compound image transforms

    PubMed Central

    Orlov, Nikita; Shamir, Lior; Macura, Tomasz; Johnston, Josiah; Eckley, D. Mark; Goldberg, Ilya G.

    2008-01-01

    We describe a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provide classification accuracy comparable to state-of-the-art task-specific image classifiers. The proposed image classifier first extracts a large set of 1025 image features including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are computed on the raw image, transforms of the image, and transforms of transforms of the image. The feature values are then used to classify test images into a set of pre-defined image classes. This classifier was tested on several different problems including biological image classification and face recognition. Although we cannot make a claim of universality, our experimental results show that this classifier performs as well or better than classifiers developed specifically for these image classification tasks. Our classifier’s high performance on a variety of classification problems is attributed to (i) a large set of features extracted from images; and (ii) an effective feature selection and weighting algorithm sensitive to specific image classification problems. The algorithms are available for free download from openmicroscopy.org. PMID:18958301
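WND-CHARM weights features by their discriminative power; a Fisher-score-style weighting (between-class variance of the class means over mean within-class variance, per feature) captures the idea. The data below are synthetic, and only the scoring scheme is in the spirit of the paper.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature ratio of the variance of the class means (between-class)
    to the mean within-class variance; higher = more discriminative."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    within = np.array([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    between = means.var(axis=0)
    return between / (within + 1e-12)

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 3))
X[:, 0] += 3.0 * y                     # only feature 0 separates the classes
scores = fisher_scores(X, y)
```

Features with near-zero scores contribute little to the weighted-neighbor distance, which is how a large, generic feature bank adapts to a specific classification problem.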

  10. An update on classification, genetics, and clinical approach to mixed phenotype acute leukemia (MPAL).

    PubMed

    Khan, Maliha; Siddiqi, Rabbia; Naqvi, Kiran

    2018-06-01

    Mixed phenotype acute leukemia (MPAL) is an uncommon diagnosis, representing only about 2-5% of acute leukemia cases. The blast cells of MPAL express multilineage immunophenotypic markers and may have a shared B/T/myeloid phenotype. Due to historical ambiguity in the diagnosis of MPAL, the genetics and clinical features of this disease remain poorly characterized. Based on the 2008 and 2016 World Health Organization classifications, myeloid lineage is best determined by presence of myeloperoxidase, while B and T lymphoid lineages are demonstrated by CD19 and cytoplasmic CD3 expression. MPAL typically carries a worse prognosis than either acute myeloid leukemia (AML) or acute lymphoid leukemia (ALL). Given the rarity of MPAL, there is a lack of prospective trial data to guide therapy; treatment generally relies on ALL-like regimens followed by consolidation chemotherapy or hematopoietic stem cell transplant (HSCT). Here, we review the updated classification, biology, clinical features, and treatment approach to MPAL.

  11. MR Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of MammaPrint, Oncotype DX, and PAM50 Gene Assays.

    PubMed

    Li, Hui; Zhu, Yitan; Burnside, Elizabeth S; Drukker, Karen; Hoadley, Katherine A; Fan, Cheng; Conzen, Suzanne D; Whitman, Gary J; Sutton, Elizabeth J; Net, Jose M; Ganott, Marie; Huang, Erich; Morris, Elizabeth A; Perou, Charles M; Ji, Yuan; Giger, Maryellen L

    2016-11-01

    Purpose To investigate relationships between computer-extracted breast magnetic resonance (MR) imaging phenotypes and multigene assays of MammaPrint, Oncotype DX, and PAM50 to assess the role of radiomics in evaluating the risk of breast cancer recurrence. Materials and Methods Analysis was conducted on an institutional review board-approved retrospective data set of 84 deidentified, multi-institutional breast MR examinations from the National Cancer Institute Cancer Imaging Archive, along with clinical, histopathologic, and genomic data from The Cancer Genome Atlas. The data set of biopsy-proven invasive breast cancers included 74 (88%) ductal, eight (10%) lobular, and two (2%) mixed cancers. Of these, 73 (87%) were estrogen receptor positive, 67 (80%) were progesterone receptor positive, and 19 (23%) were human epidermal growth factor receptor 2 positive. For each case, computerized radiomics of the MR images yielded computer-extracted tumor phenotypes of size, shape, margin morphology, enhancement texture, and kinetic assessment. Regression and receiver operating characteristic analysis were conducted to assess the predictive ability of the MR radiomics features relative to the multigene assay classifications. Results Multiple linear regression analyses demonstrated significant associations (R2 = 0.25-0.32, r = 0.5-0.56, P < .0001) between radiomics signatures and multigene assay recurrence scores. Important radiomics features included tumor size and enhancement texture, which indicated tumor heterogeneity. 
Use of radiomics in the task of distinguishing between good and poor prognosis yielded area under the receiver operating characteristic curve values of 0.88 (standard error, 0.05), 0.76 (standard error, 0.06), 0.68 (standard error, 0.08), and 0.55 (standard error, 0.09) for MammaPrint, Oncotype DX, PAM50 risk of relapse based on subtype, and PAM50 risk of relapse based on subtype and proliferation, respectively, with all but the latter showing statistical difference from chance. Conclusion Quantitative breast MR imaging radiomics shows promise for image-based phenotyping in assessing the risk of breast cancer recurrence. © RSNA, 2016 Online supplemental material is available for this article.
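
    The two evaluation steps named above, a linear regression of radiomics features against assay recurrence scores and a receiver operating characteristic analysis, can be sketched with the rank-sum form of the AUC; the function and variable names are illustrative, not from the study:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formulation."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def radiomics_signature(X, y):
    """Least-squares fit of a linear signature to a recurrence score."""
    A = np.hstack([X, np.ones((len(X), 1))])  # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef                           # fitted signature values
```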

  12. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, hampering the widespread utility of the technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2-5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s), and is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and for machine vision quality control in manufacturing. PMID:28303936
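
    The core idea, registering low-rate samples from successive frames onto a denser pixel grid using their known subpixel shifts, can be illustrated with a 1-D toy sketch (a real pixel-SR pipeline must also estimate the shifts and interpolate unfilled positions, both omitted here):

```python
import numpy as np

def interleave_frames(frames, shifts, upsample):
    """Toy 1-D pixel-SR: place low-rate samples from several repetitions of the
    same waveform onto a denser grid according to their known subpixel shifts.

    frames   -- list of 1-D sample arrays, one per repetition of the waveform
    shifts   -- subpixel offset of each frame, in units of the fine grid
    upsample -- density ratio between the fine grid and the sampling rate
    """
    n = len(frames[0]) * upsample
    fine = np.zeros(n)
    filled = np.zeros(n, dtype=bool)
    for frame, s in zip(frames, shifts):
        idx = np.arange(len(frame)) * upsample + s
        fine[idx] = frame
        filled[idx] = True
    return fine, filled
```

    Two frames sampled at half the target rate, offset by half a sample period, together fill the full-resolution grid, which is exactly the effect the asynchronous sampling provides for free.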

  13. Object-oriented remote sensing image classification method based on geographic ontology model

    NASA Astrophysics Data System (ADS)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

    Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of optimizing classification results through algorithmic improvements alone. To this end, the paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment on urban features. The experiment uses the Protégé software developed at Stanford University together with the intelligent image analysis software eCognition as the experimental platform, and uses hyperspectral imagery and Lidar data acquired by flight over DaFeng City, JiangSu, as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and related spectral indices; second, the Lidar data are used to generate an nDSM (Normalized Digital Surface Model), providing elevation information; finally, the image feature knowledge, spectral indices and elevation information are combined to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, and performs especially well for building classification. 
    The method not only exploits the advantages of multi-source spatial data such as remote sensing imagery and Lidar data, but also realizes the integration of multi-source spatial data knowledge and its application to remote sensing image classification, providing an effective way forward for object-oriented remote sensing image classification.
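
    A drastically simplified stand-in for such an ontology-based rule set, combining one spectral index with nDSM height per pixel, might look as follows; the thresholds and class set are hypothetical, not taken from the paper:

```python
import numpy as np

def classify_urban(ndvi, ndsm, veg_thresh=0.3, height_thresh=2.5):
    """Per-pixel rule set combining a spectral index (NDVI) with nDSM height.

    Classes: 0 = ground/road, 1 = low vegetation, 2 = building, 3 = tree.
    Thresholds are illustrative only.
    """
    classes = np.zeros(ndvi.shape, dtype=int)
    veg = ndvi >= veg_thresh
    tall = ndsm >= height_thresh
    classes[veg & ~tall] = 1   # grass / low vegetation
    classes[~veg & tall] = 2   # elevated, non-vegetated -> building
    classes[veg & tall] = 3    # elevated vegetation -> tree canopy
    return classes
```

    The elevation channel is what separates buildings from roads, which is why the paper's gains are most evident for building classification.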

  14. Updated Histologic Classification of Adenomas and Carcinomas in the Colon of Carcinogen-treated Sprague-Dawley Rats.

    PubMed

    Rubio, Carlos A

    2017-12-01

    Recent studies have disclosed novel histological phenotypes of colon tumours in carcinogen-treated rats. The aim of this study was to update the current histological classification of colonic neoplasias in Sprague-Dawley (SD) rats. Archival sections from 398 SD rats having 408 neoplasias in previous experiments were re-evaluated. Of the 408 colonic neoplasias, 11% (44/408) were adenomas without invasive growth and 89% (364/408) invasive carcinomas. Out of the 44 adenomas, 82% were conventional (tubular or villous), 14% traditional serrated (TSA; with unlocked serrations or with closed microtubules) and 5% gut-associated lymphoid tissue (GALT)-associated adenomas. Out of 364 carcinomas, 57% were conventional carcinomas, 26% GALT carcinomas, 8% undifferentiated, 6% signet-ring cell carcinomas, and 4% traditional serrated carcinomas (TSC). Thus, conventional adenomas, conventional carcinomas and GALT-associated carcinomas predominated (p<0.05). The updated classification of colonic tumours in SD rats includes conventional adenomas, TSA, GALT-associated adenomas, conventional carcinomas, TSC, GALT-associated carcinomas, signet-ring cell carcinomas and undifferentiated carcinomas. Several of the histological phenotypes reported here are not included in any of the current classifications of colonic tumours in rodents. This updated classification fulfils the requirements for an animal model of human disease, inasmuch as similar histological phenotypes of colon neoplasias have been documented in humans. Copyright© 2017, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  15. FlyBase: genes and gene models

    PubMed Central

    Drysdale, Rachel A.; Crosby, Madeline A.

    2005-01-01

    FlyBase (http://flybase.org) is the primary repository of genetic and molecular data of the insect family Drosophilidae. For the most extensively studied species, Drosophila melanogaster, a wide range of data are presented in integrated formats. Data types include mutant phenotypes, molecular characterization of mutant alleles and aberrations, cytological maps, wild-type expression patterns, anatomical images, transgenic constructs and insertions, sequence-level gene models and molecular classification of gene product functions. There is a growing body of data for other Drosophila species; this is expected to increase dramatically over the next year, with the completion of draft-quality genomic sequences of an additional 11 Drosophila species. PMID:15608223

  16. The phenotypic manifestations of rare genic CNVs in autism spectrum disorder

    PubMed Central

    Merikangas, A K; Segurado, R; Heron, E A; Anney, R J L; Paterson, A D; Cook, E H; Pinto, D; Scherer, S W; Szatmari, P; Gill, M; Corvin, A P; Gallagher, L

    2015-01-01

    Significant evidence exists for the association between copy number variants (CNVs) and Autism Spectrum Disorder (ASD); however, most of this work has focused solely on the diagnosis of ASD. There is limited understanding of the impact of CNVs on the 'sub-phenotypes' of ASD. The objective of this paper is to evaluate associations between CNVs in differentially brain expressed (DBE) genes or genes previously implicated in ASD/intellectual disability (ASD/ID) and specific sub-phenotypes of ASD. The sample consisted of 1590 cases of European ancestry from the Autism Genome Project (AGP) with a diagnosis of an ASD and at least one rare CNV impacting any gene and a core set of phenotypic measures, including symptom severity, language impairments, seizures, gait disturbances, intelligence quotient (IQ) and adaptive function, as well as paternal and maternal age. Classification analyses using a non-parametric recursive partitioning method (random forests) were employed to define sets of phenotypic characteristics that best classify the CNV-defined groups. There was substantial variation in the classification accuracy of the two sets of genes. The best variables for classification were verbal IQ for the ASD/ID genes, paternal age at birth for the DBE genes and adaptive function for de novo CNVs. CNVs in the ASD/ID list were primarily associated with communication and language domains, whereas CNVs in DBE genes were related to broader manifestations of adaptive function. To our knowledge, this is the first study to examine the associations between sub-phenotypes and CNVs genome-wide in ASD. This work highlights the importance of examining the diverse sub-phenotypic manifestations of CNVs in ASD, including the specific features, comorbid conditions and clinical correlates of ASD that comprise underlying characteristics of the disorder. PMID:25421404

  17. The phenotypic manifestations of rare genic CNVs in autism spectrum disorder.

    PubMed

    Merikangas, A K; Segurado, R; Heron, E A; Anney, R J L; Paterson, A D; Cook, E H; Pinto, D; Scherer, S W; Szatmari, P; Gill, M; Corvin, A P; Gallagher, L

    2015-11-01

    Significant evidence exists for the association between copy number variants (CNVs) and Autism Spectrum Disorder (ASD); however, most of this work has focused solely on the diagnosis of ASD. There is limited understanding of the impact of CNVs on the 'sub-phenotypes' of ASD. The objective of this paper is to evaluate associations between CNVs in differentially brain expressed (DBE) genes or genes previously implicated in ASD/intellectual disability (ASD/ID) and specific sub-phenotypes of ASD. The sample consisted of 1590 cases of European ancestry from the Autism Genome Project (AGP) with a diagnosis of an ASD and at least one rare CNV impacting any gene and a core set of phenotypic measures, including symptom severity, language impairments, seizures, gait disturbances, intelligence quotient (IQ) and adaptive function, as well as paternal and maternal age. Classification analyses using a non-parametric recursive partitioning method (random forests) were employed to define sets of phenotypic characteristics that best classify the CNV-defined groups. There was substantial variation in the classification accuracy of the two sets of genes. The best variables for classification were verbal IQ for the ASD/ID genes, paternal age at birth for the DBE genes and adaptive function for de novo CNVs. CNVs in the ASD/ID list were primarily associated with communication and language domains, whereas CNVs in DBE genes were related to broader manifestations of adaptive function. To our knowledge, this is the first study to examine the associations between sub-phenotypes and CNVs genome-wide in ASD. This work highlights the importance of examining the diverse sub-phenotypic manifestations of CNVs in ASD, including the specific features, comorbid conditions and clinical correlates of ASD that comprise underlying characteristics of the disorder.
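
    The random-forest classification step described above can be sketched with a standard implementation on synthetic data; the variables, group labels and effect sizes here are invented for illustration and carry no clinical meaning:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical phenotype table: verbal IQ and paternal age per case
rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                 # 0 = one CNV-defined group, 1 = the other
verbal_iq = rng.normal(100 - 15 * group, 10)  # group 1 shifted lower, for illustration
paternal_age = rng.normal(35, 5, n)           # pure noise here
X = np.column_stack([verbal_iq, paternal_age])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, group)
importances = clf.feature_importances_        # which variable best classifies the groups?
```

    The forest's feature importances play the role the paper assigns to "best variables for classification", e.g. verbal IQ for the ASD/ID gene set.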

  18. Systematic review of autosomal recessive ataxias and proposal for a classification.

    PubMed

    Beaudin, Marie; Klein, Christopher J; Rouleau, Guy A; Dupré, Nicolas

    2017-01-01

    The classification of autosomal recessive ataxias represents a significant challenge because of high genetic heterogeneity and complex phenotypes. We conducted a comprehensive systematic review of the literature to examine all recessive ataxias in order to propose a new classification and properly circumscribe this field as new technologies are emerging for comprehensive targeted gene testing. We searched Pubmed and Embase to identify original articles on recessive forms of ataxia in humans for which a causative gene had been identified. Reference lists and public databases, including OMIM and GeneReviews, were also reviewed. We evaluated the clinical descriptions to determine if ataxia was a core feature of the phenotype and assessed the available evidence on the genotype-phenotype association. Included disorders were classified as primary recessive ataxias, as other complex movement or multisystem disorders with prominent ataxia, or as disorders that may occasionally present with ataxia. After removal of duplicates, 2354 references were reviewed and assessed for inclusion. A total of 130 articles were completely reviewed and included in this qualitative analysis. The proposed new list of autosomal recessive ataxias includes 45 gene-defined disorders for which ataxia is a core presenting feature. We propose a clinical algorithm based on the associated symptoms. We present a new classification for autosomal recessive ataxias that brings awareness to their complex phenotypes while providing a unified categorization of this group of disorders. This review should assist in the development of a consensus nomenclature useful in both clinical and research applications.

  19. A web-based system for neural network based classification in temporomandibular joint osteoarthritis.

    PubMed

    de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos

    2018-07-01

    The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJOA). This study imaging dataset consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with diagnosis of TMJ OA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age and sex matched control subjects (39.4 ± 15.4 years), who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire, blood and saliva samples were also collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA), and a flexible web-based system for data storage, computation and integration (DSCI) of high dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ with 91% close agreement between the clinician consensus and the SVA classifier. The DSCI remotely ran with a novel application of a statistical analysis, the Multivariate Functional Shape Data Analysis, that computed high dimensional correlations between shape 3D coordinates, clinical pain levels and levels of biological markers, and then graphically displayed the computation results. 
The findings of this study demonstrate a comprehensive phenotypic characterization of TMJ health and disease at clinical, imaging and biological levels, using novel flexible and versatile open-source tools for a web-based system that provides advanced shape statistical analysis and a neural network based classification of temporomandibular joint osteoarthritis. Published by Elsevier Ltd.

  20. Land Cover Classification in a Complex Urban-Rural Landscape with Quickbird Imagery

    PubMed Central

    Moran, Emilio Federico.

    2010-01-01

    High spatial resolution images have been increasingly used for urban land use/cover classification, but the high spectral variation within the same land cover, the spectral confusion among different land covers, and the shadow problem often lead to poor classification performance based on the traditional per-pixel spectral-based classification methods. This paper explores approaches to improve urban land cover classification with Quickbird imagery. Traditional per-pixel spectral-based supervised classification, incorporation of textural images and multispectral images, spectral-spatial classifier, and segmentation-based classification are examined in a relatively new developing urban landscape, Lucas do Rio Verde in Mato Grosso State, Brazil. This research shows that use of spatial information during the image classification procedure, either through the integrated use of textural and spectral images or through the use of segmentation-based classification method, can significantly improve land cover classification performance. PMID:21643433

  1. Co-clustering phenome–genome for phenotype classification and disease gene discovery

    PubMed Central

    Hwang, TaeHyun; Atluri, Gowtham; Xie, MaoQiang; Dey, Sanjoy; Hong, Changjin; Kumar, Vipin; Kuang, Rui

    2012-01-01

    Understanding the categorization of human diseases is critical for reliably identifying disease causal genes. Recently, genome-wide studies of abnormal chromosomal locations related to diseases have mapped >2000 phenotype–gene relations, which provide valuable information for classifying diseases and identifying candidate genes as drug targets. In this article, a regularized non-negative matrix tri-factorization (R-NMTF) algorithm is introduced to co-cluster phenotypes and genes, and simultaneously detect associations between the detected phenotype clusters and gene clusters. The R-NMTF algorithm factorizes the phenotype–gene association matrix under the prior knowledge from phenotype similarity network and protein–protein interaction network, supervised by the label information from known disease classes and biological pathways. In the experiments on disease phenotype–gene associations in OMIM and KEGG disease pathways, R-NMTF significantly improved the classification of disease phenotypes and disease pathway genes compared with support vector machines and Label Propagation in cross-validation on the annotated phenotypes and genes. The newly predicted phenotypes in each disease class are highly consistent with human phenotype ontology annotations. The roles of the new member genes in the disease pathways are examined and validated in the protein–protein interaction subnetworks. Extensive literature review also confirmed many new members of the disease classes and pathways as well as the predicted associations between disease phenotype classes and pathways. PMID:22735708
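
    The core of R-NMTF is the tri-factorization X ~ F S G^T. A plain, unregularized version with multiplicative updates can be sketched as follows (the paper's R-NMTF adds network-based regularizers and label supervision on top of this core):

```python
import numpy as np

def nmtf(X, k1, k2, iters=500, eps=1e-9, seed=0):
    """Non-negative matrix tri-factorization X ~ F S G^T via multiplicative
    updates minimizing ||X - F S G^T||^2, all factors kept non-negative."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.random((n, k1))
    S = rng.random((k1, k2))
    G = rng.random((m, k2))
    for _ in range(iters):
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
    return F, S, G
```

    On a phenotype-gene association matrix, F clusters the phenotypes (rows), G clusters the genes (columns), and S captures the associations between phenotype clusters and gene clusters.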

  2. Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Negro Maggio, Valentina; Iocchi, Luca

    2015-02-01

    Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.
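
    The kind of joint search over preprocessing and classifier configurations that the system performs can be approximated with an off-the-shelf pipeline grid search; the parameter grid here is a minimal stand-in for Auto-SEIA's much larger search space, scored by classification accuracy as in the paper:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Jointly optimize preprocessing and classifier parameters on a labelled dataset
X, y = make_classification(n_samples=120, n_features=10, random_state=0)
pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
search = GridSearchCV(
    pipe,
    {"clf__C": [0.1, 1.0, 10.0], "clf__kernel": ["linear", "rbf"]},
    cv=3,
)
search.fit(X, y)
best = search.best_params_  # the best configuration found, as in Auto-SEIA's output
```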

  3. Use of the serum anti-Müllerian hormone assay as a surrogate for polycystic ovarian morphology: impact on diagnosis and phenotypic classification of polycystic ovary syndrome.

    PubMed

    Fraissinet, Alice; Robin, Geoffroy; Pigny, Pascal; Lefebvre, Tiphaine; Catteau-Jonard, Sophie; Dewailly, Didier

    2017-08-01

    Does the use of the serum anti-Müllerian hormone (AMH) assay to replace or complement ultrasound (U/S) affect the diagnosis or phenotypic distribution of polycystic ovary syndrome (PCOS)? Combining U/S and the serum AMH assay to define polycystic ovarian morphology (PCOM) diagnoses PCOS (according to the Rotterdam classification) in more patients than definitions using one or the other of these indicators exclusively. Since 2003, PCOM, as defined by U/S, has been one of the three diagnostic criteria for PCOS. As it is closely correlated with follicle excess seen at U/S, an excessive serum AMH level could be used as a surrogate for PCOM. Single-center retrospective study from a database of prospectively collected clinical, laboratory and ultrasound data from patients referred for oligo-anovulation (OA) and/or hyperandrogenism (HA) between January 2009 and January 2016. The standard Rotterdam classification for PCOS was tested against two modified versions that defined PCOM by either excessive serum AMH level alone (AMH-only) or a combination (i.e. 'and/or') of the latter and U/S. The PCOS phenotypes were defined as A (full phenotype, OA+HA+PCOM), B (OA+HA), C (HA+PCOM) and D (OA+PCOM). PCOS was more frequently diagnosed when PCOM was defined as the combination 'positive U/S' and/or 'positive AMH' (n = 639) than by either U/S-only (standard definition, n = 612) or AMH-only (n = 601). With this combination, PCOM was recognized in 637 of the 639 cases that met the Rotterdam classification, and phenotype B practically disappeared. In this population, U/S and AMH markers were discordant for PCOM in 103 (16.1%) cases (9% U/S-only, 7.1% AMH-only, P = 0.159). The markers used had no other significant impact on the phenotypic distribution (except for phenotype B). However, the percentage of cases positive by U/S-only was significantly higher in phenotype D than in phenotype A (14.1% vs. 5.8%, P < 0.05). 
Furthermore, in the discordant cases, plasma LH levels were significantly higher in the AMH-only group than in the concordant cases, and fasting insulin serum levels tended to be higher in the U/S-only group. This is a retrospective study. A referral bias explains the relatively high proportion of patients with phenotype D (28%). PCOM was defined by in-house thresholds. The AMH assay used is no longer commercially available. Our results suggest that ideally both U/S data and serum AMH level should be integrated to define PCOM in the Rotterdam classification. In a cost-effectiveness approach, the choice of one or the other has little impact on the diagnosis and the phenotyping of PCOS. No external funding. The authors have no conflict of interest to declare. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
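
    The Rotterdam phenotype assignment used throughout the study reduces to a small decision function over the three criteria; PCOM itself may be flagged by U/S, by AMH, or by their 'and/or' combination, which is exactly the comparison the paper makes:

```python
def pcos_phenotype(oa, ha, pcom):
    """Rotterdam phenotypes: A = OA+HA+PCOM, B = OA+HA, C = HA+PCOM,
    D = OA+PCOM; at least two of the three criteria are required for PCOS."""
    if sum([oa, ha, pcom]) < 2:
        return None  # does not meet the Rotterdam classification
    if oa and ha:
        return "A" if pcom else "B"
    return "C" if ha else "D"
```

    With the combined definition (`pcom = us_positive or amh_positive`), a case that is OA+HA with positive AMH but negative U/S moves from phenotype B to A, which is why phenotype B practically disappears.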

  4. NCI Workshop Report: Clinical and Computational Requirements for Correlating Imaging Phenotypes with Genomics Signatures.

    PubMed

    Colen, Rivka; Foster, Ian; Gatenby, Robert; Giger, Mary Ellen; Gillies, Robert; Gutman, David; Heller, Matthew; Jain, Rajan; Madabhushi, Anant; Madhavan, Subha; Napel, Sandy; Rao, Arvind; Saltz, Joel; Tatum, James; Verhaak, Roeland; Whitman, Gary

    2014-10-01

    The National Cancer Institute (NCI) Cancer Imaging Program organized two related workshops on June 26-27, 2013, entitled "Correlating Imaging Phenotypes with Genomics Signatures Research" and "Scalable Computational Resources as Required for Imaging-Genomics Decision Support Systems." The first workshop focused on clinical and scientific requirements, exploring our knowledge of phenotypic characteristics of cancer biological properties to determine whether the field is sufficiently advanced to correlate with imaging phenotypes that underpin genomics and clinical outcomes, and exploring new scientific methods to extract phenotypic features from medical images and relate them to genomics analyses. The second workshop focused on computational methods that explore informatics and computational requirements to extract phenotypic features from medical images and relate them to genomics analyses and improve the accessibility and speed of dissemination of existing NIH resources. These workshops linked clinical and scientific requirements of currently known phenotypic and genotypic cancer biology characteristics with imaging phenotypes that underpin genomics and clinical outcomes. The group generated a set of recommendations to NCI leadership and the research community that encourage and support development of the emerging radiogenomics research field to address short- and longer-term goals in cancer research.

  5. TR-DB: an open-access database of compounds affecting the ethylene-induced triple response in Arabidopsis.

    PubMed

    Hu, Yuming; Callebert, Pieter; Vandemoortel, Ilse; Nguyen, Long; Audenaert, Dominique; Verschraegen, Luc; Vandenbussche, Filip; Van Der Straeten, Dominique

    2014-02-01

    Small molecules which act as hormone agonists or antagonists represent useful tools in fundamental research and are widely applied in agriculture to control hormone effects. High-throughput screening of large chemical compound libraries has yielded new findings in plant biology, with possible future applications in agriculture and horticulture. To further understand ethylene biosynthesis/signaling and its crosstalk with other hormones, we screened a 12,000-compound chemical library based on an ethylene-related bioassay of dark-grown Arabidopsis thaliana (L.) Heynh. seedlings. From the initial screening, 1313 (∼11%) biologically active small molecules altering the phenotype triggered by the ethylene precursor 1-aminocyclopropane-1-carboxylic acid (ACC) were identified. Selection and sorting in classes were based on the angle of curvature of the apical hook, the length and width of the hypocotyl and the root. A MySQL database was constructed (https://chaos.ugent.be/WE15/) including basic chemical information on the compounds, images illustrating the phenotypes, phenotype descriptions and classification. The research perspectives for different classes of hit compounds will be evaluated, and some general screening tips for customized high-throughput screening and pitfalls will be discussed. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  6. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well-studied with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.
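
    A single-level Bag-of-Words encoding, the building block that HVC extends with a vocabulary tree and locality-constrained coding, can be sketched as follows; the descriptor dimensionality and codebook size are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def bow_histogram(descriptors, codebook):
    """Hard-assign local texture descriptors to visual words and count them."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters)
    return hist / hist.sum()  # normalized visual-word histogram

# Build a small visual codebook from pooled training descriptors (random here)
train_descriptors = np.random.default_rng(1).random((300, 8))
codebook = KMeans(n_clusters=10, n_init=10, random_state=0).fit(train_descriptors)
```

    HVC replaces the flat codebook with a coarse-to-fine vocabulary tree and the hard assignment with locality-constrained linear coding, giving a sparser and more accurate representation than this baseline.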

  7. Lossless Compression of Classification-Map Data

    NASA Technical Reports Server (NTRS)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data: for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
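
    The benefit of prediction on classification maps can be illustrated with the simplest possible predictor, the left neighbour: on maps with large uniform class regions, the residual stream has much lower zero-order entropy than the raw map, which is what the entropy coder exploits. This sketch is far simpler than the algorithm's actual context modeling:

```python
import numpy as np

def entropy_bits(symbols):
    """Empirical zero-order entropy of a symbol stream, in bits per symbol."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def left_neighbor_residual(class_map):
    """Predict each pixel by its left neighbour; 1 where prediction fails."""
    pred = np.roll(class_map, 1, axis=1)
    pred[:, 0] = 0  # no left neighbour in the first column
    return (class_map != pred).astype(int)
```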

  8. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from the different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is applied to perform the medical image classification. The experimental results demonstrate that the proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree, and improve medical image classification performance.
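    One step of this pipeline, coding a signal against a fixed dictionary under non-negativity and an l1 sparsity penalty, can be sketched with projected ISTA. This is a generic solver, not the paper's Fisher-discriminative formulation, and all names are illustrative:

```python
import numpy as np

def nn_sparse_code(X, D, lam=0.1, n_iter=200):
    """Non-negative sparse coding of the columns of X against dictionary D:
    min_A 0.5*||X - D A||_F^2 + lam*||A||_1  subject to  A >= 0,
    solved by projected ISTA (gradient step, l1 shrink, clip at zero)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)
        A = np.maximum(A - (grad + lam) / L, 0.0)
    return A

# With an identity dictionary the solution is soft-thresholding at lam.
X = np.array([[1.0], [0.5], [0.0]])
A = nn_sparse_code(X, np.eye(3))
```

In the multi-scale scheme described above, each scale layer would be coded separately and the resulting codes pooled into the per-scale histograms that are concatenated for the SVM.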

  9. Retinex Preprocessing for Improved Multi-Spectral Image Classification

    NASA Technical Reports Server (NTRS)

    Thompson, B.; Rahman, Z.; Park, S.

    2000-01-01

    The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus yields more favorable (smaller) within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between the different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth.
    The principal advantage the retinex offers is that, for different lighting conditions, classifications derived from retinex-preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original images, without preprocessing, are much less similar.
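    The core retinex operation, dividing out a smooth illumination estimate in the log domain, can be sketched per spectral band as follows. This is a single-scale variant for illustration only; the multi-scale retinex used in practice sums several such outputs computed at different sigmas:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(band, sigma=30.0):
    """log(image) minus log of a wide Gaussian blur of the image: the blur
    approximates the slowly varying illumination, so the difference
    suppresses the lighting component and compresses dynamic range."""
    band = band.astype(float) + 1.0        # offset avoids log(0)
    illumination = gaussian_filter(band, sigma)
    return np.log(band) - np.log(illumination)
```

Applying this to each band before clustering is what makes the class spectral signatures less dependent on diffuse versus direct lighting.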

  10. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    NASA Astrophysics Data System (ADS)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images mixed with textual, graphical, or pictorial contents. In this paper, we present a comparison of two transform-based block classification methods for compound images, based on metrics such as speed of classification, precision, and recall rate. Block-based classification approaches normally divide the compound images into fixed-size non-overlapping blocks. Frequency transforms such as the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are then applied to each block. Mean and standard deviation are computed for each 8 × 8 block and are used as the feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth backgrounds and with complex backgrounds containing text of varying size, colour, and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation provides an improvement in recall rate and precision of approximately 2.3% over DCT-based segmentation, at the cost of increased block classification time, for both smooth and complex background images.
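    The DCT side of the feature extraction can be sketched directly: mean and standard deviation of the transform coefficients of each non-overlapping 8 × 8 block. The thresholding of these features into text versus picture blocks is omitted, and the function name is illustrative:

```python
import numpy as np
from scipy.fftpack import dct

def block_features(img, block=8):
    """Mean and standard deviation of the 2-D DCT coefficients of each
    non-overlapping block; text blocks typically show a much larger
    coefficient spread than smooth picture/background blocks."""
    h, w = img.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = img[y:y + block, x:x + block].astype(float)
            # Separable 2-D DCT: transform rows, then columns.
            c = dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
            ac = c.ravel()[1:]             # drop the DC term
            feats.append((ac.mean(), ac.std()))
    return np.array(feats)
```

A smooth background block yields near-zero AC statistics, while a block containing sharp text edges does not, which is what makes a simple threshold on these two features workable.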

  11. The Importance of Clinical Phenotype in Understanding and Preventing Spontaneous Preterm Birth.

    PubMed

    Esplin, M Sean

    2016-02-01

    Spontaneous preterm birth (SPTB) is a well-known cause of maternal and neonatal morbidity. The search for the underlying pathways, documentation of the genetic causes, and identification of markers of SPTB have been only marginally successful, because SPTB is highly complex, with numerous processes that lead to a final common pathway. There is a great need for a comprehensive, consistent, and uniform classification system, which would be useful in identifying mechanisms, assigning prognosis, aiding clinical management, and identifying areas of interest for intervention and future study. Effective classification systems must overcome obstacles including the lack of widely accepted definitions and uncertainty about the inclusion of classifying features (e.g., presentation at delivery and multiple gestations) and the level of detail of these features. The optimal classification system should be based on the clinical phenotype, including characteristics of the mother, fetus, placenta, and the presentation for delivery. We present a proposed phenotyping system for SPTB. Future classification systems must establish a universally accepted set of definitions and a standardized clinical workup for all PTBs, including the minimum clinical data to be collected and the laboratory and pathologic evaluation that should be completed.

  12. On the Implementation of a Land Cover Classification System for SAR Images Using Khoros

    NASA Technical Reports Server (NTRS)

    Medina Revera, Edwin J.; Espinosa, Ramon Vasquez

    1997-01-01

    The Synthetic Aperture Radar (SAR) sensor is widely used to record data about the ground under all atmospheric conditions. SAR-acquired images have very good resolution, which necessitates the development of a classification system that processes SAR images to extract useful information for different applications. In this work, a complete system for land cover classification was designed and programmed using Khoros, a data-flow visual language environment, taking full advantage of the polymorphic data services that it provides. Image analysis was applied to SAR images to improve and automate the recognition and classification of different regions such as mountains and lakes. Both unsupervised and supervised classification utilities were used. The unsupervised classification routines included several classification/clustering algorithms such as K-means, ISO2, Weighted Minimum Distance, and the Localized Receptive Field (LRF) trainer/classifier. Different texture analysis approaches such as Invariant Moments, Fractal Dimension, and Second Order statistics were implemented for supervised classification of the images. The results and conclusions for SAR image classification using the various unsupervised and supervised procedures are presented based on their accuracy and performance.
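    Of the unsupervised routines listed, K-means is the simplest to sketch on per-pixel feature vectors (e.g., backscatter plus texture measures). This is a generic implementation, not the Khoros routine:

```python
import numpy as np

def kmeans_classify(pixels, k=3, n_iter=20, seed=0):
    """Plain k-means: assign each feature vector to the nearest center,
    then move each center to the mean of its members."""
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(n_iter):
        # Distance of every pixel to every center, then nearest-center labels.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):            # skip empty clusters
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

The supervised routines in the entry differ only in how the cluster centers (or class boxes) come from labeled training pixels instead of iteration.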

  13. Semantic and topological classification of images in magnetically guided capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Mewes, P. W.; Rennert, P.; Juloski, A. L.; Lalande, A.; Angelopoulou, E.; Kuth, R.; Hornegger, J.

    2012-03-01

    Magnetically-guided capsule endoscopy (MGCE) is a nascent technology whose goal is to allow the steering of a capsule endoscope inside a water-filled stomach through an external magnetic field. We developed a classification cascade for MGCE images which groups images into semantic and topological categories. The results can be used in a post-procedure review or as a starting point for algorithms that classify pathologies. The first, semantic, classification step discards over-/under-exposed images as well as images with a large amount of debris. The second, topological, classification step groups images with respect to their position in the upper gastrointestinal tract (mouth, esophagus, stomach, duodenum). In the third stage, two parallel classification steps distinguish topologically different regions inside the stomach (cardia, fundus, pylorus, antrum, peristaltic view). For image classification, global image features and local texture features were applied and their performance was evaluated. We show that the third classification step can be improved by bubble and debris segmentation, because it limits feature extraction to discriminative areas only. We also investigated the impact of segmenting intestinal folds on the identification of different semantic camera positions. The results of classification with a support vector machine show the significance of color histogram features for the classification of corrupted images (97%). Features extracted from intestinal fold segmentation lead only to a minor improvement (3%) in discriminating different camera positions.

  14. Creating a classification of image types in the medical literature for visual categorization

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Kalpathy-Cramer, Jayashree; Demner-Fushman, Dina; Antani, Sameer

    2012-02-01

    Content-based image retrieval (CBIR) from specialized collections has often been proposed for use in such areas as diagnostic aid, clinical decision support, and teaching. The visual retrieval from broad image collections such as teaching files, the medical literature or web images, by contrast, has not yet reached a high maturity level compared to textual information retrieval. Visual image classification into a relatively small number of classes (20-100), on the other hand, has been shown to deliver good results in several benchmarks. It is, however, currently underused as a basic technology for retrieval tasks, for example, to limit the search space. Most classification schemes for medical images are focused on specific areas and consider mainly the medical image types (modalities), imaged anatomy, and view, and merge them into a single descriptor or classification hierarchy. Furthermore, they often ignore other important image types such as biological images, statistical figures, flowcharts, and diagrams that frequently occur in the biomedical literature. Most of the current classifications have also been created for radiology images, which are not the only types to be taken into account. With Open Access becoming increasingly widespread particularly in medicine, images from the biomedical literature are more easily available for use. Visual information from these images, and knowledge that an image is of a specific type or medical modality, could enrich retrieval. This enrichment is hampered by the lack of a commonly agreed image classification scheme. This paper presents a hierarchy for classification of biomedical illustrations with the goal of using it for visual classification and thus as a basis for retrieval.
    The proposed hierarchy is based on relevant parts of existing terminologies, such as the IRMA code (Image Retrieval in Medical Applications), and on ad hoc classifications and hierarchies used in ImageCLEF (the image retrieval task at the Cross-Language Evaluation Forum) and NLM's (National Library of Medicine) OpenI. Furthermore, mappings to NLM's MeSH (Medical Subject Headings), RSNA's RadLex (Radiological Society of North America, Radiology Lexicon), and the IRMA code are also attempted for relevant image types. Advantages derived from such hierarchical classification for medical image retrieval are being evaluated through benchmarks such as ImageCLEF and R&D systems such as NLM's OpenI. The goal is to extend this hierarchy progressively, by adding image types occurring in the biomedical literature, toward a terminology for visual image classification based on image types that are distinguishable by visual means and occur in the medical open access literature.

  15. Classification of visible and infrared hyperspectral images based on image segmentation and edge-preserving filtering

    NASA Astrophysics Data System (ADS)

    Cui, Binge; Ma, Xiudan; Xie, Xiaoyun; Ren, Guangbo; Ma, Yi

    2017-03-01

    The classification of hyperspectral images with few labeled samples is a major challenge, difficult to meet unless some spatial characteristics can be exploited. In this study, we proposed a novel spectral-spatial hyperspectral image classification method that exploits the spatial autocorrelation of hyperspectral images. First, image segmentation is performed on the hyperspectral image to assign each pixel to a homogeneous region. Second, the visible and infrared bands of the hyperspectral image are partitioned into multiple subsets of adjacent bands, and each subset is merged into one band. Recursive edge-preserving filtering, which utilizes the spectral information of neighborhood pixels, is performed on each merged band. Third, the resulting spectral and spatial feature band set is classified using an SVM classifier. Finally, bilateral filtering is performed to remove "salt-and-pepper" noise from the classification result. To preserve the spatial structure of the hyperspectral image, edge-preserving filtering is applied independently before and after the classification process. Experimental results on different hyperspectral images show that the proposed spectral-spatial classification approach is robust and offers higher classification accuracy than state-of-the-art methods when the number of labeled samples is small.

  16. Evaluation and integration of disparate classification systems for clefts of the lip

    PubMed Central

    Wang, Kathie H.; Heike, Carrie L.; Clarkson, Melissa D.; Mejino, Jose L. V.; Brinkley, James F.; Tse, Raymond W.; Birgfeld, Craig B.; Fitzsimons, David A.; Cox, Timothy C.

    2014-01-01

    Orofacial clefting is a common birth defect with wide phenotypic variability. Many systems have been developed to classify cleft patterns to facilitate diagnosis, management, surgical treatment, and research. In this review, we examine the rationale for different existing classification schemes and determine their inter-relationships, as well as strengths and deficiencies for subclassification of clefts of the lip. The various systems differ in how they describe and define attributes of cleft lip (CL) phenotypes. Application and analysis of the CL classifications reveal discrepancies that may result in errors when comparing studies that use different systems. These inconsistencies in terminology, variable levels of subclassification, and ambiguity in some descriptions may confound analyses and impede further research aimed at understanding the genetics and etiology of clefts, development of effective treatment options for patients, as well as cross-institutional comparisons of outcome measures. Identification and reconciliation of discrepancies among existing systems is the first step toward creating a common standard to allow for a more explicit interpretation that will ultimately lead to a better understanding of the causes and manifestations of phenotypic variations in clefting. PMID:24860508

  17. The Evolving Classification of Pulmonary Hypertension.

    PubMed

    Foshat, Michelle; Boroumand, Nahal

    2017-05-01

    An explosion of information on pulmonary hypertension has occurred during the past few decades. The perception of this disease has shifted from purely clinical to incorporate new knowledge of the underlying pathology. This shift has occurred in light of advancements in pathophysiology, histology, and molecular medical diagnostics. This review aims to update readers about the evolving understanding of the etiology and pathogenesis of pulmonary hypertension and to demonstrate how pathology has shaped the current classification. Information presented at the 5 World Symposia on pulmonary hypertension held since 1973, with the last meeting occurring in 2013, was used in this review. Pulmonary hypertension represents a heterogeneous group of disorders that are differentiated based on differences in clinical, hemodynamic, and histopathologic features. Early concepts of pulmonary hypertension were largely influenced by pharmacotherapy, hemodynamic function, and the clinical presentation of the disease. The initial nomenclature for pulmonary hypertension segregated the clinical classifications from pathologic subtypes. A major restructuring of this disease classification occurred between the first and second symposia, which was the first to unite clinical and pathologic information in the categorization scheme. Additional changes were introduced in subsequent meetings, particularly between the third and fourth World Symposia, when additional pathophysiologic information was gained. Discoveries in molecular diagnostics significantly advanced the understanding of idiopathic pulmonary arterial hypertension. Continued advancements in imaging modalities, mechanistic pathogenicity, and molecular biomarkers will enable physicians to define pulmonary hypertension phenotypes based on the pathobiology and allow for treatment customization.

  18. Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor

    NASA Astrophysics Data System (ADS)

    Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi

    2017-12-01

    The advantage of image classification is that it provides information about the earth's surface, such as land cover and its time-series changes. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, land cover classification can also be obtained by object-based image classification, which uses image segmentation driven by parameters such as scale, shape, colour, smoothness, and compactness. This research aims to compare the land cover classification results, and the change detection derived from them, between the parallelepiped pixel-based method and the object-based classification method. The study area is Bogor, observed over a 20-year range from 1996 to 2016. This region is well known as an urban area that changes continuously due to its rapid development, so time-series land cover information for this region is of particular interest.
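    A parallelepiped classifier of the kind named here can be sketched in a few lines: each class gets a band-wise [min, max] box from its training pixels, and a pixel is assigned to the first box that contains it. This is illustrative code, not the software used in the study:

```python
import numpy as np

def parallelepiped_fit(samples_by_class):
    """Per-class band-wise (min, max) boxes from training pixels."""
    return [(s.min(axis=0), s.max(axis=0)) for s in samples_by_class]

def parallelepiped_classify(pixels, boxes):
    """Label = index of the first box containing the pixel, -1 if none."""
    labels = np.full(len(pixels), -1)
    for idx, (lo, hi) in enumerate(boxes):
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[(labels == -1) & inside] = idx
    return labels
```

The -1 ("unclassified") outcome is characteristic of parallelepiped classification and is one reason its maps differ from those of object-based methods, which label every segment.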

  19. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    NASA Astrophysics Data System (ADS)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

    Diabetic retinopathy (DR) is a chronic eye disease for which early detection is essential to avoid irreversible outcomes. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique for detecting abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are a widely preferred artificial intelligence technique since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity, and specificity. A comparative analysis is performed against a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results are promising for the neural classifier in terms of these performance measures.
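    The RBF architecture is small enough to sketch: a Gaussian hidden layer on fixed centers and a linear readout fitted by least squares. This is a generic sketch of the network type; the paper's training procedure and retinal features are not reproduced:

```python
import numpy as np

def rbf_design(X, centers, gamma=1.0):
    """Gaussian hidden-layer activations for each sample/center pair."""
    return np.exp(-gamma * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))

def rbf_fit(X, y, centers, gamma=1.0):
    """Linear output weights by least squares on the RBF activations."""
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, gamma), y, rcond=None)
    return w

def rbf_predict(X, centers, w, gamma=1.0, threshold=0.5):
    """Bi-level decision: abnormal (1) versus normal (0)."""
    return (rbf_design(X, centers, gamma) @ w > threshold).astype(int)
```

In practice the centers would come from clustering the training features (e.g., k-means), which is what makes RBF networks fast to train compared with back-propagated multilayer networks.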

  20. Feature extraction based on extended multi-attribute profiles and sparse autoencoder for remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman

    2018-02-01

    Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Due to limitations such as feature redundancy and the high dimensionality of the data, different classification methods have been proposed for remote sensing images, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method that exploits the capability of extended multi-attribute profiles (EMAP) together with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method is used to classify various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE and feeding them to a kernel support vector machine (SVM) for classification. Experiments on the hyperspectral "Houston" dataset and the multispectral "Washington DC" dataset show that this new scheme achieves better feature learning than primitive features, traditional classifiers, and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in a short running time.

  1. Feature selection and classification of multiparametric medical images using bagging and SVM

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Resnick, Susan M.; Davatzikos, Christos

    2008-03-01

    This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction by using a regional feature extraction method which takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained based on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework for building an ensemble classifier and the classification parameters of these base classifiers are optimized by means of maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that the methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
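    The ensemble construction, base SVMs trained on bootstrap samples and aggregated by voting, can be sketched with scikit-learn. The synthetic features here merely stand in for the regional MRI/PET measurements, and the parameter choices are illustrative:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two synthetic classes standing in for regional multi-parametric features.
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)), rng.normal(3.0, 1.0, (50, 5))])
y = np.repeat([0, 1], 50)

# Each of the 10 base SVMs is fit on a bootstrap sample of the data;
# predictions are aggregated by majority vote.
ensemble = BaggingClassifier(SVC(kernel='linear'), n_estimators=10, random_state=0)
ensemble.fit(X, y)
```

The paper's refinement, tuning each base classifier by maximizing the area under the ROC curve on the left-out (out-of-bag) samples of its bootstrap draw, would sit on top of this basic loop.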

  2. Cascade classification of endocytoscopic images of colorectal lesions for automated pathological diagnosis

    NASA Astrophysics Data System (ADS)

    Itoh, Hayato; Mori, Yuichi; Misawa, Masashi; Oda, Masahiro; Kudo, Shin-ei; Mori, Kensaku

    2018-02-01

    This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscopic technique that enables both conventional endoscopic observation and ultramagnified observation at the cell level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis using only endoscopic views of polyps during colonoscopy. However, endocytoscopic image diagnosis requires extensive experience from physicians. An automated pathological diagnosis system is required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that classifies neoplastic and non-neoplastic endocytoscopic images. This method consists of two classification steps. In the first step, we classify an input image with a support vector machine, and forward the image to the second step if the confidence of this first classification is low. In the second step, we classify the forwarded image with a convolutional neural network, and reject the input image if the confidence of the second classification is also low. We experimentally evaluate the classification performance of the proposed method. In this experiment, we use about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even for difficult test data.
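    The two-stage decision logic can be sketched independently of the underlying models. Here `p_svm` and `p_cnn` are assumed per-image probabilities of "neoplastic" from the two classifiers, and the thresholds are illustrative, not the paper's values:

```python
import numpy as np

def cascade_classify(p_svm, p_cnn, t1=0.9, t2=0.9):
    """Two-stage cascade with rejection: accept the SVM decision when its
    confidence is high, otherwise fall through to the CNN; reject the
    sample when both stages are unsure.

    Returns 1 (neoplastic), 0 (non-neoplastic) or -1 (rejected)."""
    out = np.full(len(p_svm), -1)
    conf1 = np.maximum(p_svm, 1 - p_svm)       # confidence of stage 1
    out[conf1 >= t1] = (p_svm[conf1 >= t1] > 0.5).astype(int)
    todo = out == -1                           # low-confidence images
    conf2 = np.maximum(p_cnn, 1 - p_cnn)       # confidence of stage 2
    sure2 = todo & (conf2 >= t2)
    out[sure2] = (p_cnn[sure2] > 0.5).astype(int)
    return out
```

Raising `t1` and `t2` trades a higher rejection rate for higher sensitivity on the accepted images, which is the trade-off the reported 93.4%/9.3% figures summarize.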

  3. Significance of perceptually relevant image decolorization for scene classification

    NASA Astrophysics Data System (ADS)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.

  4. Can glenoid wear be accurately assessed using x-ray imaging? Evaluating agreement of x-ray and magnetic resonance imaging (MRI) Walch classification.

    PubMed

    Kopka, Michaela; Fourman, Mitchell; Soni, Ashish; Cordle, Andrew C; Lin, Albert

    2017-09-01

    The Walch classification is the most recognized means of assessing glenoid wear in preoperative planning for shoulder arthroplasty. This classification relies on advanced imaging, which is more expensive and less practical than plain radiographs. The purpose of this study was to determine whether the Walch classification could be accurately applied to x-ray images compared with magnetic resonance imaging (MRI) as the gold standard. We hypothesized that x-ray images cannot adequately replace advanced imaging in the evaluation of glenoid wear. Preoperative axillary x-ray images and MRI scans of 50 patients assessed for shoulder arthroplasty were independently reviewed by 5 raters. Glenoid wear was individually classified according to the Walch classification using each imaging modality. The raters then collectively reviewed the MRI scans and assigned a consensus classification to serve as the gold standard. The κ coefficient was used to determine interobserver agreement for x-ray images and independent MRI reads, as well as the agreement between x-ray images and consensus MRI. The inter-rater agreement for x-ray images and MRIs was "moderate" (κ = 0.42 and κ = 0.47, respectively) for the 5-category Walch classification (A1, A2, B1, B2, C) and "moderate" (κ = 0.54 and κ = 0.59, respectively) for the 3-category Walch classification (A, B, C). The agreement between x-ray images and consensus MRI was much lower: "fair-to-moderate" (κ = 0.21-0.51) for the 5-category and "moderate" (κ = 0.36-0.60) for the 3-category Walch classification. The inter-rater agreement between x-ray images and consensus MRI is "fair-to-moderate." This is lower than the previously reported reliability of the Walch classification using computed tomography scans. Accordingly, x-ray images are inferior to advanced imaging when assessing glenoid wear.
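    The agreement statistic used throughout this study is Cohen's kappa, which corrects raw agreement between two raters for agreement expected by chance; a minimal two-rater version:

```python
import numpy as np

def cohens_kappa(a, b, n_classes):
    """Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement and p_e the agreement expected by chance from the
    raters' marginal label frequencies."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)                                  # observed agreement
    p_e = sum(np.mean(a == k) * np.mean(b == k)            # chance agreement
              for k in range(n_classes))
    return (p_o - p_e) / (1 - p_e)
```

By the conventional Landis-Koch bands, the study's values around 0.2-0.5 read as "fair" to "moderate" agreement, which is how the abstract labels them.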

  5. A spectrum fractal feature classification algorithm for agriculture crops with hyper spectrum image

    NASA Astrophysics Data System (ADS)

    Su, Junying

    2011-11-01

    A fractal dimension feature analysis method in the spectral domain is proposed for agricultural crop classification with hyperspectral images. Firstly, a fractal dimension calculation algorithm in the spectral domain is presented, together with a fast fractal dimension calculation algorithm using the step measurement method. Secondly, the hyperspectral image classification algorithm and its flowchart are presented, based on fractal dimension feature analysis in the spectral domain. Finally, experimental results are presented for agricultural crop classification on the FCL1 hyperspectral image set, comparing the proposed method with SAM (spectral angle mapper). The experimental results show that the proposed method obtains better classification results than the traditional SAM feature analysis, making fuller use of the spectral information in the hyperspectral image to realize precise agricultural crop classification.
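    The SAM baseline mentioned here is compact enough to sketch: each pixel spectrum is matched to the reference spectrum with the smallest spectral angle (the fractal-dimension method itself is not reproduced; function names are illustrative):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """SAM: the angle (radians) between a pixel spectrum and a reference
    spectrum; smaller is a better match, and the measure is insensitive
    to uniform illumination scaling of the spectrum."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixels, references):
    """Assign each pixel to the reference spectrum with the smallest angle."""
    angles = np.array([[spectral_angle(p, r) for r in references] for p in pixels])
    return angles.argmin(axis=1)
```

Scale invariance is SAM's main strength and also its weakness: two spectra that differ only in overall magnitude map to the same angle, which is the kind of information the fractal-dimension feature tries to recover.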

  6. Image-classification-based global dimming algorithm for LED backlights in LCDs

    NASA Astrophysics Data System (ADS)

    Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv

    2015-07-01

    Backlight dimming can help LCDs reduce power consumption and improve the contrast ratio (CR). With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. This paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of the input images. The parameters for the backlight dimming level and pixel compensation adapt to the image classification. The simulation results show that the classification-based dimming algorithm achieves an 86.13% improvement in power reduction compared with dimming without classification, with almost the same display quality. A prototype was developed; there are no perceived distortions when playing videos. The practical average power reduction of the prototype TV is 18.72% compared with a common TV without dimming.
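    The basic global dimming operation, lowering the backlight to the frame's peak level and scaling pixel values up to compensate, can be sketched as follows. This is illustrative only; the paper's class-adaptive parameter selection is not reproduced:

```python
import numpy as np

def global_dimming(img, level_ratio=None):
    """Global backlight dimming with pixel compensation: the backlight is
    scaled down to the frame's peak-luminance ratio and pixel values are
    scaled up by the inverse, so perceived luminance is preserved where
    the compensated value does not clip at 255."""
    img = img.astype(float)
    level = img.max() / 255.0 if level_ratio is None else level_ratio
    compensated = np.clip(img / level, 0, 255)
    return level, compensated.astype(np.uint8)
```

Backlight power scales roughly with `level`, so dark frames yield the largest savings; a classification step chooses a more aggressive `level_ratio` (accepting some clipping) for image classes where the distortion is not visible.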

  7. Phylogenetic classification of Aureobasidium pullulans strains for production of pullulan and xylanase

    USDA-ARS?s Scientific Manuscript database

    This study tests the hypothesis that phylogenetic classification can predict whether A. pullulans strains will produce useful levels of the commercial polysaccharide, pullulan, or the valuable enzyme, xylanase. To test this hypothesis, 19 strains of A. pullulans with previously described phenotypes...

  8. Application of Sensor Fusion to Improve Uav Image Classification

    NASA Astrophysics Data System (ADS)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. We tested two sensor fusion configurations using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera, using the Pan camera for its higher sensitivity and the colour or MS camera for its spectral properties. The resulting images are compared to those acquired by a high-resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The results show that the proposed sensor fusion configurations achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher-quality images and, accordingly, higher-accuracy classification results.
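The abstract does not specify the fusion algorithm used. As an illustration only, a simple Brovey-style pan-sharpening step is one common way to inject a high-resolution Pan band into (upsampled) MS bands:

```python
import numpy as np

def brovey_fusion(ms, pan):
    """Brovey pan-sharpening sketch: scale each MS band by the ratio of
    the high-resolution Pan band to the MS intensity (band mean).
    ms: (H, W, B) multispectral image upsampled to the Pan grid;
    pan: (H, W) panchromatic image. A stand-in, not the paper's method."""
    intensity = ms.mean(axis=2, keepdims=True)
    return ms * (pan[..., None] / (intensity + 1e-6))
```

The fused product keeps the MS band ratios (spectral information) while taking its spatial detail from the Pan band.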

  9. a Novel Framework for Remote Sensing Image Scene Classification

    NASA Astrophysics Data System (ADS)

    Jiang, S.; Zhao, H.; Wu, W.; Tan, Q.

    2018-04-01

High-resolution remote sensing (HRRS) image scene classification aims to label an image with a specific semantic category. HRRS images contain more detail about ground objects and their spatial distribution patterns than lower-resolution images. Scene classification can bridge the gap between low-level features and high-level semantics, and can be applied in urban planning, target detection and other fields. This paper proposes a novel framework for HRRS image scene classification that combines a convolutional neural network (CNN) and XGBoost, using the CNN as feature extractor and XGBoost as classifier. The framework is evaluated on two HRRS datasets, UC-Merced and NWPU-RESISC45, achieving accuracies of 95.57% and 83.35% respectively. The experimental results show the framework to be effective for remote sensing image classification, and its short training time should make it practical for further HRRS scene classification work.
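The feature-extractor-plus-boosted-classifier pipeline can be sketched on synthetic data. The real framework uses a CNN and the xgboost package; in this hedged stand-in, a grid mean-pool plays the role of the CNN features and scikit-learn's GradientBoostingClassifier plays the role of XGBoost:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost

def pooled_features(img, grid=4):
    """Crude stand-in for CNN features: mean-pool the image over a grid."""
    h, w = img.shape
    gh, gw = h // grid, w // grid
    return img[:gh * grid, :gw * grid].reshape(grid, gh, grid, gw).mean(axis=(1, 3)).ravel()

def make_scene(rng, label):
    # synthetic "scene": bright top half for class 0, bright bottom half for class 1
    img = rng.random((32, 32)) * 0.2
    img[:16] += 0.8 if label == 0 else 0.0
    img[16:] += 0.8 if label == 1 else 0.0
    return img

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=120)
X = np.array([pooled_features(make_scene(rng, l)) for l in labels])
clf = GradientBoostingClassifier(random_state=0).fit(X[:90], labels[:90])
accuracy = clf.score(X[90:], labels[90:])
```

Swapping in `xgboost.XGBClassifier` and real CNN activations gives the structure the paper describes; only the two components change.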

  10. Wishart Deep Stacking Network for Fast POLSAR Image Classification.

    PubMed

    Jiao, Licheng; Liu, Fang

    2016-05-11

Inspired by the popular deep learning architecture, the Deep Stacking Network (DSN), a specific deep model for polarimetric synthetic aperture radar (POLSAR) image classification is proposed in this paper, named the Wishart Deep Stacking Network (W-DSN). First, a fast implementation of the Wishart distance is achieved by a special linear transformation, which speeds up the classification of POLSAR images and makes it possible to use this polarimetric information in the subsequent neural networks. Then a single-hidden-layer neural network based on the fast Wishart distance, named the Wishart Network (WN), is defined for POLSAR image classification and improves classification accuracy. Finally, a multi-layer neural network, the proposed W-DSN, is formed by stacking WNs and improves classification accuracy further. The structure of a WN, as well as that of the W-DSN, can be expanded in a straightforward way by adding hidden units if necessary. As a preliminary exploration of deep learning architectures specific to POLSAR image classification, the proposed methods may establish a simple but clever connection between POLSAR image interpretation and deep learning. Experimental results on real POLSAR images show that the fast implementation of the Wishart distance is very efficient (a POLSAR image with 768,000 pixels can be classified in 0.53 s), and that both the single-hidden-layer WN and the deep W-DSN perform well and work efficiently.
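One plausible reading of the "linear transformation" trick: the Wishart distance d(A, Σ) = ln|Σ| + tr(Σ⁻¹A) is linear in the entries of A once the class center Σ is fixed, so evaluating it for every pixel collapses to one dot product per pixel. A real-valued sketch of that reading (actual POLSAR covariance matrices are complex Hermitian, which this simplification ignores):

```python
import numpy as np

def wishart_distance_fast(pixel_covs, class_cov):
    """Wishart distance d(A, S) = ln|S| + tr(S^-1 A) for all per-pixel
    covariance matrices A at once, via a single matrix-vector product.
    pixel_covs: (N, k, k) array of per-pixel covariances; class_cov: (k, k)."""
    inv = np.linalg.inv(class_cov)
    _, logdet = np.linalg.slogdet(class_cov)
    w = inv.T.ravel()  # tr(S^-1 A) = vec(S^-T) . vec(A)
    return logdet + pixel_covs.reshape(len(pixel_covs), -1) @ w
```

Classification then assigns each pixel to the class center with the smallest distance; because the distance is an affine function of vec(A), it slots directly into a neural network layer, which is how the WN is built on top of it.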

  11. Object-based land cover classification based on fusion of multifrequency SAR data and THAICHOTE optical imagery

    NASA Astrophysics Data System (ADS)

    Sukawattanavijit, Chanika; Srestasathiern, Panu

    2017-10-01

Land Use and Land Cover (LULC) information is significant for observing and evaluating environmental change. LULC classification using remotely sensed data is widely employed at global and local scales, particularly in urban areas, which have diverse land cover types that are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming popular for land cover classification with high-resolution imagery. In this work, COSMO-SkyMed SAR data were fused with THAICHOTE (formerly THEOS, the Thailand Earth Observation Satellite) optical data for object-based land cover classification, and object-based and pixel-based approaches to image fusion were compared. For the per-pixel method, support vector machines (SVM) were applied to the fused image based on principal component analysis (PCA); for the object-based method, a nearest neighbor (NN) classifier was applied to the fused images to separate the land cover classes. Finally, accuracy was assessed by comparing land cover maps generated from the fused image dataset and from the THAICHOTE image alone. Object-based classification of the fused COSMO-SkyMed and THAICHOTE images demonstrated the best classification accuracies, well over 85%; object-based data fusion thus provides higher land cover classification accuracy than per-pixel data fusion.

  12. Video based object representation and classification using multiple covariance matrices.

    PubMed

    Zhang, Yurong; Liu, Quan

    2017-01-01

Video-based object recognition and classification has been widely studied in computer vision and image processing. One main issue in this task is developing an effective representation for video, a problem that can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for image set representation and classification. The core idea of MCDL is to represent an image set using multiple covariance matrices, each representing one cluster of images. First, we use Nonnegative Matrix Factorization (NMF) to cluster the images within each image set, and then apply Covariance Discriminative Learning to each cluster (subset) of images. Finally, we use KLDA and nearest neighbor classification for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
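A compressed sketch of the representation step only, assuming nonnegative feature vectors per image and using scikit-learn's NMF with the cluster label taken as the dominant component (one reading of "NMF clustering"; the discriminative learning and KLDA stages are omitted):

```python
import numpy as np
from sklearn.decomposition import NMF

def multi_cov_representation(images, n_clusters=2):
    """Represent one image set (rows = nonnegative feature vectors) by
    several covariance matrices: cluster with NMF, where each image's
    label is its dominant NMF component, then compute one covariance
    matrix per cluster."""
    W = NMF(n_components=n_clusters, init='nndsvda', random_state=0,
            max_iter=500).fit_transform(images)
    labels = W.argmax(axis=1)
    covs = []
    for c in range(n_clusters):
        members = images[labels == c]
        covs.append(np.cov(members, rowvar=False) if len(members) > 1
                    else np.eye(images.shape[1]))
    return labels, covs
```

Each image set is then compared to another by matching their per-cluster covariance matrices rather than a single global covariance.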

  13. Deep learning application: rubbish classification with aid of an android device

    NASA Astrophysics Data System (ADS)

    Liu, Sijiang; Jiang, Bo; Zhan, Jie

    2017-06-01

Deep learning is a very hot topic currently in pattern recognition and artificial intelligence research. Aiming at the practical problem that people often do not know which category a piece of rubbish belongs to, and building on the strong image classification ability of deep learning methods, we designed a prototype system to help users classify rubbish. First, the CaffeNet model was adopted for our classification network and trained on the ImageNet dataset, and the trained network was deployed on a web server. Second, an Android app was developed that lets users capture images of unclassified rubbish, upload them to the web server for server-side analysis, and retrieve the feedback, so that users can conveniently obtain classification guidance on an Android device. Tests on our prototype show that an image of a single type of rubbish in its original shape can be classified reliably, while an image containing several kinds of rubbish, or rubbish with a changed shape, may fail to help users decide its classification. The system nevertheless shows promise as an aid to rubbish classification if the network training strategy is further optimized.

  14. Phenotype detection in morphological mutant mice using deformation features.

    PubMed

    Roy, Sharmili; Liang, Xi; Kitamoto, Asanobu; Tamura, Masaru; Shiroishi, Toshihiko; Brown, Michael S

    2013-01-01

Large-scale global efforts are underway to knock out each of the approximately 25,000 mouse genes and interpret their roles in shaping the mammalian embryo. Given the tremendous amount of data generated by imaging mutated prenatal mice, high-throughput image analysis systems are indispensable for characterizing mammalian development and diseases. Current state-of-the-art computational systems offer only differential volumetric analysis of pre-defined anatomical structures between various gene-knockout mouse strains. For subtle anatomical phenotypes, embryo phenotyping still relies on laborious histological techniques that are clearly unsuitable in such a big-data environment. This paper presents a system that automatically detects known phenotypes and assists in discovering novel phenotypes in µCT images of mutant mice. Deformation features obtained from non-linear registration of a mutant embryo to a normal consensus average image are extracted and analyzed to compute phenotypic and candidate phenotypic areas. The presented system is evaluated using C57BL/10 embryo images. All cases of ventricular septal defect and polydactyly, well known to be present in this strain, are successfully detected. The system predicts potential phenotypic areas in the liver that are under active histological evaluation for a possible phenotype of this mouse line.

  15. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    PubMed Central

    Huo, Guanying

    2017-01-01

As a typical deep-learning model, Convolutional Neural Networks (CNNs) can automatically extract features from images using a hierarchical structure inspired by the mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification; owing to the limited capacity of the softmax function, however, such models have some shortcomings in image classification. To address this, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition via a union of geometrical cover sets in a high-dimensional feature space and can therefore overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three well-known image classification benchmarks: MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method on the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which in most cases are much higher than those of the four comparison methods. PMID:28316614
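A toy illustration of the cover-set idea, reducing each class cover to a union of balls around that class's training samples (real BPR uses richer cover geometries such as hyper-sausage neurons, and the paper applies it to CNN features, not raw points):

```python
import numpy as np

def bpr_predict(x, class_covers, radius=1.0):
    """Assign x to a class only if it falls inside the union of balls
    (a crude geometrical cover) around that class's samples; otherwise
    reject it. Unlike softmax, which must pick some class, samples
    outside every cover are returned as None.
    class_covers: {label: (n_i, d) array-like of training samples}."""
    for label, pts in class_covers.items():
        if np.linalg.norm(np.asarray(pts) - x, axis=1).min() <= radius:
            return label
    return None  # x lies outside every class cover: rejected
```

The rejection option is the key difference from softmax classification that the abstract alludes to.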

  16. Cortical sensorimotor alterations classify clinical phenotype and putative genotype of spasmodic dysphonia.

    PubMed

    Battistella, G; Fuertinger, S; Fleysher, L; Ozelius, L J; Simonyan, K

    2016-10-01

Spasmodic dysphonia (SD), or laryngeal dystonia, is a task-specific isolated focal dystonia of unknown cause and pathophysiology. Although functional and structural abnormalities have been described in this disorder, evidence on the influence of its different clinical phenotypes and genotypes remains scant, making it difficult to explain SD pathophysiology and to identify potential biomarkers. We used a combination of independent component analysis and linear discriminant analysis of resting-state functional magnetic resonance imaging data to investigate brain organization in different SD phenotypes (abductor versus adductor type) and putative genotypes (familial versus sporadic cases) and to characterize neural markers for genotype/phenotype categorization. We found abnormal functional connectivity within sensorimotor and frontoparietal networks in patients with SD compared with healthy individuals, as well as phenotype- and genotype-distinct alterations of these networks involving primary somatosensory, premotor and parietal cortices. The linear discriminant analysis achieved 71% accuracy classifying SD and healthy individuals using connectivity measures in the left inferior parietal and sensorimotor cortices. When categorizing between different forms of SD, the combination of measures from the left inferior parietal, premotor and right sensorimotor cortices achieved 81% discriminatory power between familial and sporadic SD cases, whereas the combination of measures from the right superior parietal, primary somatosensory and premotor cortices led to 71% accuracy in the classification of adductor and abductor SD forms. Our findings present the first effort to identify and categorize isolated focal dystonia based on its brain functional connectivity profile, which may have a potential impact on the future development of biomarkers for this rare disorder. © 2016 EAN.

  17. Active Learning Strategies for Phenotypic Profiling of High-Content Screens.

    PubMed

    Smith, Kevin; Horvath, Peter

    2014-06-01

    High-content screening is a powerful method to discover new drugs and carry out basic biological research. Increasingly, high-content screens have come to rely on supervised machine learning (SML) to perform automatic phenotypic classification as an essential step of the analysis. However, this comes at a cost, namely, the labeled examples required to train the predictive model. Classification performance increases with the number of labeled examples, and because labeling examples demands time from an expert, the training process represents a significant time investment. Active learning strategies attempt to overcome this bottleneck by presenting the most relevant examples to the annotator, thereby achieving high accuracy while minimizing the cost of obtaining labeled data. In this article, we investigate the impact of active learning on single-cell-based phenotype recognition, using data from three large-scale RNA interference high-content screens representing diverse phenotypic profiling problems. We consider several combinations of active learning strategies and popular SML methods. Our results show that active learning significantly reduces the time cost and can be used to reveal the same phenotypic targets identified using SML. We also identify combinations of active learning strategies and SML methods which perform better than others on the phenotypic profiling problems we studied. © 2014 Society for Laboratory Automation and Screening.
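Margin-based uncertainty sampling, one of the standard active learning strategies of the kind the article evaluates, can be sketched like this (toy two-phenotype Gaussian data; scikit-learn logistic regression stands in for the SML method, and all sizes and rounds are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def margin_query(clf, X_pool, n=5):
    """Uncertainty (margin) sampling: pick the pool samples whose top two
    class probabilities are closest -- the ones the model is least sure of."""
    proba = np.sort(clf.predict_proba(X_pool), axis=1)
    margin = proba[:, -1] - proba[:, -2]
    return np.argsort(margin)[:n]

# toy data: two Gaussian "phenotypes"
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.repeat([0, 1], 200)
labeled = list(range(5)) + list(range(200, 205))  # small balanced seed set
for _ in range(5):                                 # five annotation rounds
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    pool = np.setdiff1d(np.arange(400), labeled)
    labeled += list(pool[margin_query(clf, X[pool])])
accuracy = clf.score(X, y)
```

Each round the annotator only labels the queried samples, which is exactly where the time savings reported in the article come from.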

  18. Contribution of non-negative matrix factorization to the classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

Remote sensing has become an unavoidable tool for better managing our environment, generally by producing land cover maps using classification techniques. The classification process requires some pre-processing, especially for data size reduction; the most usual technique is Principal Component Analysis. Another approach regards each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to recognize the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new (at least partly separated) images from HRV SPOT images of the Oran area, Algeria. These images are then used as inputs to a supervised classifier integrating textural information. Classification of these "separated" images shows a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of the initial (i.e., non-separated) images. These results show the value of NMF as an attractive pre-processing step for the classification of multispectral remote sensing imagery.

  19. Cascaded deep decision networks for classification of endoscopic images

    NASA Astrophysics Data System (ADS)

    Murthy, Venkatesh N.; Singh, Vivek; Sun, Shanhui; Bhattacharya, Subhabrata; Chen, Terrence; Comaniciu, Dorin

    2017-02-01

Both traditional and wireless capsule endoscopes can generate tens of thousands of images per patient. It is desirable to have the majority of irrelevant images filtered out by automatic algorithms during an offline review process, or to have automatic flagging of highly suspicious areas during online guidance. This also applies to the newly invented endomicroscopy, where online indication of tumor classification plays a significant role. Image classification is a standard pattern recognition problem and is well studied in the literature; however, performance on challenging endoscopic images still has room for improvement. In this paper, we present a novel Cascaded Deep Decision Network (CDDN) to improve image classification performance over standard deep neural network based methods. During the learning phase, CDDN automatically builds a network which discards samples that are classified with high confidence scores by a previously trained network and concentrates only on the challenging samples, which are handled by subsequent expert shallow networks. We validate CDDN on two types of endoscopic imaging, a polyp classification dataset and a tumor classification dataset, and show that CDDN can outperform other methods by about 10% on both. CDDN can also be applied to other image classification problems.
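The confidence-based hand-off at inference time can be sketched as follows. `ThresholdStage` is a toy stand-in with a `predict_proba` interface, not the paper's networks, and the 0.9 threshold is an assumption:

```python
import numpy as np

class ThresholdStage:
    """Toy classifier stage: a logistic score on the first feature, with
    a steepness parameter controlling how confident it gets."""
    def __init__(self, scale):
        self.scale = scale
    def predict_proba(self, X):
        p = 1.0 / (1.0 + np.exp(-self.scale * X[:, 0]))
        return np.column_stack([1 - p, p])

def cascade_predict(stages, X, threshold=0.9):
    """Keep a stage's predictions only when it is confident; forward the
    hard samples to the next (more specialized) stage, CDDN-style."""
    pred = np.full(len(X), -1)
    pending = np.arange(len(X))
    for clf in stages:
        proba = clf.predict_proba(X[pending])
        sure = proba.max(axis=1) >= threshold
        pred[pending[sure]] = proba[sure].argmax(axis=1)
        pending = pending[~sure]
        if pending.size == 0:
            break
    if pending.size:  # final stage decides whatever is still undecided
        proba = stages[-1].predict_proba(X[pending])
        pred[pending] = proba.argmax(axis=1)
    return pred
```

During training, the same routing means each later stage only ever sees the samples its predecessors found hard.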

  20. Dystonia: an update on phenomenology, classification, pathogenesis and treatment.

    PubMed

    Balint, Bettina; Bhatia, Kailash P

    2014-08-01

This article highlights recent advances in dystonia, focusing on clinical aspects such as the new classification, the syndromic approach, new gene discoveries and genotype-phenotype correlations; the broadening phenotypes of some previously described hereditary dystonias, environmental risk factors and trends in treatment are also covered. Based on phenomenology, a new consensus update on the definition, phenomenology and classification of dystonia and a syndromic approach to guide diagnosis have been proposed. Terminology has changed: 'isolated dystonia' is used where dystonia is the only motor feature apart from tremor, and the previously termed heredodegenerative dystonias and dystonia-plus syndromes are now subsumed under 'combined dystonia'. The recently discovered genes ANO3, GNAL and CIZ1 appear not to be a common cause of adult-onset cervical dystonia. Clinical and genetic heterogeneity underlie myoclonus-dystonia, dopa-responsive dystonia and deafness-dystonia syndrome. ALS2 gene mutations are a newly recognized cause of combined dystonia. The phenotypic and genotypic spectra of ATP1A3 mutations have broadened considerably. Two new genome-wide association studies identified new candidate genes. A retrospective analysis suggested complicated vaginal delivery as a modifying risk factor in DYT1. Recent studies confirm lasting therapeutic effects of deep brain stimulation in isolated dystonia and a good treatment response in myoclonus-dystonia, and suggest that early treatment correlates with a better outcome. Phenotypic classification continues to be important for recognizing particular forms of dystonia, including their syndromic associations. A number of genes underlie isolated or combined dystonia, and further discoveries will follow from advances in genetic technologies such as exome and whole-genome sequencing; the identification of new genes will facilitate better elucidation of pathogenetic mechanisms and possible corrective therapies.

  1. Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System

    PubMed Central

    Hosseini, Monireh Sheikh; Zekri, Maryam

    2012-01-01

Image classification is an issue that utilizes image processing, pattern recognition and classification methods. Automatic medical image classification is a progressive area of image classification and is expected to develop further; automated diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification over the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network; it combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks, with the objective of integrating the best features of fuzzy systems and neural networks. A brief comparison with other classifiers and the main advantages and drawbacks of this classifier are investigated. PMID:23493054
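ANFIS trains the parameters of a Sugeno fuzzy inference system by hybrid gradient/least-squares updates; its forward pass for a one-input, first-order system looks roughly like this (training omitted, and the five-layer breakdown in the comments follows the usual ANFIS description):

```python
import numpy as np

def anfis_forward(x, centers, sigmas, consequents):
    """Forward pass of a one-input first-order Sugeno FIS, the network
    that ANFIS trains. Rule i: IF x is Gaussian(c_i, s_i)
    THEN y_i = p_i * x + r_i, with consequents[i] = (p_i, r_i)."""
    w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)  # layers 1-2: memberships / firing strengths
    w_norm = w / w.sum()                              # layer 3: normalized firing strengths
    y = consequents[:, 0] * x + consequents[:, 1]     # layer 4: per-rule linear consequents
    return float((w_norm * y).sum())                  # layer 5: weighted-average output
```

In the medical-image setting reviewed here, x would be a feature extracted from the image and the premise/consequent parameters would be fitted to labeled cases.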

  2. Validation of the international labour office digitized standard images for recognition and classification of radiographs of pneumoconiosis.

    PubMed

    Halldin, Cara N; Petsonk, Edward L; Laney, A Scott

    2014-03-01

Chest radiographs are recommended for prevention and detection of pneumoconiosis. In 2011, the International Labour Office (ILO) released a revision of the International Classification of Radiographs of Pneumoconioses that included a digitized standard image set. The present study compared classifications of digital chest images performed using the new ILO 2011 digitized standard images with classification approaches used in the past. Underground coal miners (N = 172) were examined using both digital and film-screen radiography (FSR) on the same day. Seven National Institute for Occupational Safety and Health-certified B Readers independently classified all 172 digital radiographs, once using the ILO 2011 digitized standard images (DRILO2011-D) and once using the digitized standard images used in previous research (DRRES). The same seven B Readers classified all the miners' chest films using the ILO film-based standards. Agreement between classifications of FSR and digital radiography was identical whichever standard image set was used (DRILO2011-D or DRRES); the overall weighted κ value was 0.58. Some specific differences in the results were seen and noted. However, intra-reader variability in this study was similar to published values and did not appear to be affected by the use of the new ILO 2011 digitized standard images. These findings validate the use of the ILO digitized standard images for classification of small pneumoconiotic opacities. When digital chest radiographs are obtained and displayed appropriately, results of pneumoconiosis classifications using the 2011 ILO digitized standards are comparable to film-based ILO classifications and to classifications using earlier research standards. Published by Elsevier Inc.

  3. Using multidimensional topological data analysis to identify traits of hip osteoarthritis.

    PubMed

    Rossi-deVries, Jasmine; Pedoia, Valentina; Samaan, Michael A; Ferguson, Adam R; Souza, Richard B; Majumdar, Sharmila

    2018-05-07

Osteoarthritis (OA) is a multifaceted disease with many variables affecting diagnosis and progression. Topological data analysis (TDA) is a state-of-the-art big-data analytics tool that can combine all variables into a multidimensional space, allowing imaging and gait-analysis data to be analyzed simultaneously. The aim was to identify biochemical and biomechanical biomarkers able to classify different disease-progression phenotypes in subjects with and without radiographic signs of hip OA, in a longitudinal study comparing progressive and nonprogressive subjects (102 subjects in all). Imaging at 3T used SPGR 3D MAPSS T1ρ/T2 and intermediate-weighted fat-suppressed fast spin-echo (FSE) sequences. The multidimensional data included cartilage composition, bone shape, Kellgren-Lawrence (KL) grade of osteoarthritis, scoring of hip osteoarthritis with MRI (SHOMRI), and the hip disability and osteoarthritis outcome score (HOOS). Analysis used TDA, Kolmogorov-Smirnov (KS) testing, and Benjamini-Hochberg ranking of P-values to correct for multiple comparisons. Subjects in the later stages of the disease had an increased SHOMRI score (P < 0.0001), increased KL grade (P = 0.0012), and older age (P < 0.0001). Subjects in the healthier group showed intact cartilage and less pain; subjects between these two groups had a range of symptoms, and analysis of this subgroup identified knee biomechanics (P < 0.0001) as an initial marker of the disease that is noticeable before morphological progression and degeneration. Further analysis of an OA subgroup with femoroacetabular impingement (FAI) showed anterior labral tears to be the most significant marker (P = 0.0017) between FAI subjects with and without OA symptoms. The data-driven analysis obtained with TDA proposes new phenotypes of these subjects that partially overlap with the radiographic-based classical disease status classification, and shows the potential for further examination of an early-onset biomechanical intervention. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  4. Spanish Guidelines for Management of Chronic Obstructive Pulmonary Disease (GesEPOC) 2017. Pharmacological Treatment of Stable Phase.

    PubMed

    Miravitlles, Marc; Soler-Cataluña, Juan José; Calle, Myriam; Molina, Jesús; Almagro, Pere; Quintano, José Antonio; Trigueros, Juan Antonio; Cosío, Borja G; Casanova, Ciro; Antonio Riesco, Juan; Simonet, Pere; Rigau, David; Soriano, Joan B; Ancochea, Julio

    2017-06-01

    The clinical presentation of chronic obstructive pulmonary disease (COPD) varies widely, so treatment must be tailored according to the level of risk and phenotype. In 2012, the Spanish COPD Guidelines (GesEPOC) first established pharmacological treatment regimens based on clinical phenotypes. These regimens were subsequently adopted by other national guidelines, and since then, have been backed up by new evidence. In this 2017 update, the original severity classification has been replaced by a much simpler risk classification (low or high risk), on the basis of lung function, dyspnea grade, and history of exacerbations, while determination of clinical phenotype is recommended only in high-risk patients. The same clinical phenotypes have been maintained: non-exacerbator, asthma-COPD overlap (ACO), exacerbator with emphysema, and exacerbator with bronchitis. Pharmacological treatment of COPD is based on bronchodilators, the only treatment recommended in low-risk patients. High-risk patients will receive different drugs in addition to bronchodilators, depending on their clinical phenotype. GesEPOC reflects a more individualized approach to COPD treatment, according to patient clinical characteristics and level of risk or complexity. Copyright © 2017 SEPAR. Publicado por Elsevier España, S.L.U. All rights reserved.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, H; Lan, L; Sennett, C

Purpose: To gain insight into the role of parenchymal stroma in the characterization of breast tumors by incorporating computerized mammographic parenchyma assessment into breast CADx in the task of distinguishing between malignant and benign lesions. Methods: This study was performed on 182 biopsy-proven breast mass lesions, including 76 benign and 106 malignant lesions. For each full-field digital mammogram (FFDM) case, quantitative imaging analysis was performed on both the tumor and a region-of-interest (ROI) from the normal contralateral breast. Lesion characterization included automatic lesion segmentation and feature extraction. Radiographic texture analysis (RTA) was applied to the normal ROIs to assess the mammographic parenchymal patterns of the contralateral normal breasts. Classification performance of both individual computer-extracted features and the output of a Bayesian artificial neural network (BANN) was evaluated with a leave-one-lesion-out method using receiver operating characteristic (ROC) analysis, with area under the curve (AUC) as the figure of merit. Results: Lesion characterization included computer-extracted phenotypes of spiculation, size, shape, and margin. For parenchymal pattern characterization, five texture features were selected, including power-law beta, contrast, and edge gradient. Merging these computer-selected features using BANN classifiers yielded AUC values of 0.79 (SE = 0.03) and 0.67 (SE = 0.04) in the task of distinguishing between malignant and benign lesions using only tumor phenotypes and only texture features from the contralateral breasts, respectively. Incorporating tumor phenotypes together with parenchyma texture features into the BANN improved classification performance, with an AUC value of 0.83 (SE = 0.03) in the task of differentiating malignant from benign lesions.
Conclusion: Combining computerized tumor and parenchyma phenotyping was found to significantly improve breast cancer diagnostic accuracy, highlighting the need to consider both tumor and stroma in decision making. Funding: University of Chicago Dean Bridge Fund, NCI U24-CA143848-05, P50-CA58223 Breast SPORE program, and Breast Cancer Research Foundation. COI: MLG is a stockholder in R2 technology/Hologic and receives royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba. MLG is a cofounder and stockholder in Quantitative Insights.

  6. Stroke subtyping for genetic association studies? A comparison of the CCS and TOAST classifications.

    PubMed

    Lanfranconi, Silvia; Markus, Hugh S

    2013-12-01

    A reliable and reproducible classification system of stroke subtype is essential for epidemiological and genetic studies. The Causative Classification of Stroke system is an evidence-based computerized algorithm with excellent inter-rater reliability. It has been suggested that, compared to the Trial of ORG 10172 in Acute Stroke Treatment classification, it increases the proportion of cases with defined subtype that may increase power in genetic association studies. We compared Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke system classifications in a large cohort of well-phenotyped stroke patients. Six hundred ninety consecutively recruited patients with first-ever ischemic stroke were classified, using review of clinical data and original imaging, according to the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke system classifications. There was excellent agreement subtype assigned by between Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke system (kappa = 0·85). The agreement was excellent for the major individual subtypes: large artery atherosclerosis kappa = 0·888, small-artery occlusion kappa = 0·869, cardiac embolism kappa = 0·89, and undetermined category kappa = 0·884. There was only moderate agreement (kappa = 0·41) for the subjects with at least two competing underlying mechanism. Thirty-five (5·8%) patients classified as undetermined by Trial of ORG 10172 in Acute Stroke Treatment were assigned to a definite subtype by Causative Classification of Stroke system. Thirty-two subjects assigned to a definite subtype by Trial of ORG 10172 in Acute Stroke Treatment were classified as undetermined by Causative Classification of Stroke system. 
There is excellent agreement between classifications using the Trial of ORG 10172 in Acute Stroke Treatment and Causative Classification of Stroke systems, but no evidence that the Causative Classification of Stroke system reduced the proportion of patients assigned to undetermined subtypes. The excellent inter-rater reproducibility and web-based semiautomated nature make the Causative Classification of Stroke system suitable for multicenter studies, but the benefit of reclassifying cases already classified using the Trial of ORG 10172 in Acute Stroke Treatment system on existing databases is likely to be small. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
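The inter-rater agreement this record reports is Cohen's kappa: observed agreement corrected for the agreement expected by chance from each rater's label frequencies. A minimal sketch follows; the subtype labels and values are purely hypothetical, not the study's data.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters' categorical assignments."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement under independent marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical subtype assignments (LAA = large artery atherosclerosis,
# SAO = small-artery occlusion, CE = cardiac embolism, UND = undetermined).
toast = ["LAA", "SAO", "CE", "UND", "LAA", "CE", "SAO", "UND"]
ccs   = ["LAA", "SAO", "CE", "LAA", "LAA", "CE", "SAO", "UND"]
print(round(cohen_kappa(toast, ccs), 3))  # 0.833: "excellent" agreement
```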

  7. Land-cover classification in a moist tropical region of Brazil with Landsat TM imagery.

    PubMed

    Li, Guiying; Lu, Dengsheng; Moran, Emilio; Hetrick, Scott

    2011-01-01

This research aims to improve land-cover classification accuracy in a moist tropical region in Brazil by examining the use of different remote sensing-derived variables and classification algorithms. Different scenarios based on Landsat Thematic Mapper (TM) spectral data with derived vegetation indices and textural images, and different classification algorithms - maximum likelihood classification (MLC), artificial neural network (ANN), classification tree analysis (CTA), and object-based classification (OBC) - were explored. The results indicated that adding vegetation indices as extra bands to the Landsat TM multispectral bands did not improve overall classification performance, but adding textural images was valuable for improving vegetation classification accuracy. In particular, combining both vegetation indices and textural images with the TM multispectral bands improved overall classification accuracy by 5.6% and the kappa coefficient by 6.25%. Comparison of the different classification algorithms indicated that CTA and ANN performed poorly in this research, but OBC improved primary forest and pasture classification accuracies. This research indicates that the use of textural images or of OBC is especially valuable for improving vegetation classes, such as upland and liana forest, that have complex stand structures and relatively large patch sizes.
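"Adding vegetation indices as extra bands" means computing an index such as NDVI per pixel and stacking it alongside the original spectral values before classification. A minimal sketch, with hypothetical reflectance values (TM band 3 = red, band 4 = near-infrared):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# Hypothetical 2x2 image: per-pixel reflectances for the red and NIR bands.
red_band = [[0.10, 0.30], [0.05, 0.25]]
nir_band = [[0.50, 0.35], [0.45, 0.30]]

# Stack NDVI as an extra feature next to the original spectral bands.
stacked = [
    [(red_band[i][j], nir_band[i][j], ndvi(nir_band[i][j], red_band[i][j]))
     for j in range(2)]
    for i in range(2)
]
print(round(stacked[0][0][2], 3))  # NDVI of the top-left pixel: 0.667
```

Each pixel's feature vector now has three entries instead of two; a classifier sees the index as just another band.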

  8. Land-cover classification in a moist tropical region of Brazil with Landsat TM imagery

    PubMed Central

    LI, GUIYING; LU, DENGSHENG; MORAN, EMILIO; HETRICK, SCOTT

    2011-01-01

This research aims to improve land-cover classification accuracy in a moist tropical region in Brazil by examining the use of different remote sensing-derived variables and classification algorithms. Different scenarios based on Landsat Thematic Mapper (TM) spectral data with derived vegetation indices and textural images, and different classification algorithms – maximum likelihood classification (MLC), artificial neural network (ANN), classification tree analysis (CTA), and object-based classification (OBC) – were explored. The results indicated that adding vegetation indices as extra bands to the Landsat TM multispectral bands did not improve overall classification performance, but adding textural images was valuable for improving vegetation classification accuracy. In particular, combining both vegetation indices and textural images with the TM multispectral bands improved overall classification accuracy by 5.6% and the kappa coefficient by 6.25%. Comparison of the different classification algorithms indicated that CTA and ANN performed poorly in this research, but OBC improved primary forest and pasture classification accuracies. This research indicates that the use of textural images or of OBC is especially valuable for improving vegetation classes, such as upland and liana forest, that have complex stand structures and relatively large patch sizes. PMID:22368311

  9. Classification of large-scale fundus image data sets: a cloud-computing framework.

    PubMed

    Roychowdhury, Sohini

    2016-08-01

Large medical image data sets with high dimensionality require a substantial amount of computation time for data creation and data processing. This paper presents a novel generalized method that finds optimal image-based feature sets that reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature ranking strategies are used to identify the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using the boosted decision tree and decision forest classifiers in the Microsoft Azure Machine Learning Studio platform, respectively. For images from the DIARETDB1 data set, the 40 highest-ranked features are used to classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. Also, for classification of red lesion regions and for distinguishing hemorrhages from microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from the STARE data set, 40 high-ranked features can classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance borderline classification performance in automated screening systems.
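The abstract does not state which ranking criterion was used, so as one illustrative possibility the sketch below ranks feature columns by absolute Pearson correlation with the class label and keeps the top k. The region features and labels are hypothetical.

```python
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sqrt(sum((a - mx) ** 2 for a in x))
    vy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy) if vx and vy else 0.0

def rank_features(feature_matrix, labels, k):
    """Rank feature columns by |correlation| with the label; keep top k."""
    n_feat = len(feature_matrix[0])
    scores = []
    for j in range(n_feat):
        col = [row[j] for row in feature_matrix]
        scores.append((abs(pearson(col, labels)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]

# Hypothetical region features: [area, mean_intensity, eccentricity]
X = [[10, 0.9, 0.1], [12, 0.8, 0.9], [50, 0.2, 0.2], [55, 0.1, 0.8]]
y = [0, 0, 1, 1]  # non-lesion vs. lesion
print(rank_features(X, y, 2))  # eccentricity is uninformative here
```

Any monotone score (information gain, AUC per feature, etc.) can be dropped into the same top-k selection loop.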

  10. Enabling phenotypic big data with PheNorm.

    PubMed

    Yu, Sheng; Ma, Yumeng; Gronsbell, Jessica; Cai, Tianrun; Ananthakrishnan, Ashwin N; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Liao, Katherine P; Cai, Tianxi

    2018-01-01

Electronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training. The most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating characteristic curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification. We validated the accuracy of PheNorm with 4 phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The AUCs of the PheNorm score reached 0.90, 0.94, 0.95, and 0.94 for the 4 phenotypes, respectively, which were comparable to the accuracy of supervised algorithms trained with sample sizes of 100-300, with no statistically significant difference. The accuracy of the PheNorm algorithms is on par with algorithms trained with annotated samples. PheNorm fully automates the generation of accurate phenotyping algorithms and demonstrates the capacity for EHR-driven annotations to scale to the next level: phenotypic big data. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
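One ingredient of this kind of normalization, sketched very loosely here, is log-scaling a raw phenotype code count and adjusting it for healthcare utilization (e.g. the patient's total note count), so that heavily documented patients are not automatically scored as cases. This is an illustrative simplification, not the published PheNorm procedure, and the patient tuples are hypothetical.

```python
from math import log

def normalized_count(code_count, note_count):
    """Log-scale a phenotype code count and adjust for utilization
    (total clinical notes). A rough, illustrative transformation only."""
    return log(1 + code_count) - log(1 + note_count)

# Hypothetical patients: (ICD code mentions of the phenotype, total notes)
patients = [(40, 50), (2, 200), (0, 30)]
scores = [round(normalized_count(c, n), 3) for c, n in patients]
print(scores)  # the first patient scores highest despite fewer raw notes
```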

  11. Natural image classification driven by human brain activity

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Peng, Hanyang; Wang, Jinqiao; Tang, Ming; Xue, Rong; Zuo, Zhentao

    2016-03-01

Natural image classification has been a hot topic in computer vision and pattern recognition research. Since the performance of an image classification system can be improved by feature selection, many image feature selection methods have been developed. However, existing supervised feature selection methods are typically driven by class label information that is identical for different samples from the same class, ignoring within-class image variability and therefore degrading feature selection performance. In this study, we propose a novel feature selection method driven by human brain activity signals, collected using fMRI while human subjects viewed natural images of different categories. The fMRI signals associated with subjects viewing different images encode the human perception of natural images, and therefore may capture image variability within and across categories. We then select image features under the guidance of fMRI signals from brain regions that respond actively to image viewing. In particular, bag-of-words features based on the GIST descriptor are extracted from natural images for classification, and a sparse-regression-based feature selection method is adapted to select the image features that best predict the fMRI signals. Finally, a classification model is built on the selected image features to classify images without fMRI signals. Validation experiments classifying images from 4 categories for two subjects demonstrated that our method achieves much better classification performance than classifiers built on image features selected by traditional feature selection methods.
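The core idea of selecting the image features that best predict a brain signal can be sketched with a greedy, matching-pursuit-style routine: repeatedly pick the feature most correlated with the current residual of the fMRI signal, then subtract its univariate fit. This is a stand-in for whatever sparse regression the authors used; the feature columns and voxel signal below are hypothetical.

```python
def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def greedy_sparse_select(features, target, k):
    """Pick k feature columns that best explain the target signal:
    take the feature with the largest normalized projection onto the
    residual, fit it, subtract, repeat."""
    residual = center(target)
    cols = [center(c) for c in features]
    chosen = []
    for _ in range(k):
        scores = [(abs(dot(c, residual)) / (dot(c, c) ** 0.5 or 1.0), j)
                  for j, c in enumerate(cols) if j not in chosen]
        _, best = max(scores)
        chosen.append(best)
        c = cols[best]
        beta = dot(c, residual) / dot(c, c)
        residual = [r - beta * x for r, x in zip(residual, c)]
    return chosen

# Hypothetical image features (columns) and one fMRI voxel signal.
features = [[1, 2, 3, 4],        # tracks the signal closely
            [4, 3, 2, 1.5],      # roughly anti-correlated, weaker
            [1, 1, 2, 2]]        # explains leftover variation
fmri = [1.1, 2.0, 2.9, 4.2]
print(greedy_sparse_select(features, fmri, 2))
```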

  12. Spatial-spectral blood cell classification with microscopic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ran, Qiong; Chang, Lan; Li, Wei; Xu, Xiaofeng

    2017-10-01

Microscopic hyperspectral images provide a new way to examine blood cells. The hyperspectral imagery can greatly facilitate the classification of different blood cells. In this paper, microscopic hyperspectral images are acquired by connecting a microscope to a hyperspectral imager and are then tested for blood cell classification. For combined use of the spectral and spatial information provided by hyperspectral images, a spatial-spectral classification method is derived from the classical extreme learning machine (ELM) by integrating spatial context into the image classification task with a Markov random field (MRF) model. Comparisons are made among the ELM, ELM-MRF, support vector machine (SVM), and SVM-MRF methods. Results show that the spatial-spectral classification methods (ELM-MRF, SVM-MRF) perform better than the pixel-based methods (ELM, SVM), and that the proposed ELM-MRF has higher precision and locates cells more accurately.
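The spatial side of such a spatial-spectral method can be illustrated by the simplest MRF-flavoured post-processing step: iterated conditional modes with a Potts prior degenerates, for equal likelihoods, into relabelling each pixel by majority vote of its 4-neighbours. This is only a sketch of spatial regularisation, not the paper's ELM-MRF; the label grid is hypothetical.

```python
def mrf_smooth(labels, iterations=2):
    """Majority-vote relabelling over 4-neighbourhoods: a minimal
    ICM/Potts-style spatial smoothing of a per-pixel classification."""
    h, w = len(labels), len(labels[0])
    for _ in range(iterations):
        new = [row[:] for row in labels]
        for i in range(h):
            for j in range(w):
                votes = {}
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        lbl = labels[ni][nj]
                        votes[lbl] = votes.get(lbl, 0) + 1
                best = max(votes, key=votes.get)
                # Flip only when another label strictly outvotes the current one.
                if votes[best] > votes.get(labels[i][j], 0):
                    new[i][j] = best
        labels = new
    return labels

# A lone mislabelled pixel inside a homogeneous cell region is corrected.
print(mrf_smooth([[1, 1, 1], [1, 0, 1], [1, 1, 1]]))
```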

  13. Rotationally Invariant Image Representation for Viewing Direction Classification in Cryo-EM

    PubMed Central

    Zhao, Zhizhen; Singer, Amit

    2014-01-01

    We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of 1D periodic signals to 2D images. The randomized PCA algorithm is then used to efficiently reduce the dimensionality of the bispectrum coefficients, enabling fast computation of the similarity between any pair of images. The nearest neighbors provide an initial classification of similar viewing angles. In this way, rotational alignment is only performed for images with their nearest neighbors. The initial nearest neighbor classification and alignment are further improved by a new classification method called vector diffusion maps. Our pipeline for viewing angle classification and alignment is experimentally shown to be faster and more accurate than reference-free alignment with rotationally invariant K-means clustering, MSA/MRA 2D classification, and their modern approximations. PMID:24631969
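The key property the record relies on is easy to verify numerically: if rotating an image by alpha multiplies each angular expansion coefficient c[k] by exp(i*k*alpha), then the triple product c[k1]*c[k2]*conj(c[k1+k2]) is unchanged, because the phases cancel (k1 + k2 - (k1+k2) = 0). The coefficients below are hypothetical stand-ins for steerable-PCA output.

```python
import cmath

def bispectrum(coeffs, k1, k2):
    """Bispectrum entry b(k1,k2) = c[k1] * c[k2] * conj(c[k1+k2])."""
    return coeffs[k1] * coeffs[k2] * coeffs[k1 + k2].conjugate()

# Hypothetical expansion coefficients of one image (key = angular frequency).
c = {1: 0.8 + 0.3j, 2: -0.2 + 0.5j, 3: 0.1 - 0.4j}

# An in-plane rotation by alpha phase-shifts c[k] by exp(i*k*alpha).
alpha = 0.7
c_rot = {k: v * cmath.exp(1j * k * alpha) for k, v in c.items()}

b0 = bispectrum(c, 1, 2)
b1 = bispectrum(c_rot, 1, 2)
print(abs(b0 - b1) < 1e-12)  # True: rotation-invariant
```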

  14. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.

    PubMed

    Li, Linyi; Xu, Tingbao; Chen, Yun

    2017-01-01

In recent years the spatial resolutions of remote sensing images have improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the comparison methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  15. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    PubMed Central

    Xu, Tingbao; Chen, Yun

    2017-01-01

In recent years the spatial resolutions of remote sensing images have improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the comparison methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images. PMID:28761440

  16. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    PubMed Central

    Wang, Guizhou; Liu, Jianbo; He, Guojin

    2013-01-01

This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area-dominant principle. During the mapping process, an area proportion threshold is set, and a region is labeled unclassified if its maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum-distance-to-mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
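The area-dominant mapping step in this record reduces to a per-segment majority rule with a threshold. A minimal sketch, with hypothetical segment contents and class names:

```python
def map_segment(pixel_classes, threshold=0.5):
    """Area-dominant mapping for one segment: assign the pixel-based
    class covering the largest share of the segment, or 'unclassified'
    if that share does not exceed the area proportion threshold."""
    counts = {}
    for c in pixel_classes:
        counts[c] = counts.get(c, 0) + 1
    best = max(counts, key=counts.get)
    share = counts[best] / len(pixel_classes)
    return best if share > threshold else "unclassified"

# Hypothetical watershed segments containing 10 SVM-labelled pixels each.
print(map_segment(["water"] * 7 + ["shadow"] * 3))           # dominant class wins
print(map_segment(["road"] * 4 + ["roof"] * 3 + ["bare"] * 3))  # no dominant class
```

Segments that come out "unclassified" would then go to the spectral minimum-distance-to-mean reclassification described in the abstract.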

  17. Drug related webpages classification using images and text information based on multi-kernel learning

    NASA Astrophysics Data System (ADS)

    Hu, Ruiguang; Xiao, Liping; Zheng, Wenjuan

    2015-12-01

In this paper, multi-kernel learning (MKL) is used for drug-related webpage classification. First, body text and image-label text are extracted through HTML parsing, and valid images are chosen by the FOCARSS algorithm. Second, a text-based BOW model is used to generate the text representation, and an image-based BOW model is used to generate the image representation. Last, the text and image representations are fused with several methods. Experimental results demonstrate that the classification accuracy of MKL is higher than that of all other fusion methods at the decision level and feature level, and much higher than the accuracy of single-modal classification.

  18. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
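The simplest of the three classifiers compared above, minimum distance, assigns each pixel to the class whose mean spectral vector is nearest. A minimal sketch with hypothetical class means over three bands:

```python
def minimum_distance_classify(pixel, class_means):
    """Assign the pixel to the class whose mean spectral vector is
    nearest in (squared) Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_means, key=lambda c: dist2(pixel, class_means[c]))

# Hypothetical per-class mean vectors over three spectral bands.
means = {"water": (10, 20, 5), "vegetation": (40, 80, 30), "urban": (90, 85, 80)}
print(minimum_distance_classify((42, 75, 28), means))  # vegetation
```

Because the decision depends only on distances to class means, this classifier is relatively insensitive to the loss of pixel-to-pixel detail that the record observes under heavy JPEG compression.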

  19. The research on medical image classification algorithm based on PLSA-BOW model.

    PubMed

    Cao, C H; Cao, H L

    2016-04-29

With the rapid development of modern medical imaging technology, medical image classification has become more important for medical diagnosis and treatment. To address the problem of polysemous words and synonyms, this study combines the bag-of-words model with Probabilistic Latent Semantic Analysis (PLSA) and proposes the PLSA-BOW (Probabilistic Latent Semantic Analysis-Bag of Words) model. In this paper we carry the bag-of-words model over from the text domain to the image domain and build a visual bag-of-words model. The method enables the accuracy of bag-of-words-based classification to be further improved. The experimental results show that the PLSA-BOW model leads to more accurate medical image classification.
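The visual bag-of-words representation underlying PLSA-BOW quantizes local image descriptors to their nearest "visual word" in a codebook and counts occurrences. A minimal sketch with hypothetical 2-D descriptors and a 3-word codebook:

```python
def bow_histogram(descriptors, codebook):
    """Quantize local descriptors to the nearest visual word and
    count occurrences: the bag-of-visual-words representation."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    hist = [0] * len(codebook)
    for d in descriptors:
        nearest = min(range(len(codebook)), key=lambda i: dist2(d, codebook[i]))
        hist[nearest] += 1
    return hist

# Hypothetical patch descriptors from one image and a tiny codebook.
codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
patches = [(0.1, 0.1), (0.9, 1.1), (0.2, 0.9), (1.1, 0.8), (0.0, 0.2)]
print(bow_histogram(patches, codebook))  # word counts for this image
```

PLSA then models these per-image count histograms with latent topics, which is what lets it group synonymous visual words and split polysemous ones.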

  20. Morphology in the Digital Age: Integrating High Resolution Description of Structural Alterations with Phenotypes and Genotypes

    PubMed Central

    Nast, Cynthia C.; Lemley, Kevin V.; Hodgin, Jeffrey B.; Bagnasco, Serena; Avila-Casado, Carmen; Hewitt, Stephen M; Barisoni, Laura

    2015-01-01

    Conventional light microscopy (CLM) has been used to characterize and classify renal diseases, evaluate histopathology in studies and trials, and educate renal pathologists and nephrologists. The advent of digital pathology, in which a glass slide can be scanned to create whole slide images (WSI) for viewing and manipulating on a computer monitor, provides real and potential advantages over CLM. Software tools such as annotation, morphometry and image analysis can be applied to WSIs for studies or educational purposes, and the digital images are globally available to clinicians, pathologists and investigators. New ways of assessing renal pathology with observational data collection may allow better morphologic correlations and integration with molecular and genetic signatures, refinements of classification schema, and understanding of disease pathogenesis. In multicenter studies, WSI, which require additional quality assurance steps, provide efficiencies by reducing slide shipping and consensus conference costs, and allowing anytime anywhere slide viewing. While validation studies for the routine diagnostic use of digital pathology still are needed, this is a powerful tool currently available for translational research, clinical trials and education in renal pathology. PMID:26215864

  1. Patients With Undetermined Stroke Have Increased Atrial Fibrosis: A Cardiac Magnetic Resonance Imaging Study.

    PubMed

    Fonseca, Ana Catarina; Alves, Pedro; Inácio, Nuno; Marto, João Pedro; Viana-Baptista, Miguel; Pinho-E-Melo, Teresa; Ferro, José M; Almeida, Ana G

    2018-03-01

Some patients with ischemic strokes that are currently classified as having an undetermined cause may have structural or functional changes of the left atrium (LA) and left atrial appendage, which increase their risk of thromboembolism. We compared the LA and left atrial appendage of patients with different ischemic stroke causes using cardiac magnetic resonance imaging. We prospectively included a consecutive sample of ischemic stroke patients. Patients with structural changes on echocardiography currently considered causal for stroke in the Trial of ORG 10172 in Acute Stroke Treatment (TOAST) classification were excluded. Cardiac magnetic resonance imaging at 3 T was performed. One hundred and eleven patients were evaluated. Patients with an undetermined cause had a higher percentage of LA fibrosis (P=0.03) than patients with other stroke causes and lower, although not statistically significant, values of LA ejection fraction. Patients with atrial fibrillation and undetermined stroke cause showed similar values of atrial fibrosis. The LA phenotype found in patients with an undetermined cause supports the hypothesis that an atrial disease may be associated with stroke. © 2018 American Heart Association, Inc.

  2. Tissue classification for laparoscopic image understanding based on multispectral texture analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Wirkert, Sebastian J.; Iszatt, Justin; Kenngott, Hannes; Wagner, Martin; Mayer, Benjamin; Stock, Christian; Clancy, Neil T.; Elson, Daniel S.; Maier-Hein, Lena

    2016-03-01

    Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.

  3. Cardiac macrophage biology in the steady-state heart, the aging heart, and following myocardial infarction

    PubMed Central

    Ma, Yonggang; Mouton, Alan J.; Lindsey, Merry L.

    2018-01-01

Macrophages play critical roles in homeostatic maintenance of the myocardium under normal conditions and in tissue repair after injury. In the steady-state heart, resident cardiac macrophages remove senescent and dying cells and facilitate electrical conduction. In the aging heart, the shift in macrophage phenotype to a proinflammatory subtype leads to inflammaging. Following myocardial infarction (MI), macrophages recruited to the infarct produce both proinflammatory and anti-inflammatory mediators (cytokines, chemokines, matrix metalloproteinases, and growth factors), phagocytize dead cells, and promote angiogenesis and scar formation. These diverse properties are attributed to distinct macrophage subtypes and polarization status. Infarct macrophages exhibit a proinflammatory M1 phenotype early and become polarized toward an anti-inflammatory M2 phenotype later post-MI. Although this classification system is oversimplified and needs to be refined to accommodate the multiple different macrophage subtypes that have been recently identified, general concepts on macrophage roles are independent of subtype classification. This review summarizes current knowledge about cardiac macrophage origins, roles, and phenotypes in the steady state, with aging, and after MI, as well as highlights outstanding areas of investigation. PMID:29106912

  4. Validated and longitudinally stable asthma phenotypes based on cluster analysis of the ADEPT study.

    PubMed

    Loza, Matthew J; Djukanovic, Ratko; Chung, Kian Fan; Horowitz, Daniel; Ma, Keying; Branigan, Patrick; Barnathan, Elliot S; Susulic, Vedrana S; Silkoff, Philip E; Sterk, Peter J; Baribaud, Frédéric

    2016-12-15

Asthma is a disease of varying severity and differing disease mechanisms. To date, studies aimed at stratifying asthma into clinically useful phenotypes have produced a number of phenotypes that have yet to be assessed for stability and validated in independent cohorts. The aim of this study was to define and validate, for the first time, clinically driven asthma phenotypes using two independent, severe asthma cohorts: ADEPT and U-BIOPRED. Fuzzy partition-around-medoid clustering was performed on pre-specified data from the ADEPT participants (n = 156) and independently on data from a subset of U-BIOPRED asthma participants (n = 82) for whom the same variables were available. Models for cluster classification probabilities were derived and applied to the 12-month longitudinal ADEPT data and to a larger subset of the U-BIOPRED asthma dataset (n = 397). High and low type-2 inflammation phenotypes were defined as high or low Th2 activity, indicated by gene expression changes downstream of IL-4 or IL-13 in endobronchial biopsies. Four phenotypes were identified in the ADEPT (training) cohort, with distinct clinical and biomarker profiles. Phenotype 1 was "mild, good lung function, early onset", with a low-inflammation, predominantly Type-2 profile. Phenotype 2 was "moderate, hyper-responsive, eosinophilic", with moderate asthma control, mild airflow obstruction and predominant Type-2 inflammation. Phenotype 3 was "mixed severity, predominantly fixed obstructive, non-eosinophilic and neutrophilic", with moderate asthma control and low Type-2 inflammation. Phenotype 4 was "severe uncontrolled, severe reversible obstruction, mixed granulocytic", with moderate Type-2 inflammation. These phenotypes had good longitudinal stability in the ADEPT cohort. They were reproduced and demonstrated high classification probability in two subsets of the U-BIOPRED asthma cohort. 
Focusing on the biology of the four clinical, independently validated, easy-to-assess ADEPT asthma phenotypes will help in understanding the unmet need and aid in developing tailored therapies. NCT01274507 (ADEPT), registered October 28, 2010, and NCT01982162 (U-BIOPRED), registered October 30, 2013.

  5. Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover

    NASA Astrophysics Data System (ADS)

    Li, Mengmeng; Bijker, Wietske; Stein, Alfred

    2015-04-01

Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows, which in turn involves a two-step procedure. The first step is a preliminary image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude first that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects represented by a fuzzy landscape is important for building extraction.

  6. Pet fur color and texture classification

    NASA Astrophysics Data System (ADS)

    Yen, Jonathan; Mukherjee, Debarghar; Lim, SukHwan; Tretter, Daniel

    2007-01-01

Object segmentation is important in image analysis for imaging tasks such as image rendering and image retrieval. Pet owners have been known to be quite vocal about how important it is to render their pets perfectly. We present here an algorithm for pet (mammal) fur color classification and an algorithm for pet (animal) fur texture classification. Pet fur color classification can be applied as a necessary condition for identifying the regions in an image that may contain pets, much like skin tone classification for human flesh detection. As a result of evolution, fur coloration of all mammals is produced by a natural organic pigment called melanin, which has only a very limited color range. We have conducted a statistical analysis and concluded that, after proper color quantization, mammal fur colors can only be levels of gray or one of two colors. This pet fur color classification algorithm has been applied to pet-eye detection. We also present an algorithm for animal fur texture classification using the recently developed multi-resolution directional sub-band contourlet transform. The experimental results are very promising, as these transforms can identify regions of an image that may contain fur of mammals, scales of reptiles, feathers of birds, etc. Combining the color and texture classification, one can obtain a set of strong classifiers for identifying possible animals in an image.
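The color side of this idea, used as a necessary condition rather than a full detector, can be sketched as a per-pixel test: accept near-gray pixels, plus pixels whose channels follow the warm brown/red ordering typical of melanin. The thresholds and the ordering rule here are illustrative assumptions, not the paper's trained classifier.

```python
def is_mammal_fur_candidate(rgb, gray_tolerance=20):
    """Crude per-pixel test following the abstract's observation:
    mammal fur is either a level of gray or a narrow melanin-driven
    (brown/reddish) color. Thresholds are illustrative only."""
    r, g, b = rgb
    if max(rgb) - min(rgb) <= gray_tolerance:  # near-gray pixel
        return True
    return r >= g >= b                          # warm brown/red ordering

print(is_mammal_fur_candidate((120, 118, 125)))  # gray fur
print(is_mammal_fur_candidate((140, 90, 40)))    # brown fur
print(is_mammal_fur_candidate((40, 200, 60)))    # saturated green: rejected
```

As with skin-tone pre-filtering, pixels passing this test only mark candidate regions; the texture classifier then does the discriminative work.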

  7. Exploring the impact of wavelet-based denoising in the classification of remote sensing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco

    2016-10-01

The classification of remote sensing hyperspectral images for land cover applications is a very active topic. In the case of supervised classification, Support Vector Machines (SVMs) play a dominant role. Recently, the Extreme Learning Machine (ELM) algorithm has also been used extensively. The classification scheme previously published by the authors, called WT-EMP, introduces spatial information into the classification process by means of an Extended Morphological Profile (EMP) that is created from features extracted by wavelets. In addition, the hyperspectral image is denoised in the 2-D spatial domain, also using wavelets, and is joined to the EMP via a stacked vector. In this paper, the scheme is improved to achieve two goals. The first is to reduce the classification time while preserving accuracy by using ELM instead of SVM. The second is to improve the accuracy by performing not only a 2-D denoising for every spectral band, but also an additional prior 1-D spectral-signature denoising applied to each pixel vector of the image. For each denoising step, the image is transformed by applying a 1-D or 2-D wavelet transform, and then NeighShrink thresholding is applied. Improvements in classification accuracy are obtained, especially for images with close regions in the classification reference map, because in these cases the accuracy of the classification at the edges between classes is more relevant.
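The 1-D spectral-signature denoising step can be illustrated with a one-level Haar transform followed by plain soft thresholding of the detail coefficients. This is a simpler stand-in for the NeighShrink rule the paper uses (NeighShrink thresholds each coefficient based on its neighbours' energy); the signal is a hypothetical flat spectrum with one small noise spike.

```python
def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising of a 1-D signal (even length)
    with plain soft thresholding of the detail coefficients."""
    s = 2 ** 0.5
    approx = [(a + b) / s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[::2], signal[1::2])]
    # Soft-threshold: shrink details toward zero, killing small noise.
    soft = [max(abs(d) - threshold, 0.0) * (1 if d >= 0 else -1)
            for d in detail]
    out = []
    for a, d in zip(approx, soft):
        out.extend([(a + d) / s, (a - d) / s])
    return [round(x, 3) for x in out]

# A flat spectral signature corrupted by one small spike is flattened out.
print(haar_denoise([5.0, 5.0, 5.2, 4.8, 5.0, 5.0, 5.0, 5.0], threshold=0.5))
```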

  8. A new tool for supervised classification of satellite images available on web servers: Google Maps as a case study

    NASA Astrophysics Data System (ADS)

    García-Flores, Agustín.; Paz-Gallardo, Abel; Plaza, Antonio; Li, Jun

    2016-10-01

    This paper describes a new web platform dedicated to the classification of satellite images, called Hypergim. The current implementation of this platform enables users to perform classification of satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms like Isodata and K-means. Here, we present an extension of the original platform in which we adapt Hypergim to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of classes present in the images for use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the Random Forest classification algorithm. This implementation is a modification of the well-known CURFIL software package. The use of this type of algorithm to perform image classification is widespread today thanks to its precision and ease of training. The actual implementation of Random Forest was developed using the CUDA platform, which enables us to exploit the potential of several models of NVIDIA graphics processing units, using them to execute general-purpose computing tasks such as image classification algorithms. As well as CUDA, we use other parallel libraries such as Intel Boost, taking advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed on a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool concurrently. The experimental results indicate that the new algorithm widely outperforms the previous unsupervised algorithms implemented in Hypergim, both in runtime and in the precision of the resulting classification of the images.
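    The parallel reduce step of a random forest, majority voting over per-tree predictions evaluated concurrently, can be sketched as follows. The thread pool here merely stands in for the CUDA kernels of the actual CURFIL-based implementation:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def forest_predict(trees, pixel):
    """Evaluate each decision tree on the pixel concurrently (threads here;
    CURFIL evaluates trees on the GPU) and reduce by majority vote."""
    with ThreadPoolExecutor() as pool:
        votes = list(pool.map(lambda tree: tree(pixel), trees))
    return Counter(votes).most_common(1)[0][0]
```

    Each `tree` is any callable mapping a pixel to a class label, so trained trees can be dropped in without changing the voting logic.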

  9. Diagnostic discrepancies in retinopathy of prematurity classification

    PubMed Central

    Campbell, J. Peter; Ryan, Michael C.; Lore, Emily; Tian, Peng; Ostmo, Susan; Jonas, Karyn; Chan, R.V. Paul; Chiang, Michael F.

    2016-01-01

    Objective To identify the most common areas for discrepancy in retinopathy of prematurity (ROP) classification between experts. Design Prospective cohort study. Subjects, Participants, and/or Controls 281 infants were identified as part of a multi-center, prospective, ROP cohort study from 7 participating centers. Each site had participating ophthalmologists who provided the clinical classification after routine examination using binocular indirect ophthalmoscopy (BIO), and obtained wide-angle retinal images, which were independently classified by two study experts. Methods Wide-angle retinal images (RetCam; Clarity Medical Systems, Pleasanton, CA) were obtained from study subjects, and two experts evaluated each image using a secure web-based module. Image-based classifications for zone, stage, plus disease, overall disease category (no ROP, mild ROP, Type II or pre-plus, and Type I) were compared between the two experts, and to the clinical classification obtained by BIO. Main Outcome Measures Inter-expert image-based agreement and image-based vs. ophthalmoscopic diagnostic agreement using absolute agreement and weighted kappa statistic. Results 1553 study eye examinations from 281 infants were included in the study. Experts disagreed on the stage classification in 620/1553 (40%) of comparisons, plus disease classification (including pre-plus) in 287/1553 (18%), zone in 117/1553 (8%), and overall ROP category in 618/1553 (40%). However, agreement for presence vs. absence of type 1 disease was >95%. There were no differences between image-based and clinical classification except for zone III disease. Conclusions The most common area of discrepancy in ROP classification is stage, although inter-expert agreement for clinically-significant disease such as presence vs. absence of type 1 and type 2 disease is high. There were no differences between image-based grading and the clinical exam in the ability to detect clinically-significant disease. 
This study provides additional evidence that image-based classification of ROP reliably detects clinically significant levels of ROP with high accuracy compared to the clinical exam. PMID:27238376
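    The weighted kappa statistic used for the agreement analysis can be computed directly. Below is a generic linearly weighted kappa for two raters over ordered categories, not the study's exact software:

```python
def weighted_kappa(ratings_a, ratings_b, categories):
    """Linearly weighted kappa for two raters over ordered categories:
    1 - (observed weighted disagreement) / (chance-expected disagreement)."""
    idx = {c: i for i, c in enumerate(categories)}
    k, n = len(categories), len(ratings_a)
    # joint distribution of the two raters' classifications
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs[idx[a]][idx[b]] += 1.0 / n
    pa = [sum(row) for row in obs]                              # rater A marginals
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]   # rater B marginals
    d_obs = sum(abs(i - j) * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(abs(i - j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp
```

    Perfect agreement yields kappa of 1, and agreement no better than chance yields 0; quadratic weights would simply replace `abs(i - j)` with its square.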

  10. Inter- and intraspecific diversity in Cistus L. (Cistaceae) seeds, analysed with computer vision techniques.

    PubMed

    Lo Bianco, M; Grillo, O; Cañadas, E; Venora, G; Bacchetta, G

    2017-03-01

    This work aims to discriminate among different species of the genus Cistus, using seed parameters and following the scientific plant names included as accepted in The Plant List. We also evaluated the intraspecific phenotypic differentiation of C. creticus, through comparison of three subspecies (C. creticus subsp. creticus, C. c. subsp. eriocephalus and C. c. subsp. corsicus), as well as the interpopulation variability among five C. creticus subsp. eriocephalus populations. Seed mean weight and 137 morphocolorimetric quantitative variables, describing shape, size, colour and textural seed traits, were measured using image analysis techniques. Measured data were analysed applying step-wise linear discriminant analysis. An overall cross-validated classification performance of 80.6% was recorded at species level. With regard to C. creticus, as a case study, percentages of correct discrimination of 96.7% and 99.6% were achieved at intraspecific and interpopulation levels, respectively. In this classification model, the relevance of the colorimetric and textural descriptive features was highlighted, as well as of the seed mean weight, which was the most discriminant feature at specific and intraspecific level. These achievements prove that the image analysis system is highly diagnostic for systematic purposes and confirm that seeds in the genus Cistus have important diagnostic value. © 2016 German Botanical Society and The Royal Botanical Society of the Netherlands.
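    Cross-validated classification performance of the kind reported above can be illustrated with leave-one-out evaluation. A nearest-centroid rule stands in here for the stepwise linear discriminant analysis used in the study:

```python
def loo_accuracy(samples):
    """Leave-one-out cross-validated classification rate.
    samples: list of (feature_vector, label) pairs."""
    correct = 0
    for i, (x, y) in enumerate(samples):
        # hold out sample i, train on the rest
        train = {}
        for j, (xj, yj) in enumerate(samples):
            if j != i:
                train.setdefault(yj, []).append(xj)
        # nearest-centroid prediction (stand-in for stepwise LDA)
        cents = {c: [sum(v) / len(s) for v in zip(*s)] for c, s in train.items()}
        pred = min(cents, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(x, cents[c])))
        correct += pred == y
    return correct / len(samples)
```

    With well-separated classes the rate approaches 1.0; the 80.6% species-level figure in the study reflects genuine overlap among Cistus seed phenotypes.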

  11. Remote Sensing Image Classification Applied to the First National Geographical Information Census of China

    NASA Astrophysics Data System (ADS)

    Yu, Xin; Wen, Zongyong; Zhu, Zhaorong; Xia, Qiang; Shun, Lan

    2016-06-01

    Although image classification has been studied for almost half a century, it still has a long way to go. Researchers have gained many fruits in the image classification domain, but a large gap remains between theory and practice. However, new methods from the artificial intelligence domain are being absorbed into the image classification domain, with each field drawing on the strengths of the other to offset its weaknesses, which opens up new prospects. Networks usually play the role of a high-level language, as is seen in artificial intelligence and statistics, because networks are used to build complex models from simple components. In recent years, Bayesian networks, a class of probabilistic networks, have become a powerful data-mining technique for handling uncertainty in complex domains. In this paper, we apply Tree Augmented Naive Bayesian networks (TAN) to the texture classification of high-resolution remote sensing images and propose a new method to construct the network topology structure in terms of training accuracy based on the training samples. Since 2013, the Chinese government has run the first national geographical information census project, which mainly interprets geographical information based on high-resolution remote sensing images. Therefore, this paper applies Bayesian networks to remote sensing image classification, in order to improve image interpretation in the first national geographical information census project. In the experiment, we chose remote sensing images of Beijing. Experimental results demonstrate that TAN outperforms the Naive Bayesian Classifier (NBC) and the Maximum Likelihood Classification method (MLC) in overall classification accuracy. In addition, the proposed method can reduce the workload of field workers and improve work efficiency. Although it is time consuming, it will be an attractive and effective method for assisting the office-based operation of image interpretation.
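    As a reference point for the comparison above, the Naive Bayesian Classifier (NBC) baseline can be sketched for discrete texture features. A TAN model would additionally condition each feature on one parent feature, which is omitted here for brevity:

```python
from collections import defaultdict
from math import log

def train_nbc(samples):
    """Count label frequencies and per-feature value counts from
    (feature_tuple, label) training pairs."""
    labels = defaultdict(int)
    counts = defaultdict(int)   # (label, feature_index, value) -> count
    for feats, y in samples:
        labels[y] += 1
        for i, v in enumerate(feats):
            counts[(y, i, v)] += 1
    return labels, counts

def predict_nbc(model, feats):
    """Pick the label maximizing log P(y) + sum_i log P(x_i | y),
    with add-one smoothing (binary-valued features assumed)."""
    labels, counts = model
    total = sum(labels.values())
    best, best_lp = None, float('-inf')
    for y, ny in labels.items():
        lp = log(ny / total)
        for i, v in enumerate(feats):
            lp += log((counts[(y, i, v)] + 1) / (ny + 2))
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

    The feature names below are invented for illustration; real inputs would be quantized texture measurements from the imagery.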

  12. Classification image analysis: estimation and statistical inference for two-alternative forced-choice experiments

    NASA Technical Reports Server (NTRS)

    Abbey, Craig K.; Eckstein, Miguel P.

    2002-01-01

    We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
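    A simplified version of the classification-image estimate can be sketched as the difference of mean noise fields grouped by the observer's choice. The full 2AFC estimator also splits trials by which interval contained the signal, which is omitted here:

```python
def mean_field(fields):
    """Pixel-wise mean of a list of equal-length noise fields."""
    n = len(fields)
    return [sum(vals) / n for vals in zip(*fields)]

def classification_image(trials):
    """Estimate a classification image from (noise_field, choice) pairs:
    mean noise on interval-1 choices minus mean noise on interval-2 choices.
    Under a linear-filter observer model, this difference is proportional
    to the filter weights."""
    chose1 = [noise for noise, choice in trials if choice == 1]
    chose2 = [noise for noise, choice in trials if choice == 2]
    m1, m2 = mean_field(chose1), mean_field(chose2)
    return [a - b for a, b in zip(m1, m2)]
```

    In the toy data below, the observer's choices track only the first pixel, so the estimated image is large at pixel 0 and near zero elsewhere.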

  13. Postprocessing classification images

    NASA Technical Reports Server (NTRS)

    Kan, E. P.

    1979-01-01

    Program cleans up remote-sensing maps. It can be used with existing image-processing software. Remapped images closely resemble familiar resource information maps and can replace or supplement classification images not postprocessed by this program.
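    The kind of cleanup such a postprocessor performs can be illustrated with a 3 by 3 majority (mode) filter over the class map. This is a generic speckle-removal filter, not necessarily the program's actual algorithm:

```python
from collections import Counter

def majority_filter(classmap):
    """Replace each pixel's class by the most common class label in its
    3x3 neighborhood, removing isolated misclassified pixels."""
    h, w = len(classmap), len(classmap[0])
    out = [row[:] for row in classmap]
    for y in range(h):
        for x in range(w):
            neigh = [classmap[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = Counter(neigh).most_common(1)[0][0]
    return out
```

    A single misclassified pixel surrounded by a uniform class is relabeled, which is why remapped images resemble hand-drawn resource maps more closely.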

  14. Integrated Analysis Platform: An Open-Source Information System for High-Throughput Plant Phenotyping

    PubMed Central

    Klukas, Christian; Chen, Dijun; Pape, Jean-Michel

    2014-01-01

    High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantify plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. Due to the huge amount of data to manage, we utilized a common data structure for efficient storage and organization of both input data and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment that contains 33 maize (Zea mays ‘Fernandez’) plants, which were grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate highly with our manually measured data, with correlation coefficients of up to 0.98 and 0.95, respectively. In summary, IAP provides a comprehensive set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. PMID:24760818
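    Trait validation of the kind reported above (automated versus manual measurements) can be reproduced in principle with a plain Pearson correlation:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. automatically computed vs. manually measured trait values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

    Values near 1.0, like the 0.98 and 0.95 reported for digital volume and leaf count, indicate the automated pipeline tracks manual measurement closely.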

  15. Cupping artifact correction and automated classification for high-resolution dedicated breast CT images.

    PubMed

    Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei

    2012-10-01

    To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors' classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors' automatic classification and manual segmentation were 91.6% ± 2.0%. A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution.
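    The Dice overlap ratio used for evaluation is straightforward to compute from two binary masks represented as sets of pixel coordinates:

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary masks given as sets of pixel
    coordinates: 2|A ∩ B| / (|A| + |B|)."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))
```

    Identical masks score 1.0 and disjoint masks score 0.0; the 91.6% glandular-tissue figure above means the automated and manual segmentations overlap almost completely.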

  16. Cupping artifact correction and automated classification for high-resolution dedicated breast CT images

    PubMed Central

    Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei

    2012-01-01

    Purpose: To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. Methods: The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors’ classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. Results: The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors’ automatic classification and manual segmentation were 91.6% ± 2.0%. Conclusions: A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution. PMID:23039675

  17. Comprehensive 4-stage categorization of bicuspid aortic valve leaflet morphology by cardiac MRI in 386 patients.

    PubMed

    Murphy, I G; Collins, J; Powell, A; Markl, M; McCarthy, P; Malaisrie, S C; Carr, J C; Barker, A J

    2017-08-01

    Bicuspid aortic valve (BAV) disease is heterogeneous and related to valve dysfunction and aortopathy. Appropriate follow-up and surveillance of patients with BAV may depend on correct phenotypic categorization. There are multiple classification schemes; however, a need exists to comprehensively capture commissure fusion, leaflet asymmetry, and valve orifice orientation. Our aim was to develop a BAV classification scheme for use at MRI to ascertain the frequency of different phenotypes and the consistency of BAV classification. The BAV classification scheme builds on the Sievers surgical BAV classification, adding valve orifice orientation, partial leaflet fusion and leaflet asymmetry. A single observer successfully applied this classification to 386 of 398 cardiac MRI studies. Repeatability of categorization was ascertained with intraobserver and interobserver kappa scores. Sensitivity and specificity of MRI findings were determined from operative reports, where available. Fusion of the right and left leaflets accounted for over half of all cases. Partial leaflet fusion was seen in 46% of patients. Good interobserver agreement was seen for orientation of the valve opening (κ = 0.90), type (κ = 0.72) and presence of partial fusion (κ = 0.83, p < 0.0001). Retrospective review of operative notes showed sensitivity and specificity for orientation (90%, 93%) and for Sievers type (73%, 87%). The proposed BAV classification schema was assessed by MRI for its reliability to classify valve morphology, in addition to illustrating the wide heterogeneity of leaflet size, orifice orientation, and commissural fusion. The classification may be helpful in further understanding the relationship between valve morphology, flow derangement and aortopathy.

  18. In the (sub)tropics allergic rhinitis and its impact on asthma classification of allergic rhinitis is more useful than perennial-seasonal classification.

    PubMed

    Larenas-Linnemann, Désirée; Michels, Alexandra; Dinger, Hanna; Arias-Cruz, Alfredo; Ambriz Moreno, Marichuy; Bedolla Barajas, Martin; Javier, Ruth Cerino; Cid Del Prado, Maria de la Luz; Cruz Moreno, Manuel Alejandro; Vergara, Laura Diego; García Almaráz, Roberto; García-Cobas, Cecilia Y; Garcia Imperial, Daniel Alberto; Muñoz, Rosa Garcia; Hernandez Colín, Dante; Linares Zapien, Francisco Javier; Luna Pech, Jorge Agustín; Matta Campos, Juan Jose; Martinez Jimenez, Norma; Avalos, Miguel Medina; Medina Hernandez, Alejandra; Maldonado, Albero Monteverde; López, Doris Nereida; Pizano Nazara, Luis Julian; Sanchez, Emanuel Ramirez; Ramos López, José Domingo; Rodriguez-Pérez, Noel; Rodriguez Ortiz, Pablo G; Shah-Hosseini, Kijawasch; Mösges, Ralph

    2014-01-01

    Two different allergic rhinitis (AR) symptom phenotype classifications exist. Treatment recommendations are based on intermittent-persistent (INT-PER) cataloging, but clinical trials still use the former seasonal AR-perennial AR (SAR-PAR) classification. This study was designed to describe how INT-PER, mild-moderate/severe and SAR-PAR of patients seen by allergists are distributed over the different climate zones in a (sub)tropical country and how these phenotypes relate to allergen sensitization patterns. Six climate zones throughout Mexico were determined, based on National Geographic Institute (Instituto Nacional de Estadística y Geografía) data. Subsequent AR patients (2-68 years old) underwent a blinded, standardized skin-prick test and filled out a validated questionnaire phenotyping AR. Five hundred twenty-nine subjects participated in this study. In the tropical zone with 87% house-dust mite sensitization, INT (80.9%; p < 0.001) and PAR (91%; p = 0.04) were more frequent than in the subtropics. In the central high-pollen areas, there was less moderate/severe AR (65.5%; p < 0.005). Frequency of comorbid asthma showed a clear north-south gradient, from 25% in the dry north to 59% in the tropics (p < 0.005). No differences exist in AR cataloging among patients with different sensitization patterns, with two minor exceptions (more PER in tree sensitized and more PAR in mold positives; p < 0.05). In a (sub)tropical country the SAR-PAR classification seems of limited value and bears poor relation with the INT-PER classification. INT is more frequent in the tropical zone. Because PER has been shown to relate to AR severity, clinical trials should select patients based on INT-PER combined with the severity cataloging because these make for a better treatment guide than SAR-PAR.

  19. "Relative CIR": an image enhancement and visualization technique

    USGS Publications Warehouse

    Fleming, Michael D.

    1993-01-01

    Many techniques exist to spectrally and spatially enhance digital multispectral scanner data. One technique enhances an image while keeping the colors as they would appear in a color-infrared (CIR) image. This "relative CIR" technique generates an image that is both spectrally and spatially enhanced, while displaying a maximum range of colors. The technique enables an interpreter to visualize either spectral or land cover classes by their relative CIR characteristics. A relative CIR image is generated by developing spectral statistics for each class in the classification and then, using a nonparametric approach for spectral enhancement, ranking the class means for each band. A 3 by 3 pixel smoothing filter is applied to the classification for spatial enhancement, and the classes are mapped to the representative rank for each band. Practical applications of the technique include displaying an image classification product as a CIR image that was not derived directly from a spectral image, visualizing how a land cover classification would look as a CIR image, and displaying a spectral classification or intermediate product that will be used to label spectral classes.
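    The per-band rank mapping at the heart of the technique can be sketched as follows; the class names and mean values are invented for illustration:

```python
def relative_cir_band(class_means, classmap):
    """For one band: rank the class mean values, then map each pixel's
    class label to its rank. Ranks spread the classes across the full
    display range (the nonparametric spectral enhancement step)."""
    order = sorted(class_means, key=class_means.get)
    rank = {c: i for i, c in enumerate(order)}
    return [[rank[c] for c in row] for row in classmap]
```

    Running this once per band (with that band's class means) and scaling the ranks to display values yields the three channels of the relative CIR image.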

  20. Hyperspectral Image Enhancement and Mixture Deep-Learning Classification of Corneal Epithelium Injuries.

    PubMed

    Noor, Siti Salwa Md; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-11-16

    In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues show similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of the data into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images, acquired without the application of eye staining, were used. Three image feature extraction approaches were applied for image classification: (i) image feature classification from the histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image feature classification using deep-learning Convolutional Neural Networks (CNNs) only; and (iii) the combined classification of CNNs and SVM-Linear. The performance results indicate that our chosen image features from the histogram and length-scale parameter were able to classify with up to 100% accuracy, particularly for the CNNs and CNNs-SVM approaches, when employing 80% of the data sample for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability.
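    The Gaussian radial basis function kernel behind the SVM-GRBF classifier is simple to state; the gamma value below is an arbitrary illustration, not the study's tuned parameter:

```python
from math import exp

def grbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel K(x, y) = exp(-gamma * ||x - y||^2).
    The SVM decision function is a weighted sum of such kernel values
    between a test sample and the support vectors."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return exp(-gamma * sq)
```

    The kernel equals 1 for identical reflectance signatures and decays toward 0 as signatures diverge, which is what lets the SVM separate subtly different spectra.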

  1. Optimizing research in symptomatic uterine fibroids with development of a computable phenotype for use with electronic health records.

    PubMed

    Hoffman, Sarah R; Vines, Anissa I; Halladay, Jacqueline R; Pfaff, Emily; Schiff, Lauren; Westreich, Daniel; Sundaresan, Aditi; Johnson, La-Shell; Nicholson, Wanda K

    2018-06-01

    Women with symptomatic uterine fibroids can report a myriad of symptoms, including pain, bleeding, infertility, and psychosocial sequelae. Optimizing fibroid research requires the ability to enroll populations of women with image-confirmed symptomatic uterine fibroids. Our objective was to develop an electronic health record-based algorithm to identify women with symptomatic uterine fibroids for a comparative effectiveness study of medical or surgical treatments on quality-of-life measures. Using an iterative process and text-mining techniques, an effective computable phenotype algorithm, composed of demographics, and clinical and laboratory characteristics, was developed with reasonable performance. Such algorithms provide a feasible, efficient way to identify populations of women with symptomatic uterine fibroids for the conduct of large traditional or pragmatic trials and observational comparative effectiveness studies. Symptomatic uterine fibroids, due to menorrhagia, pelvic pain, bulk symptoms, or infertility, are a source of substantial morbidity for reproductive-age women. Comparing Treatment Options for Uterine Fibroids is a multisite registry study to compare the effectiveness of hormonal or surgical fibroid treatments on women's perceptions of their quality of life. Electronic health record-based algorithms are able to identify large numbers of women with fibroids, but additional work is needed to develop electronic health record algorithms that can identify women with symptomatic fibroids to optimize fibroid research. We sought to develop an efficient electronic health record-based algorithm that can identify women with symptomatic uterine fibroids in a large health care system for recruitment into large-scale observational and interventional research in fibroid management. We developed and assessed the accuracy of 3 algorithms to identify patients with symptomatic fibroids using an iterative approach. 
The data source was the Carolina Data Warehouse for Health, a repository for the health system's electronic health record data. In addition to International Classification of Diseases, Ninth Revision diagnosis and procedure codes and clinical characteristics, text data-mining software was used to derive information from imaging reports to confirm the presence of uterine fibroids. Results of each algorithm were compared with expert manual review to calculate the positive predictive values for each algorithm. Algorithm 1 was composed of the following criteria: (1) age 18-54 years; (2) either ≥1 International Classification of Diseases, Ninth Revision diagnosis codes for uterine fibroids or mention of fibroids using text-mined key words in imaging records or documents; and (3) no International Classification of Diseases, Ninth Revision or Current Procedural Terminology codes for hysterectomy and no reported history of hysterectomy. The positive predictive value was 47% (95% confidence interval 39-56%). Algorithm 2 required ≥2 International Classification of Diseases, Ninth Revision diagnosis codes for fibroids and positive text-mined key words and had a positive predictive value of 65% (95% confidence interval 50-79%). In algorithm 3, further refinements included ≥2 International Classification of Diseases, Ninth Revision diagnosis codes for fibroids on separate outpatient visit dates, the exclusion of women who had a positive pregnancy test within 3 months of their fibroid-related visit, and exclusion of incidentally detected fibroids during prenatal or emergency department visits. Algorithm 3 achieved a positive predictive value of 76% (95% confidence interval 71-81%). An electronic health record-based algorithm is capable of identifying cases of symptomatic uterine fibroids with moderate positive predictive value and may be an efficient approach for large-scale study recruitment. Copyright © 2018 Elsevier Inc. All rights reserved.
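    The positive predictive values and confidence intervals reported for each algorithm can be checked with a normal-approximation (Wald) interval. The study does not state which interval method it used, so this is only an approximation, and the sample size below is invented:

```python
def ppv_with_ci(true_pos, flagged, z=1.96):
    """Positive predictive value of a computable phenotype (confirmed
    cases / algorithm-flagged cases) with a Wald 95% confidence interval,
    clipped to [0, 1]."""
    p = true_pos / flagged
    half = z * (p * (1 - p) / flagged) ** 0.5
    return p, max(0.0, p - half), min(1.0, p + half)
```

    For example, 76 confirmed cases among 100 flagged charts gives a PPV of 0.76 with an interval whose width shrinks as more charts are manually reviewed.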

  2. Training sample selection based on self-training for liver cirrhosis classification using ultrasound images

    NASA Astrophysics Data System (ADS)

    Fujita, Yusuke; Mitani, Yoshihiro; Hamamoto, Yoshihiko; Segawa, Makoto; Terai, Shuji; Sakaida, Isao

    2017-03-01

    Ultrasound imaging is a popular and non-invasive tool used in the diagnosis of liver disease. Cirrhosis is a chronic liver disease and it can advance to liver cancer. Early detection and appropriate treatment are crucial to prevent liver cancer. However, ultrasound image analysis is very challenging because of the low signal-to-noise ratio of ultrasound images. To achieve higher classification performance, the selection of training regions of interest (ROIs) is very important, because it affects classification accuracy. The purpose of our study is to detect cirrhosis with high accuracy using liver ultrasound images. In our previous work, training ROI selection by MILBoost and multiple-ROI classification based on the product rule were proposed to achieve high classification performance. In this article, we propose a self-training method to select training ROIs effectively. Experiments were performed to evaluate the effect of self-training, using both manually selected and automatically selected ROIs. Experimental results show that self-training with manually selected ROIs achieved higher classification performance than the other approaches, including our conventional methods. Manual ROI definition and sample selection are thus important for improving classification accuracy in cirrhosis detection using ultrasound images.
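    The general self-training loop for enlarging the training set can be sketched as follows. The nearest-centroid rule and the one-sample-per-round schedule are illustrative assumptions, not the article's exact classifier or schedule:

```python
def centroid(samples):
    """Component-wise mean of a list of feature vectors."""
    n = len(samples)
    return [sum(vals) / n for vals in zip(*samples)]

def self_train(labeled, unlabeled, rounds=3):
    """Self-training sketch: each round, classify the unlabeled ROIs and
    move the most confidently classified one (smallest distance to any
    class centroid) into the labeled training set."""
    labeled = {c: list(s) for c, s in labeled.items()}
    pool = [list(x) for x in unlabeled]
    for _ in range(min(rounds, len(pool))):
        cents = {c: centroid(s) for c, s in labeled.items()}
        def dist(x, c):
            return sum((a - b) ** 2 for a, b in zip(x, cents[c]))
        best = min(pool, key=lambda x: min(dist(x, c) for c in cents))
        label = min(cents, key=lambda c: dist(best, c))
        labeled[label].append(best)
        pool.remove(best)
    return labeled
```

    The feature vectors below are one-dimensional toys; real inputs would be texture features extracted from ultrasound ROIs.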

  3. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    NASA Astrophysics Data System (ADS)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.
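    The Adaboost step of assembling the 15 weak BP-network outputs into a strong classifier can be sketched as a weighted vote; the outputs and error rates below are illustrative, not the paper's trained values:

```python
from math import log

def adaboost_vote(weak_outputs, errors):
    """Combine weak classifier outputs (+1 / -1) with Adaboost weights
    alpha = 0.5 * ln((1 - err) / err): lower-error classifiers vote with
    more weight. Returns the sign of the weighted sum."""
    score = 0.0
    for h, err in zip(weak_outputs, errors):
        alpha = 0.5 * log((1 - err) / err)
        score += alpha * h
    return 1 if score >= 0 else -1
```

    In the paper, the `h` values would be the 15 BP networks' outputs, and the Map/Reduce tasks parallelize their training and evaluation over a Hadoop cluster.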

  4. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.

  5. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    PubMed Central

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520

  6. Image search engine with selective filtering and feature-element-based classification

    NASA Astrophysics Data System (ADS)

    Li, Qing; Zhang, Yujin; Dai, Shengyang

    2001-12-01

    With the growth of the Internet and of storage capability in recent years, images have become a widespread information format on the World Wide Web. However, it has become increasingly hard to search for images of interest, and an effective image search engine for the WWW needs to be developed. In this paper we propose a selective filtering process and a novel approach to image classification based on feature elements, both used in the image search engine we developed for the WWW. First, a selective filtering process embedded in a general web crawler filters out meaningless GIF-format images, using two parameters that can be obtained easily. Our classification approach then extracts feature elements from images instead of feature vectors. Compared with feature vectors, feature elements can better capture the visual meaning of an image according to human subjective perception. Unlike traditional image classification methods, our feature-element-based approach does not calculate distances between vectors in a feature space; instead, it seeks associations between feature elements and the class attribute of the image. Experiments are presented to show the efficiency of the proposed approach.

  7. Polarimetric SAR image classification based on discriminative dictionary learning model

    NASA Astrophysics Data System (ADS)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learned overcomplete dictionaries have shown great potential for solving it. The overcomplete dictionary plays an important role in PolSAR image classification; however, in the complex scenes of PolSAR images, features shared by different classes weaken the discrimination of the learned dictionary and thereby degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model that enhances the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and well suited for PolSAR classification.
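    As an illustration of the sparse-representation machinery the authors build on, the sketch below runs plain matching pursuit over a tiny hand-made overcomplete dictionary; the paper's discriminative dictionary-learning model itself is not reproduced here.

```python
def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedy sparse coding: repeatedly pick the unit-norm atom most
    correlated with the residual and subtract its contribution."""
    residual = list(signal)
    code = {}
    for _ in range(n_atoms):
        # atom index with the largest |<residual, atom>|
        best = max(range(len(dictionary)),
                   key=lambda j: abs(sum(r * a for r, a in zip(residual, dictionary[j]))))
        coef = sum(r * a for r, a in zip(residual, dictionary[best]))
        code[best] = code.get(best, 0.0) + coef
        residual = [r - coef * a for r, a in zip(residual, dictionary[best])]
    return code, residual

# Overcomplete dictionary: 3 unit-norm atoms in R^2
d = [(1.0, 0.0), (0.0, 1.0), (0.7071, 0.7071)]
code, residual = matching_pursuit([3.0, 1.0], d)
print(code)      # → {0: 3.0, 1: 1.0}, a 2-sparse code for the signal
print(residual)  # remaining reconstruction error, here essentially zero
```

    A discriminative dictionary, as proposed in the paper, is one trained so that atoms selected for samples of different classes overlap as little as possible.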

  8. Random forests-based differential analysis of gene sets for gene expression data.

    PubMed

    Hsueh, Huey-Miin; Zhou, Da-Wei; Tsai, Chen-An

    2013-04-10

    In DNA microarray studies, gene-set analysis (GSA) has become a focus of gene expression data analysis. GSA utilizes the gene expression profiles of functionally related gene sets in Gene Ontology (GO) categories or a priori defined biological classes to assess the significance of gene sets associated with clinical outcomes or phenotypes. Many statistical approaches have been proposed to determine whether such functionally related gene sets are differentially expressed (enriched and/or depleted) across phenotypes. However, little attention has been given to the discriminatory power of gene sets and to the classification of patients. In this study, we propose a method of gene-set analysis in which gene sets are used to develop classifications of patients based on the Random Forest (RF) algorithm. The corresponding empirical p-value of an observed out-of-bag (OOB) error rate of the classifier is introduced to identify differentially expressed gene sets using an adequate resampling method. In addition, we discuss the impacts and correlations of genes within each gene set based on the measures of variable importance in the RF algorithm. Significant classifications are reported and visualized together with the underlying gene sets and their contribution to the phenotypes of interest. Numerical studies using both synthesized data and a series of publicly available gene expression data sets are conducted to evaluate the performance of the proposed methods. Compared with other hypothesis-testing approaches, our proposed methods are reliable and successful in identifying enriched gene sets and in discovering the contributions of genes within a gene set. The classification results of identified gene sets can provide a valuable alternative to gene-set testing for revealing unknown, biologically relevant classes of samples or patients. In summary, our proposed method allows one to simultaneously assess the discriminatory ability of gene sets and the importance of genes for interpretation of data in complex biological systems. The classifications of biologically defined gene sets can reveal the underlying interactions of gene sets associated with the phenotypes and provide an insightful complement to conventional gene-set analyses. Copyright © 2012 Elsevier B.V. All rights reserved.
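    The empirical p-value attached to an observed OOB error rate can be sketched as a standard resampling computation: compare the observed error against errors obtained after permuting the phenotype labels. The numbers below are invented for illustration.

```python
def empirical_p_value(observed_error, permuted_errors):
    """Fraction of label-permuted OOB error rates at least as extreme
    (i.e. as small) as the observed one; the +1 terms avoid a zero p-value."""
    b = sum(1 for e in permuted_errors if e <= observed_error)
    return (b + 1) / (len(permuted_errors) + 1)

# Observed OOB error for one gene set vs. errors under 9 label permutations
p = empirical_p_value(0.10, [0.45, 0.50, 0.40, 0.55, 0.48, 0.05, 0.52, 0.47, 0.44])
print(round(p, 2))  # → 0.2
```

    A small p-value indicates that the gene set classifies the phenotype far better than chance, i.e. it is differentially expressed in the discriminatory sense the paper proposes.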

  9. Evaluation of image deblurring methods via a classification metric

    NASA Astrophysics Data System (ADS)

    Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo

    2012-09-01

    The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.

  10. Semantic classification of business images

    NASA Astrophysics Data System (ADS)

    Erol, Berna; Hull, Jonathan J.

    2006-01-01

    Digital cameras are becoming increasingly common for capturing information in business settings. In this paper, we describe a novel method for classifying images into the following semantic classes: document, whiteboard, business card, slide, and regular images. Our method is based on combining low-level image features, such as text color, layout, and handwriting features with high-level OCR output analysis. Several Support Vector Machine Classifiers are combined for multi-class classification of input images. The system yields 95% accuracy in classification.

  11. Development of a classification method for a crack on a pavement surface images using machine learning

    NASA Astrophysics Data System (ADS)

    Hizukuri, Akiyoshi; Nagata, Takeshi

    2017-03-01

    The purpose of this study is to develop a machine-learning method for classifying cracks in pavement surface images, in order to reduce maintenance costs. Our database consists of 3500 pavement surface images, comprising 800 crack and 2700 normal images. The pavement surface images are first decomposed into several sub-images using a discrete wavelet transform (DWT). We then calculate a wavelet sub-band histogram from each sub-image at each decomposition level. A support vector machine (SVM) trained on the computed wavelet sub-band histograms is employed to distinguish between crack and normal pavement surface images. The accuracies of the proposed classification method are 85.3% for crack images and 84.4% for normal images. The proposed classification method achieved high performance and would therefore be useful in maintenance inspection.
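    A minimal sketch of this feature pipeline, assuming an unnormalized Haar wavelet (the paper does not name the wavelet used): one DWT level splits the image into four sub-bands, and a histogram of sub-band coefficients becomes the feature vector fed to the SVM.

```python
def haar2d_level1(img):
    """One level of a 2-D Haar DWT (unnormalized averages/differences):
    returns the LL, LH, HL, HH sub-images of an even-sized image."""
    lo = [[(r[2*i] + r[2*i+1]) / 2 for i in range(len(r) // 2)] for r in img]
    hi = [[(r[2*i] - r[2*i+1]) / 2 for i in range(len(r) // 2)] for r in img]
    def cols(m, f):  # same pairwise transform down the columns
        return [[f(m[2*i][j], m[2*i+1][j]) for j in range(len(m[0]))]
                for i in range(len(m) // 2)]
    avg = lambda a, b: (a + b) / 2
    dif = lambda a, b: (a - b) / 2
    return cols(lo, avg), cols(lo, dif), cols(hi, avg), cols(hi, dif)

def subband_histogram(band, bins, lo, hi):
    """Histogram of sub-band coefficients: the texture feature for the SVM."""
    h = [0] * bins
    w = (hi - lo) / bins
    for row in band:
        for v in row:
            h[min(int((v - lo) / w), bins - 1)] += 1
    return h

img = [[10, 10, 80, 80],
       [10, 10, 80, 80],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
ll, lh, hl, hh = haar2d_level1(img)
print(ll)                                 # smoothed (approximation) image
print(subband_histogram(hl, 2, -40, 40))  # horizontal-detail histogram
```

    In practice the decomposition is repeated on the LL band for several levels, and the per-level histograms are concatenated into one feature vector.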

  12. Automated simultaneous multiple feature classification of MTI data

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Theiler, James P.; Balick, Lee K.; Pope, Paul A.; Szymanski, John J.; Perkins, Simon J.; Porter, Reid B.; Brumby, Steven P.; Bloch, Jeffrey J.; David, Nancy A.; Galassi, Mark C.

    2002-08-01

    Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.

  13. A Classification of Remote Sensing Image Based on Improved Compound Kernels of Svm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina

    SVM classification, developed from statistical learning theory, achieves high accuracy on remote sensing (RS) imagery even with a small number of training samples, which makes SVM-based RS classification attractive. The traditional RS classification method combines visual interpretation with computer classification; SVM-based methods improve considerably on its accuracy while saving much of the labor and time spent interpreting images and collecting training samples. Kernel functions play an important part in the SVM algorithm. The proposed method uses an improved compound kernel function and therefore achieves higher classification accuracy on RS images. Moreover, the compound kernel improves the generalization and learning ability of the kernel.
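    The paper does not spell out its compound kernel; a common construction, sketched below, is a convex combination of an RBF kernel (local fitting ability) and a polynomial kernel (global generalization). Any convex combination of Mercer kernels is again a Mercer kernel, so the SVM machinery applies unchanged.

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian RBF kernel: strong local fitting ability."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def poly(x, y, degree=2, c=1.0):
    """Polynomial kernel: better global generalization."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def compound_kernel(x, y, lam=0.7):
    """Convex combination of the two kernels; lam trades locality
    against generalization (the weights here are arbitrary)."""
    return lam * rbf(x, y) + (1 - lam) * poly(x, y)

x, y = (1.0, 0.0), (0.0, 1.0)
print(round(compound_kernel(x, y), 4))  # → 0.5575
```

    In an SVM this function simply replaces the single kernel evaluation; the mixing weight can be tuned by cross-validation like any other hyperparameter.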

  14. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work consists in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  15. CP-CHARM: segmentation-free image classification made accessible.

    PubMed

    Uhlmann, Virginie; Singh, Shantanu; Carpenter, Anne E

    2016-01-27

    Automated classification using machine learning often relies on features derived from segmenting individual objects, which can be difficult to automate. WND-CHARM is a previously developed classification algorithm in which features are computed on the whole image, thereby avoiding the need for segmentation. The algorithm obtained encouraging results but requires considerable computational expertise to execute. Furthermore, some benchmark sets have been shown to be subject to confounding artifacts that overestimate classification accuracy. We developed CP-CHARM, a user-friendly image-based classification algorithm inspired by WND-CHARM in (i) its ability to capture a wide variety of morphological aspects of the image, and (ii) the absence of requirement for segmentation. In order to make such an image-based classification method easily accessible to the biological research community, CP-CHARM relies on the widely-used open-source image analysis software CellProfiler for feature extraction. To validate our method, we reproduced WND-CHARM's results and ensured that CP-CHARM obtained comparable performance. We then successfully applied our approach on cell-based assay data and on tissue images. We designed these new training and test sets to reduce the effect of batch-related artifacts. The proposed method preserves the strengths of WND-CHARM - it extracts a wide variety of morphological features directly on whole images thereby avoiding the need for cell segmentation, but additionally, it makes the methods easily accessible for researchers without computational expertise by implementing them as a CellProfiler pipeline. It has been demonstrated to perform well on a wide range of bioimage classification problems, including on new datasets that have been carefully selected and annotated to minimize batch effects. This provides for the first time a realistic and reliable assessment of the whole image classification strategy.

  16. Effective Sequential Classifier Training for SVM-Based Multitemporal Remote Sensing Image Classification

    NASA Astrophysics Data System (ADS)

    Guo, Yiqing; Jia, Xiuping; Paull, David

    2018-06-01

    The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the number of training samples required to train the classifier for an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position using the current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies than two state-of-the-art model transfer algorithms. When training data were insufficient, the overall classification accuracy of the incoming image improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with that obtained without assistance from previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
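    The classifier-prediction step can be illustrated with a simple linear extrapolation of an SVM weight vector over acquisition dates; the vectors below are invented, and the subsequent fine-tuning on current samples is omitted.

```python
def predict_classifier(w_prev2, w_prev1):
    """Linearly extrapolate a classifier's weight vector from its two most
    recent positions, giving a rough starting classifier for the new image
    (to be fine-tuned on the few labelled samples available there)."""
    return [w1 + (w1 - w2) for w2, w1 in zip(w_prev2, w_prev1)]

# A separating hyperplane drifting steadily across acquisition dates
w_t1 = [0.8, -0.2, 1.0]   # classifier at time t-2
w_t2 = [0.9, -0.1, 1.1]   # classifier at time t-1
print(predict_classifier(w_t1, w_t2))  # rough classifier for time t
```

    The point of the prediction is purely economical: starting near the right hyperplane means far fewer new ground-truth samples are needed to reach a usable classifier.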

  17. Classification of malignant and benign liver tumors using a radiomics approach

    NASA Astrophysics Data System (ADS)

    Starmans, Martijn P. A.; Miclea, Razvan L.; van der Voort, Sebastian R.; Niessen, Wiro J.; Thomeer, Maarten G.; Klein, Stefan

    2018-03-01

    Correct diagnosis of the liver tumor phenotype is crucial for treatment planning, especially the distinction between malignant and benign lesions. Clinical practice includes manual scoring of the tumors on Magnetic Resonance (MR) images by a radiologist. As this is challenging and subjective, it is often followed by a biopsy. In this study, we propose a radiomics approach as an objective and non-invasive alternative for distinguishing between malignant and benign phenotypes. T2-weighted (T2w) MR sequences of 119 patients from multiple centers were collected. We developed an efficient semi-automatic segmentation method, which was used by a radiologist to delineate the tumors. Within these regions, features quantifying tumor shape, intensity, texture, heterogeneity and orientation were extracted. Patient characteristics and semantic features were added, for a total of 424 features. Classification was performed using Support Vector Machines (SVMs). The performance was evaluated using internal random-split cross-validation. On the training set within each iteration, feature selection and hyperparameter optimization were performed; to this end, another cross-validation was performed by splitting the training sets into training and validation parts. The optimal settings were evaluated on the independent test sets. Manual scoring by a radiologist was also performed. The radiomics approach resulted in 95% confidence intervals of [0.75, 0.92] for the AUC, [0.76, 0.96] for specificity and [0.52, 0.82] for sensitivity. These approach the performance of the radiologist, which was an AUC of 0.93, a specificity of 0.70 and a sensitivity of 0.93. Hence, radiomics has the potential to predict liver tumor benignity in an objective and non-invasive manner.

  18. Sporadic periventricular nodular heterotopia: Classification, phenotype and correlation with Filamin A mutations.

    PubMed

    Liu, Wenyu; Yan, Bo; An, Dongmei; Xiao, Jiahe; Hu, Fayun; Zhou, Dong

    2017-07-01

    The purpose of this study was to better delineate the clinical spectrum of periventricular nodular heterotopia (PNH) in a large patient population after long-term follow-up. Specifically, this study aimed to relate PNH subtypes to clinical or epileptic outcomes, epileptic discharges and underlying Filamin A (FLNA) mutations by analyzing anatomical features. The study included 100 patients with radiologically confirmed nodular heterotopia. Patients' FLNA gene sequences and medical records were analyzed. Two-sided chi-square tests and Fisher's exact test were used to assess associations between the distribution of PNHs and specific clinical features. Based on imaging data, patients were subdivided into three groups: (a) classical (bilateral frontal and body, n=41 patients), (b) bilateral asymmetrical or posterior (n=16) and (c) unilateral heterotopia (n=43). Most patients with classical heterotopia were females (P=0.033) and were likely to have arachnoid cysts (P=0.025) and cardiac abnormalities (P=0.041), but were mostly seizure-free. Additionally, hippocampal abnormalities (P=0.022), neurological deficits (P=0.028) and cerebellar abnormalities (P=0.005) were more common in patients with bilateral asymmetrical heterotopia. Patients with unilateral heterotopia were prone to develop refractory epilepsy (P=0.041). FLNA mutations were identified in 8 patients. Each group's distinctive genetic mutations, epileptic discharge patterns and overall clinical outcomes confirm that the proposed classification system is reliable. These findings could not only be an indicator of a more severe morphological and clinical phenotype, but could also have clinical implications with respect to epilepsy management and the optimization of therapeutic options. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Inconsistency of phenotypic and genomic characteristics of Campylobacter fetus subspecies requires re-evaluation of current diagnostics

    USDA-ARS?s Scientific Manuscript database

    Classification of the Campylobacter fetus subspecies fetus and venerealis was first described in 1959 and was based on the source of isolation (intestinal vs genital) and the ability of the strains to proliferate in cows. Two phenotypic assays (1% glycine tolerance and H2S production) were described...

  20. A classification model of Hyperion image base on SAM combined decision tree

    NASA Astrophysics Data System (ADS)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high-dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space. At the same time, as the input dimensionality increases, the hypothesis space grows exponentially, which makes classification performance highly unreliable. Classification of hyperspectral images is therefore challenging for traditional classification algorithms, and new algorithms have to be developed for hyperspectral data. The Spectral Angle Mapper (SAM) is a physically based spectral classification that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the SAM threshold must be defined manually, and the classification precision depends on how reasonably this threshold is chosen. To resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. The model automatically chooses an appropriate SAM threshold based on analysis of field spectra, and thereby improves the classification precision of SAM. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and effectively eliminates the disturbance and influence of unwanted objects, so as to improve the classification precision. Compared with maximum likelihood classification based on field survey data, the classification precision of this model is 9.9% higher.
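    The SAM rule itself is compact: compute the angle between a pixel spectrum and each reference spectrum, assign the class of the smallest angle, and reject the pixel when no angle falls below the threshold. A minimal sketch with made-up three-band spectra:

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two spectra treated as n-dimensional vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def sam_classify(pixel, references, threshold):
    """Assign the reference class with the smallest angle, or 'unclassified'
    when even the best angle exceeds the threshold."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in references.items()}
    best = min(angles, key=angles.get)
    return best if angles[best] <= threshold else "unclassified"

# Hypothetical reference spectra for two of the paper's cover types
refs = {"limestone": (0.9, 0.8, 0.7), "forest": (0.2, 0.6, 0.3)}
print(sam_classify((0.85, 0.75, 0.72), refs, threshold=0.1))  # → limestone
```

    The paper's contribution sits in the last argument: instead of fixing `threshold` by hand, it is chosen automatically from field spectra via a decision tree.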

  1. A multiple-point spatially weighted k-NN method for object-based classification

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.

    2016-10-01

    Object-based classification, commonly referred to as object-based image analysis (OBIA), is now commonly regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.
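    The spirit of the MPk-NN weighting can be illustrated by letting a class prior, standing in here for the multiple-point probabilities borrowed from the training image, rescale the k-NN votes (toy data; not the authors' estimator):

```python
from collections import Counter

def weighted_knn(neighbors, prior):
    """k-NN vote in which each class's vote count is weighted by a spatial
    prior probability (a stand-in for multiple-point statistics)."""
    votes = Counter(neighbors)
    scores = {c: votes.get(c, 0) * prior.get(c, 0.0) for c in prior}
    return max(scores, key=scores.get)

# Five spectral nearest neighbours vote 3:2 for 'urban', but the spatial
# pattern around the object strongly suggests 'water'
neighbors = ["urban", "urban", "urban", "water", "water"]
prior = {"urban": 0.2, "water": 0.8}
print(weighted_knn(neighbors, prior))  # → water
```

    The effect is that spectral evidence (the votes) and spatial-structure evidence (the prior) are fused in one decision, which is how the MPk-NN improves on plain k-NN.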

  2. A probabilistic approach to segmentation and classification of neoplasia in uterine cervix images using color and geometric features

    NASA Astrophysics Data System (ADS)

    Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron

    2005-04-01

    Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification based on statistical approaches of varying complexity are found in the literature. However, the design of an efficient and automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix. NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and for the automatic identification and classification of regions exhibiting mosaicism and punctation. The success of such algorithms depends primarily on the selection of relevant features representing the region of interest. We present statistical classification and segmentation algorithms based on color and geometric features that yield excellent identification of the regions of interest. The distinct classification of the mosaic regions from the non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of the uterine cervix and have the potential to develop into an image-based screening tool for cervical cancer.

  3. Application of Convolutional Neural Network in Classification of High Resolution Agricultural Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.

    2017-09-01

    With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images is of significant value for the management and estimation of agriculture. Due to the complexity and fragmentation of features and their surroundings at high resolution, the accuracy of traditional classification methods has not been able to meet the standard of agricultural problems. In this context, this paper proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, through training and testing the CNN with a MATLAB deep-learning toolbox, crop classification finally achieved a correct rate of 99.66% after gradual parameter tuning during training. By improving the accuracy of image classification and image recognition, such applications of CNN provide a reference for the use of remote sensing in PA.

  4. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio

    2008-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
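    The entropy texture measure mentioned above is the Shannon entropy of the grey-level histogram within a moving window; a minimal sketch (tiny 3 × 3 windows for illustration, rather than the paper's 9 × 9):

```python
import math
from collections import Counter

def entropy_texture(window):
    """Shannon entropy of the grey-level histogram inside a window; high
    values indicate heterogeneous (textured) cover, low values smooth cover."""
    values = [v for row in window for v in row]
    counts = Counter(values)
    n = len(values)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

uniform = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
mixed = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(entropy_texture(uniform))  # → 0.0 (no texture)
print(entropy_texture(mixed))    # maximal for 9 distinct grey levels
```

    Sliding such a window across a band produces a texture image that can be stacked with the spectral bands before classification, which is how the kappa improvement reported above was obtained.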

  5. A Comparative Study of Landsat TM and SPOT HRG Images for Vegetation Classification in the Brazilian Amazon

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E.; Moran, Emilio

    2009-01-01

    Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin. PMID:19789716

  6. Local Subspace Classifier with Transform-Invariance for Image Classification

    NASA Astrophysics Data System (ADS)

    Hotta, Seiji

    A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. With this classification rule, transform-invariance can be handled easily because tangent vectors can be used to approximate transformations. However, tangent vectors cannot be used with other types of images, such as color images. Hence, a kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified with experiments on handwritten digit and color image classification.

  7. Online image classification under monotonic decision boundary constraint

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Allebach, Jan; Wagner, Jerry; Pitta, Brandi; Larson, David; Guo, Yandong

    2015-01-01

    Image classification is a prerequisite for copy quality enhancement in an all-in-one (AIO) device, which comprises a printer and a scanner and can be used to scan, copy, and print. Different processing pipelines are provided in an AIO printer, each designed specifically for one type of input image to achieve optimal output image quality. A typical approach to this problem is to apply a Support Vector Machine (SVM) to classify the input image and feed it to its corresponding processing pipeline. Online SVM training can help improve classification performance as input images accumulate. At the same time, we want to make a quick decision on the input image to speed up classification, which means the AIO device sometimes does not need to scan the entire image before reaching a final decision. These two constraints, online SVM and quick decision, raise questions regarding: 1) what features are suitable for classification; and 2) how the decision boundary should be controlled in online SVM training. This paper discusses the compatibility of online SVM training and quick-decision capability.
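The paper's online SVM is not reproduced here; as a hedged stand-in, a minimal online linear classifier with perceptron-style updates illustrates the core idea of refining a decision boundary one example at a time as scans accumulate. All feature vectors and labels below are hypothetical:

```python
def train_online(samples, labels, epochs=20, lr=0.1):
    """Online linear classifier: visit examples one at a time and
    update the weights only on misclassification (labels in {-1, +1}).
    A simplified stand-in for incremental SVM training."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            margin = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * margin <= 0:  # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Two toy feature vectors per class (e.g. edge density, colour variance).
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, -1, -1]
w, b = train_online(X, y)
print([predict(w, b, x) for x in X])  # → [1, 1, -1, -1]
```

A real online SVM would additionally enforce a margin and regularization term, but the one-example-at-a-time update pattern is the same.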

  8. Classification of skin cancer images using local binary pattern and SVM classifier

    NASA Astrophysics Data System (ADS)

    Adjed, Faouzi; Faye, Ibrahima; Ababsa, Fakhreddine; Gardezi, Syed Jamal; Dass, Sarat Chandra

    2016-11-01

    In this paper, a classification method for melanoma and non-melanoma skin cancer images is presented using local binary patterns (LBP). The LBP operator computes local texture information from the skin cancer images, which is then used to compute statistical features with the capability to discriminate melanoma from non-melanoma skin tissue. A support vector machine (SVM) is applied to the feature matrix for classification into two skin image classes (malignant and benign). The method achieves a good classification accuracy of 76.1%, with a sensitivity of 75.6% and a specificity of 76.7%.
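The reported accuracy, sensitivity, and specificity follow directly from confusion-matrix counts. A minimal sketch, with hypothetical counts chosen only to roughly reproduce the figures quoted above:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from confusion counts,
    treating melanoma as the positive class."""
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts: 90 malignant and 90 benign test images.
acc, sens, spec = diagnostic_metrics(tp=68, fn=22, tn=69, fp=21)
print(round(acc, 3), round(sens, 3), round(spec, 3))  # → 0.761 0.756 0.767
```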

  9. The Clinical Next-Generation Sequencing Database: A Tool for the Unified Management of Clinical Information and Genetic Variants to Accelerate Variant Pathogenicity Classification.

    PubMed

    Nishio, Shin-Ya; Usami, Shin-Ichi

    2017-03-01

    Recent advances in next-generation sequencing (NGS) have given rise to new challenges due to the difficulties in variant pathogenicity interpretation and large dataset management, including many kinds of public population databases as well as public or commercial disease-specific databases. Here, we report a new database development tool, named the "Clinical NGS Database," for improving clinical NGS workflow through the unified management of variant information and clinical information. This database software offers a two-feature approach to variant pathogenicity classification. The first of these approaches is a phenotype similarity-based approach. This database allows the easy comparison of the detailed phenotype of each patient with the average phenotype of the same gene mutation at the variant or gene level. It is also possible to browse patients with the same gene mutation quickly. The other approach is a statistical approach to variant pathogenicity classification based on the use of the odds ratio for comparisons between the case and the control for each inheritance mode (families with apparently autosomal dominant inheritance vs. control, and families with apparently autosomal recessive inheritance vs. control). A number of case studies are also presented to illustrate the utility of this database. © 2016 The Authors. Human Mutation published by Wiley Periodicals, Inc.
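The case-control odds ratio described above can be sketched as follows. The counts are hypothetical, and the 0.5 continuity correction is an assumption of this sketch (commonly used to avoid division by zero), not necessarily what the database applies:

```python
def odds_ratio(case_carriers, case_total, control_carriers, control_total):
    """Odds ratio of carrying a variant in cases vs. controls, with a
    Haldane-Anscombe 0.5 correction for zero cells."""
    a = case_carriers + 0.5                    # cases with variant
    b = case_total - case_carriers + 0.5       # cases without variant
    c = control_carriers + 0.5                 # controls with variant
    d = control_total - control_carriers + 0.5 # controls without variant
    return (a / b) / (c / d)

# Hypothetical counts: variant seen in 12 of 50 dominant-inheritance
# families vs. 3 of 200 controls.
print(round(odds_ratio(12, 50, 3, 200), 2))
```

An odds ratio well above 1 for one inheritance mode, as here, would support pathogenicity under that mode.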

  10. Glioma CpG island methylator phenotype (G-CIMP): biological and clinical implications.

    PubMed

    Malta, Tathiane M; de Souza, Camila F; Sabedot, Thais S; Silva, Tiago C; Mosella, Maritza S; Kalkanis, Steven N; Snyder, James; Castro, Ana Valeria B; Noushmehr, Houtan

    2018-04-09

    Gliomas are a heterogeneous group of brain tumors with distinct biological and clinical properties. Despite advances in surgical techniques and clinical regimens, treatment of high-grade glioma remains challenging and carries dismal rates of therapeutic success and overall survival. Challenges include the molecular complexity of gliomas, as well as inconsistencies in histopathological grading, resulting in an inaccurate prediction of disease progression and failure in the use of standard therapy. The updated 2016 World Health Organization (WHO) classification of tumors of the central nervous system reflects a refinement of tumor diagnostics by integrating the genotypic and phenotypic features, thereby narrowing the defined subgroups. The new classification recommends molecular diagnosis of isocitrate dehydrogenase (IDH) mutational status in gliomas. IDH-mutant gliomas manifest the cytosine-phosphate-guanine (CpG) island methylator phenotype (G-CIMP). Notably, the recent identification of clinically relevant subsets of G-CIMP tumors (G-CIMP-high and G-CIMP-low) provides a further refinement in glioma classification that is independent of grade and histology. This scheme may be useful for predicting patient outcome and may be translated into effective therapeutic strategies tailored to each patient. In this review, we highlight the evolution of our understanding of the G-CIMP subsets and how recent advances in characterizing the genome and epigenome of gliomas may influence future basic and translational research.

  11. Robust diagnosis of non-Hodgkin lymphoma phenotypes validated on gene expression data from different laboratories.

    PubMed

    Bhanot, Gyan; Alexe, Gabriela; Levine, Arnold J; Stolovitzky, Gustavo

    2005-01-01

    A major challenge in cancer diagnosis from microarray data is the need for robust, accurate classification models which are independent of the analysis techniques used and can combine data from different laboratories. We propose such a classification scheme, originally developed for phenotype identification from mass spectrometry data. The method uses a robust multivariate gene selection procedure and combines the results of several machine learning tools trained on raw and pattern data to produce an accurate meta-classifier. We illustrate and validate our method by applying it to gene expression datasets: the oligonucleotide HuGeneFL microarray dataset of Shipp et al. (www.genome.wi.mit.du/MPR/lymphoma) and the Hu95Av2 Affymetrix dataset (Dalla-Favera's laboratory, Columbia University). Our pattern-based meta-classification technique achieves higher predictive accuracies than each of the individual classifiers, is robust against data perturbations, and provides subsets of related predictive genes. Our techniques predict that combinations of some genes in the p53 pathway are highly predictive of phenotype. In particular, we find that in 80% of DLBCL cases the mRNA level of at least one of the three genes p53, PLK1 and CDK2 is elevated, while in 80% of FL cases, the mRNA level of at most one of them is elevated.

  12. Emotional modelling and classification of a large-scale collection of scene images in a cluster environment

    PubMed Central

    Li, Yanfei; Tian, Yun

    2018-01-01

    The development of network technology and the popularization of image capturing devices have led to a rapid increase in the number of digital images available, and it is becoming increasingly difficult to identify a desired image from among the massive number of possible images. Images usually contain rich semantic information, and people usually understand images at a high semantic level. Therefore, achieving the ability to use advanced technology to identify the emotional semantics contained in images to enable emotional semantic image classification remains an urgent issue in various industries. To this end, this study proposes an improved OCC emotion model that integrates personality and mood factors for emotional modelling to describe the emotional semantic information contained in an image. The proposed classification system integrates the k-Nearest Neighbour (KNN) algorithm with the Support Vector Machine (SVM) algorithm. The MapReduce parallel programming model was used to adapt the KNN-SVM algorithm for parallel implementation in the Hadoop cluster environment, thereby achieving emotional semantic understanding for the classification of a massive collection of images. For training and testing, 70,000 scene images were randomly selected from the SUN Database. The experimental results indicate that users with different personalities show overall consistency in their emotional understanding of the same image. For a training sample size of 50,000, the classification accuracies for different emotional categories targeted at users with different personalities were approximately 95%, and the training time was only 1/5 of that required for the corresponding algorithm with a single-node architecture. Furthermore, the speedup of the system also showed a linearly increasing tendency. Thus, the experiments achieved a good classification effect and can lay a foundation for classification in terms of additional types of emotional image semantics, thereby demonstrating the practical significance of the proposed model. PMID:29320579

  13. Emotional modelling and classification of a large-scale collection of scene images in a cluster environment.

    PubMed

    Cao, Jianfang; Li, Yanfei; Tian, Yun

    2018-01-01

    The development of network technology and the popularization of image capturing devices have led to a rapid increase in the number of digital images available, and it is becoming increasingly difficult to identify a desired image from among the massive number of possible images. Images usually contain rich semantic information, and people usually understand images at a high semantic level. Therefore, achieving the ability to use advanced technology to identify the emotional semantics contained in images to enable emotional semantic image classification remains an urgent issue in various industries. To this end, this study proposes an improved OCC emotion model that integrates personality and mood factors for emotional modelling to describe the emotional semantic information contained in an image. The proposed classification system integrates the k-Nearest Neighbour (KNN) algorithm with the Support Vector Machine (SVM) algorithm. The MapReduce parallel programming model was used to adapt the KNN-SVM algorithm for parallel implementation in the Hadoop cluster environment, thereby achieving emotional semantic understanding for the classification of a massive collection of images. For training and testing, 70,000 scene images were randomly selected from the SUN Database. The experimental results indicate that users with different personalities show overall consistency in their emotional understanding of the same image. For a training sample size of 50,000, the classification accuracies for different emotional categories targeted at users with different personalities were approximately 95%, and the training time was only 1/5 of that required for the corresponding algorithm with a single-node architecture. Furthermore, the speedup of the system also showed a linearly increasing tendency. Thus, the experiments achieved a good classification effect and can lay a foundation for classification in terms of additional types of emotional image semantics, thereby demonstrating the practical significance of the proposed model.

  14. Hyperspectral image classification based on local binary patterns and PCANet

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification is well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, the spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features at a specified position are transformed into a 2-D image. The resulting images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
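Several of the records above build on the basic LBP encoding step: each neighbour of a pixel is thresholded against the centre value and the resulting bits form a texture code. A minimal sketch for one 3x3 patch (the multi-scale and histogramming details of the papers are omitted, and the clockwise bit ordering here is one common convention, not necessarily theirs):

```python
def lbp_code(patch):
    """8-neighbour local binary pattern code for the centre pixel of a
    3x3 patch: each neighbour >= centre contributes one bit, read
    clockwise from the top-left neighbour."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, p in enumerate(neighbours):
        if p >= c:
            code |= 1 << bit
    return code

# A bright top edge over a darker centre region.
patch = [[9, 9, 9],
         [2, 5, 2],
         [2, 2, 2]]
print(lbp_code(patch))  # → 7 (only the three top-row bits set)
```

Sliding this over an image and histogramming the codes yields the texture feature vectors used for classification.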

  15. Hyperspectral imaging with wavelet transform for classification of colon tissue biopsy samples

    NASA Astrophysics Data System (ADS)

    Masood, Khalid

    2008-08-01

    Automatic classification of medical images is a part of our computerised medical imaging programme to support pathologists in their diagnosis. Hyperspectral data has found applications in medical imagery, and its use in biopsy analysis of medical images is increasing significantly. In this paper, we present a histopathological analysis for the classification of colon biopsy samples into benign and malignant classes. The proposed study is based on a comparison between 3D spectral/spatial analysis and 2D spatial analysis. Textural features in the wavelet domain are used in both approaches for classification of colon biopsy samples. Experimental results indicate that incorporating wavelet textural features with a support vector machine, in 2D spatial analysis, achieves the best classification accuracy.

  16. Satellite Image Classification of Building Damages Using Airborne and Satellite Image Samples in a Deep Learning Approach

    NASA Astrophysics Data System (ADS)

    Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.

    2018-05-01

    The localization and detailed assessment of damaged buildings after a disastrous event is of utmost importance to guide response operations, recovery tasks or insurance purposes. Several remote sensing platforms and sensors are currently used for the manual detection of building damages. However, there is an overall interest in the use of automated methods to perform this task, regardless of the platform used. Owing to its synoptic coverage and predictable availability, satellite imagery is currently used as input for the identification of building damages by the International Charter, as well as by the Copernicus Emergency Management Service for the production of damage grading and reference maps. Recently proposed methods for image classification of building damages rely on convolutional neural networks (CNN). These are usually trained with only satellite image samples in a binary classification problem; however, the number of samples derived from these images is often limited, affecting the quality of the classification results. The use of up/down-sampled image samples during the training of a CNN has been demonstrated to improve several image recognition tasks in remote sensing. However, it is currently unclear whether this multi-resolution information can also be captured from images with different spatial resolutions, such as satellite and airborne imagery (from both manned and unmanned platforms). In this paper, a CNN framework using residual connections and dilated convolutions is used, considering both manned and unmanned aerial image samples, to perform the satellite image classification of building damages. Three network configurations, trained with multi-resolution image samples, are compared against two benchmark networks where only satellite image samples are used. Combining feature maps generated from airborne and satellite image samples, and refining these using only the satellite image samples, improved the overall satellite image classification of building damages by nearly 4%.

  17. Classification of Tree Species in Overstorey Canopy of Subtropical Forest Using QuickBird Images.

    PubMed

    Lin, Chinsu; Popescu, Sorin C; Thomson, Gavin; Tsogt, Khongor; Chang, Chein-I

    2015-01-01

    This paper proposes a supervised classification scheme to identify 40 tree species (2 coniferous, 38 broadleaf) belonging to 22 families and 36 genera in high spatial resolution QuickBird multispectral images (HMS). The overall kappa coefficient (OKC) and species conditional kappa coefficients (SCKC) were used to evaluate classification performance in training samples and to estimate accuracy and uncertainty in test samples. Baseline classification performance using HMS images and vegetation index (VI) images was evaluated with OKC values of 0.58 and 0.48, respectively, but performance improved significantly (up to 0.99) when these were used in combination with an HMS spectral-spatial texture image (SpecTex). One of the 40 species had very high conditional kappa coefficient performance (SCKC ≥ 0.95) using 4-band HMS and 5-band VI images, but only five species had lower performance (0.68 ≤ SCKC ≤ 0.94) using the SpecTex images. When the SpecTex images were combined with a Visible Atmospherically Resistant Index (VARI), there was a significant improvement in performance in the training samples. The same level of improvement could not be replicated in the test samples, indicating that a high degree of uncertainty exists in species classification accuracy, which may be due to individual tree crown density, leaf greenness (inter-canopy gaps), and noise in the background environment (intra-canopy gaps). These factors increase uncertainty in the spectral texture features and therefore represent potential problems when using pixel-based classification techniques for multi-species classification.

  18. Unsupervised feature learning for autonomous rock image classification

    NASA Astrophysics Data System (ADS)

    Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond

    2017-09-01

    Autonomous rock image classification can enhance the capability of robots for geological detection and increase scientific returns, both in investigations on Earth and in planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and hand-crafting features manually is not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that they can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.

  19. Towards precision medicine: from quantitative imaging to radiomics

    PubMed Central

    Acharya, U. Rajendra; Hagiwara, Yuki; Sudarshan, Vidya K.; Chan, Wai Yee; Ng, Kwan Hoong

    2018-01-01

    Radiology (imaging) and imaging-guided interventions, which provide multi-parametric morphologic and functional information, are playing an increasingly significant role in precision medicine. Radiologists are trained to understand the imaging phenotypes, transcribe those observations (phenotypes) to correlate with underlying diseases and to characterize the images. However, in order to understand and characterize the molecular phenotype (to obtain genomic information) of solid heterogeneous tumours, the advanced sequencing of those tissues using biopsy is required. Thus, radiologists image the tissues from various views and angles in order to have the complete image phenotypes, thereby acquiring a huge amount of data. Deriving meaningful details from all these radiological data becomes challenging and raises the big data issues. Therefore, interest in the application of radiomics has been growing in recent years as it has the potential to provide significant interpretive and predictive information for decision support. Radiomics is a combination of conventional computer-aided diagnosis, deep learning methods, and human skills, and thus can be used for quantitative characterization of tumour phenotypes. This paper discusses the overview of radiomics workflow, the results of various radiomics-based studies conducted using various radiological images such as computed tomography (CT), magnetic resonance imaging (MRI), and positron-emission tomography (PET), the challenges we are facing, and the potential contribution of radiomics towards precision medicine. PMID:29308604

  20. Toward high-throughput phenotyping: unbiased automated feature extraction and selection from knowledge sources.

    PubMed

    Yu, Sheng; Liao, Katherine P; Shaw, Stanley Y; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2015-09-01

    Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. We applied this method to develop algorithms identifying rheumatoid arthritis (RA) patients, and coronary artery disease (CAD) cases among those with RA, from a large multi-institutional EHR. The areas under the receiver operating characteristic curves (AUC) for classifying RA and CAD using models trained with automated features were 0.951 and 0.929, respectively, compared with AUCs of 0.938 and 0.929 for models trained with expert-curated features. Models trained with NLP text features selected through an unbiased, automated procedure achieved comparable or slightly higher accuracy than those trained with expert-curated features. The majority of the selected model features were interpretable. The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
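The AUC figures quoted above have a useful rank interpretation: the probability that a randomly chosen case scores higher than a randomly chosen non-case. As an illustrative stdlib-only sketch with hypothetical classifier scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC computed as the probability that a random positive scores
    above a random negative; ties count one half (Mann-Whitney form).
    O(n*m) pairwise version, fine for small samples."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical phenotype-classifier scores for cases and non-cases.
print(auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1]))  # → 0.9375
```

An AUC of 0.951, as reported for RA, means a case outranks a non-case about 95% of the time.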

  1. Ensemble methods with simple features for document zone classification

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing

    2012-01-01

    Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via feature extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classification of these blocks. We present a comparative evaluation of three ensemble-based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36,503 zones extracted from 416 document images randomly selected from the tobacco legacy document collection. The results obtained verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.
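Ensemble methods such as bagging ultimately combine base classifiers by voting. A minimal majority-vote sketch (the zone labels are hypothetical, and real bagging would also bootstrap the training sets of the base classifiers):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier zone labels by simple majority vote,
    the aggregation step at the heart of bagging-style ensembles."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical base classifiers label the same document zone.
votes = ["text", "text", "handwriting"]
print(majority_vote(votes))  # → text
```

Boosting differs in that the votes are weighted and the base classifiers are trained sequentially on reweighted examples.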

  2. Time-reversal imaging for classification of submerged elastic targets via Gibbs sampling and the Relevance Vector Machine.

    PubMed

    Dasgupta, Nilanjan; Carin, Lawrence

    2005-04-01

    Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters is then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here in a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI is presented along with the feature extraction and target classification via the RVM.

  3. Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks

    PubMed Central

    Xu, Xin; Gui, Rong; Pu, Fangling

    2018-01-01

    Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of the excellent performance of CNN, more and more methods based on CNN are applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can only classify one pixel each time. Because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously. When classifying a whole PolSAR image, it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use the interrelation of land covers to improve the classification results. The experiments of FFS-CNN are evaluated on a Chinese Gaofen-3 PolSAR image and other two real PolSAR images. Experiment results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods. PMID:29510499

  4. Hyperspectral Image Enhancement and Mixture Deep-Learning Classification of Corneal Epithelium Injuries

    PubMed Central

    Md Noor, Siti Salwa; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-01-01

    In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues show similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of data into clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images, acquired without the application of eye staining, were used. Three image feature extraction approaches were applied for image classification: (i) image feature classification from the histogram using a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image feature classification using deep-learning convolutional neural networks (CNNs) only; and (iii) the combined classification of CNNs and SVM-Linear. The performance results indicate that our chosen image features from the histogram and length-scale parameter were able to classify with up to 100% accuracy, particularly with CNNs and CNNs-SVM, by employing 80% of the data sample for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability. PMID:29144388

  5. Multi-Pixel Simultaneous Classification of PolSAR Image Using Convolutional Neural Networks.

    PubMed

    Wang, Lei; Xu, Xin; Dong, Hao; Gui, Rong; Pu, Fangling

    2018-03-03

    Convolutional neural networks (CNN) have achieved great success in the optical image processing field. Because of the excellent performance of CNN, more and more methods based on CNN are applied to polarimetric synthetic aperture radar (PolSAR) image classification. Most CNN-based PolSAR image classification methods can only classify one pixel each time. Because all the pixels of a PolSAR image are classified independently, the inherent interrelation of different land covers is ignored. We use a fixed-feature-size CNN (FFS-CNN) to classify all pixels in a patch simultaneously. The proposed method has several advantages. First, FFS-CNN can classify all the pixels in a small patch simultaneously. When classifying a whole PolSAR image, it is faster than common CNNs. Second, FFS-CNN is trained to learn the interrelation of different land covers in a patch, so it can use the interrelation of land covers to improve the classification results. The experiments of FFS-CNN are evaluated on a Chinese Gaofen-3 PolSAR image and other two real PolSAR images. Experiment results show that FFS-CNN is comparable with the state-of-the-art PolSAR image classification methods.

  6. Two-tier tissue decomposition for histopathological image representation and classification.

    PubMed

    Gultekin, Tunc; Koyuncu, Can Fahrettin; Sokmensuer, Cenk; Gunduz-Demir, Cigdem

    2015-01-01

    In digital pathology, devising effective image representations is crucial to the design of robust automated diagnosis systems. To this end, many studies have proposed object-based representations, instead of directly using image pixels, since a histopathological image may contain a considerable amount of noise, typically at the pixel level. These previous studies mostly employ color information to define their objects, which approximately represent histological tissue components in an image, and then use the spatial distribution of these objects for image representation and classification. Thus, object definition has a direct effect on the way the image is represented, which in turn affects classification accuracy. In this paper, our aim is to design a classification system for histopathological images. Towards this end, we present a new model for effective representation of these images that will be used by the classification system. The contributions of this model are twofold. First, it introduces a new two-tier tissue decomposition method for defining a set of multityped objects in an image. Different from previous studies, these objects are defined by combining texture, shape, and size information, and they may correspond to individual histological tissue components as well as to local tissue subregions of different characteristics. As its second contribution, it defines a new metric, which we call dominant blob scale, to characterize the shape and size of an object with a single scalar value. Our experiments on colon tissue images reveal that this new object definition and characterization provides a distinguishing representation of normal and cancerous histopathological images, which is effective for obtaining more accurate classification results compared to its counterparts.

  7. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

    PubMed

    Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John

    2017-02-01

    The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images, and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
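The Youden-index cutoff described above reduces to a short search over candidate thresholds. A minimal sketch, using invented toy scores rather than the study's data:

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Pick the binary decision threshold maximizing Youden's J = sensitivity + specificity - 1.

    scores: classifier outputs (higher = more likely positive)
    labels: ground-truth 0/1 array
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = labels == 1
    neg = ~pos
    best_j, best_t = -1.0, float(scores.min())
    for t in np.unique(scores):
        pred = scores >= t
        sens = (pred & pos).sum() / max(pos.sum(), 1)   # true positive rate
        spec = (~pred & neg).sum() / max(neg.sum(), 1)  # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# toy example with well-separated scores
t, j = youden_cutoff([0.1, 0.2, 0.3, 0.7, 0.8, 0.9], [0, 0, 0, 1, 1, 1])
```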

  8. Semi-supervised classification tool for DubaiSat-2 multispectral imagery

    NASA Astrophysics Data System (ADS)

    Al-Mansoori, Saeed

    2015-10-01

    This paper addresses a semi-supervised classification tool based on a pixel-based approach for multispectral satellite imagery. Few studies have demonstrated such an algorithm for multispectral images, especially when the image consists of four bands (Red, Green, Blue and Near Infrared), as in DubaiSat-2 satellite images. The proposed approach utilizes unsupervised and supervised classification schemes sequentially to identify four classes in the image: water bodies, vegetation, land (developed and undeveloped areas) and paved areas (i.e. roads). Unsupervised classification is applied to identify two classes, water bodies and vegetation, based on the Normalized Difference Vegetation Index (NDVI), a well-known index built on the distinct wavelengths of visible and near-infrared sunlight absorbed and reflected by plants. Afterward, supervised classification is performed by selecting homogeneous training samples for roads and land areas; here, a precise selection of training samples plays a vital role in the classification accuracy. Post-classification is finally performed to enhance the classification accuracy, where the classified image is sieved, clumped and filtered before producing the final output. Overall, the supervised classification approach produced higher accuracy than the unsupervised method. This paper presents preliminary research results that point to the effectiveness of the proposed technique.
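The NDVI-based first stage can be sketched as below; the formula (NIR − Red)/(NIR + Red) is standard, but the water and vegetation thresholds here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)

def coarse_classes(red, nir, water_thr=0.0, veg_thr=0.4):
    """Hypothetical first-tier labeling: NDVI < water_thr -> water (0),
    NDVI > veg_thr -> vegetation (1), everything else (2) is left for the
    supervised stage (roads vs. land)."""
    v = ndvi(red, nir)
    out = np.full(v.shape, 2, dtype=int)
    out[v < water_thr] = 0
    out[v > veg_thr] = 1
    return out

# three toy pixels: water (NDVI < 0), vegetation (NDVI ~ 0.67), undecided
labels = coarse_classes(red=np.array([0.6, 0.1, 0.3]),
                        nir=np.array([0.2, 0.5, 0.35]))
```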

  9. A minimum spanning forest based classification method for dedicated breast CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei, E-mail: bfei@emory.edu

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the classification method, DICE overlap ratios are used to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.
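The DICE overlap ratio used for evaluation is straightforward to compute on binary masks; the two masks below are toy data:

```python
import numpy as np

def dice(a, b):
    """DICE overlap ratio between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

auto   = np.array([[1, 1, 0], [0, 1, 0]])   # automated segmentation (toy)
manual = np.array([[1, 1, 0], [0, 0, 0]])   # manual reference (toy)
score = dice(auto, manual)                  # 2*2 / (3+2)
```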

  10. A Wavelet Polarization Decomposition Net Model for Polarimetric SAR Image Classification

    NASA Astrophysics Data System (ADS)

    He, Chu; Ou, Dan; Yang, Teng; Wu, Kun; Liao, Mingsheng; Chen, Erxue

    2014-11-01

    In this paper, a deep model based on wavelet texture is proposed for Polarimetric Synthetic Aperture Radar (PolSAR) image classification, inspired by the recent success of deep learning methods. The model is designed to learn powerful and informative representations to improve generalization on complex scene classification tasks. Given the influence of speckle noise in PolSAR images, wavelet polarization decomposition is applied first to obtain basic and discriminative texture features, which are then embedded into a Deep Neural Network (DNN) in order to compose multi-layer higher representations. We demonstrate that the model can produce a powerful representation that captures some otherwise untraceable information from PolSAR images, and it shows promising results in comparison with traditional SAR image classification methods on the SAR image dataset.
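As a rough illustration of wavelet texture features of the kind such a model might consume, here is a one-level 2D Haar decomposition with subband energies (the paper's actual wavelet and feature set are not specified here):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into LL, LH, HL, HH subbands.
    Subband statistics (e.g. energies) can serve as texture features for a DNN."""
    img = np.asarray(img, dtype=float)
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: coarse approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar2d(img)
energies = [float((s ** 2).mean()) for s in (ll, lh, hl, hh)]  # simple texture features
```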

  11. A comparative study for chest radiograph image retrieval using binary texture and deep learning classification.

    PubMed

    Anavi, Yaron; Kogan, Ilya; Gelbart, Elad; Geva, Ofer; Greenspan, Hayit

    2015-08-01

    In this work various approaches are investigated for X-ray image retrieval and specifically chest pathology retrieval. Given a query image taken from a data set of 443 images, the objective is to rank images according to similarity. Different features, including binary features, texture features, and deep learning (CNN) features are examined. In addition, two approaches are investigated for the retrieval task. One approach is based on the distance of image descriptors using the above features (hereon termed the "descriptor"-based approach); the second approach ("classification"-based approach) is based on a probability descriptor, generated by a pair-wise classification of each two classes (pathologies) and their decision values using an SVM classifier. Best results are achieved using deep learning features in a classification scheme.
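The "descriptor"-based approach amounts to ranking the database by distance in feature space. A minimal sketch with Euclidean distance and toy two-dimensional descriptors:

```python
import numpy as np

def rank_by_descriptor(query, database):
    """Rank database images by Euclidean distance between feature descriptors
    (the descriptors could be binary, texture, or CNN features)."""
    query = np.asarray(query, dtype=float)
    database = np.asarray(database, dtype=float)
    dists = np.linalg.norm(database - query, axis=1)
    order = np.argsort(dists)          # most similar first
    return order, dists[order]

# toy descriptor database of three images
db = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
order, dists = rank_by_descriptor(np.array([1.0, 0.0]), db)
```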

  12. Restoration of Wavelet-Compressed Images and Motion Imagery

    DTIC Science & Technology

    2004-01-01


  13. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.

  14. A contour-based shape descriptor for biomedical image classification and retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-12-01

    Contours, object blobs, and specific feature points are utilized to represent object shapes and extract shape descriptors that can then be used for object detection or image classification. In this research we develop a shape descriptor for biomedical image type (or, modality) classification. We adapt a feature extraction method used in optical character recognition (OCR) for character shape representation, and apply various image preprocessing methods to successfully adapt the method to our application. The proposed shape descriptor is applied to radiology images (e.g., MRI, CT, ultrasound, X-ray, etc.) to assess its usefulness for modality classification. In our experiment we compare our method with other visual descriptors such as CEDD, CLD, Tamura, and PHOG that extract color, texture, or shape information from images. The proposed method achieved the highest classification accuracy of 74.1% among all other individual descriptors in the test, and when combined with CSD (color structure descriptor) showed better performance (78.9%) than using the shape descriptor alone.

  15. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    PubMed Central

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and evaluate classification measures that exploit characteristic signatures of such histograms. Two histogram curve matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California, was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
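A histogram curve matching classifier of this kind can be sketched with histogram intersection as the matching score; the class names, reference histograms, and bin settings below are hypothetical:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity between two normalized histograms: sum of bin-wise minima."""
    return float(np.minimum(h1, h2).sum())

def classify_object(pixels, class_histograms, bins=8, value_range=(0, 256)):
    """Assign an image-object to the class whose reference digital-number
    histogram best matches the object's histogram (curve matching, as opposed
    to a nearest-neighbor-to-mean rule on summary statistics)."""
    h, _ = np.histogram(pixels, bins=bins, range=value_range)
    h = h / max(h.sum(), 1)
    scores = {name: histogram_intersection(h, ref)
              for name, ref in class_histograms.items()}
    return max(scores, key=scores.get), scores

# toy normalized reference histograms for two hypothetical classes
refs = {"water": np.array([0.7, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]),
        "urban": np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4, 0.6])}
label, scores = classify_object(np.array([240, 250, 220, 230]), refs)
```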

  16. Multiple Spectral-Spatial Classification Approach for Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2010-01-01

    A new multiple classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies, when compared to previously proposed classification techniques.
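The unanimity rule for marker selection can be sketched directly; the three toy label maps below stand in for independent classifier outputs:

```python
import numpy as np

def select_markers(label_maps):
    """Keep a pixel as a marker only if every independent classifier assigned
    it the same class; disagreeing pixels get -1 (no marker)."""
    maps = np.stack(label_maps)            # (n_classifiers, H, W)
    agree = (maps == maps[0]).all(axis=0)  # unanimity per pixel
    return np.where(agree, maps[0], -1)

# toy 2x2 label maps from three classifiers
m1 = np.array([[0, 1], [2, 2]])
m2 = np.array([[0, 1], [0, 2]])
m3 = np.array([[0, 2], [1, 2]])
markers = select_markers([m1, m2, m3])  # only two pixels are unanimous
```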

  17. Diverse Application of Magnetic Resonance Imaging for Mouse Phenotyping

    PubMed Central

    Wu, Yijen L.; Lo, Cecilia W.

    2017-01-01

    Small animal models, particularly mouse models, of human diseases are becoming an indispensable tool for biomedical research. Studies in animal models have provided important insights into the etiology of diseases and accelerated the development of therapeutic strategies. Detailed phenotypic characterization is essential, both for the development of such animal models and mechanistic studies into disease pathogenesis and testing the efficacy of experimental therapeutics. Magnetic Resonance Imaging (MRI) is a versatile and non-invasive imaging modality with excellent penetration depth, tissue coverage, and soft tissue contrast. MRI, being a multi-modal imaging modality, together with proven imaging protocols and availability of good contrast agents, is ideally suited for phenotyping mutant mouse models. Here we describe the applications of MRI for phenotyping structural birth defects involving the brain, heart, and kidney in mice. The versatility of MRI and its ease of use are well suited to meet the rapidly increasing demands for mouse phenotyping in the coming age of functional genomics. PMID:28544650

  18. High throughput imaging and analysis for biological interpretation of agricultural plants and environmental interaction

    NASA Astrophysics Data System (ADS)

    Hong, Hyundae; Benac, Jasenka; Riggsbee, Daniel; Koutsky, Keith

    2014-03-01

    High throughput (HT) phenotyping of crops is essential to increase yield in environments deteriorated by climate change. The controlled environment of a greenhouse offers an ideal platform to study the genotype to phenotype linkages for crop screening. Advanced imaging technologies are used to study plants' responses to resource limitations such as water and nutrient deficiency. Advanced imaging technologies coupled with automation make HT phenotyping in the greenhouse not only feasible, but practical. Monsanto has a state-of-the-art automated greenhouse (AGH) facility. Handling of the soil, pots, water, and nutrients is completely automated. Images of the plants are acquired by multiple hyperspectral and broadband cameras. The hyperspectral cameras cover wavelengths from visible light through short wave infra-red (SWIR). In-house developed software analyzes the images to measure plant morphological and biochemical properties. We measure phenotypic metrics like plant area, height, and width as well as biomass. Hyperspectral imaging allows us to measure biochemical metrics such as chlorophyll, anthocyanin, and foliar water content. The last 4 years of AGH operations on crops like corn, soybean, and cotton have demonstrated successful application of imaging and analysis technologies for high throughput plant phenotyping. Using HT phenotyping, scientists have shown strong correlations to environmental conditions, such as water and nutrient deficits, as well as the ability to tease apart distinct differences in the genetic backgrounds of crops.

  19. Computational phenotype discovery using unsupervised feature learning over noisy, sparse, and irregular clinical data.

    PubMed

    Lasko, Thomas A; Denny, Joshua C; Levy, Mia A

    2013-01-01

    Inferring precise phenotypic patterns from population-scale clinical data is a core computational task in the development of precision, personalized medicine. The traditional approach uses supervised learning, in which an expert designates which patterns to look for (by specifying the learning task and the class labels), and where to look for them (by specifying the input variables). While appropriate for individual tasks, this approach scales poorly and misses the patterns that we don't think to look for. Unsupervised feature learning overcomes these limitations by identifying patterns (or features) that collectively form a compact and expressive representation of the source data, with no need for expert input or labeled examples. Its rising popularity is driven by new deep learning methods, which have produced high-profile successes on difficult standardized problems of object recognition in images. Here we introduce its use for phenotype discovery in clinical data. This use is challenging because the largest source of clinical data - Electronic Medical Records - typically contains noisy, sparse, and irregularly timed observations, rendering them poor substrates for deep learning methods. Our approach couples dirty clinical data to deep learning architecture via longitudinal probability densities inferred using Gaussian process regression. From episodic, longitudinal sequences of serum uric acid measurements in 4368 individuals we produced continuous phenotypic features that suggest multiple population subtypes, and that accurately distinguished (0.97 AUC) the uric-acid signatures of gout vs. acute leukemia despite not being optimized for the task. The unsupervised features were as accurate as gold-standard features engineered by an expert with complete knowledge of the domain, the classification task, and the class labels. Our findings demonstrate the potential for achieving computational phenotype discovery at population scale. 
We expect such data-driven phenotypes to expose unknown disease variants and subtypes and to provide rich targets for genetic association studies.
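The coupling step above, inferring a smooth longitudinal curve from irregularly timed observations, can be sketched as plain Gaussian process regression with an RBF kernel; the kernel hyperparameters and the measurement values below are illustrative assumptions, not the study's:

```python
import numpy as np

def gp_posterior_mean(t_obs, y_obs, t_grid, length=30.0, sigma_f=1.0, sigma_n=0.1):
    """Posterior mean of a zero-mean Gaussian process (RBF kernel) fitted to
    irregularly timed observations, evaluated on a regular grid: one way to
    turn sparse clinical time series into fixed-size inputs for feature learning."""
    def k(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    t_obs, y_obs, t_grid = map(np.asarray, (t_obs, y_obs, t_grid))
    K = k(t_obs, t_obs) + sigma_n**2 * np.eye(len(t_obs))  # noisy kernel matrix
    alpha = np.linalg.solve(K, y_obs)
    return k(t_grid, t_obs) @ alpha

# hypothetical uric-acid-like measurements at irregular days
t = np.array([0.0, 3.0, 40.0, 95.0])
y = np.array([5.0, 5.2, 7.8, 6.1])
grid = np.linspace(0, 100, 11)       # regular grid for downstream learning
mu = gp_posterior_mean(t, y, grid)
```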

  20. Computational Phenotype Discovery Using Unsupervised Feature Learning over Noisy, Sparse, and Irregular Clinical Data

    PubMed Central

    Lasko, Thomas A.; Denny, Joshua C.; Levy, Mia A.

    2013-01-01

    Inferring precise phenotypic patterns from population-scale clinical data is a core computational task in the development of precision, personalized medicine. The traditional approach uses supervised learning, in which an expert designates which patterns to look for (by specifying the learning task and the class labels), and where to look for them (by specifying the input variables). While appropriate for individual tasks, this approach scales poorly and misses the patterns that we don’t think to look for. Unsupervised feature learning overcomes these limitations by identifying patterns (or features) that collectively form a compact and expressive representation of the source data, with no need for expert input or labeled examples. Its rising popularity is driven by new deep learning methods, which have produced high-profile successes on difficult standardized problems of object recognition in images. Here we introduce its use for phenotype discovery in clinical data. This use is challenging because the largest source of clinical data – Electronic Medical Records – typically contains noisy, sparse, and irregularly timed observations, rendering them poor substrates for deep learning methods. Our approach couples dirty clinical data to deep learning architecture via longitudinal probability densities inferred using Gaussian process regression. From episodic, longitudinal sequences of serum uric acid measurements in 4368 individuals we produced continuous phenotypic features that suggest multiple population subtypes, and that accurately distinguished (0.97 AUC) the uric-acid signatures of gout vs. acute leukemia despite not being optimized for the task. The unsupervised features were as accurate as gold-standard features engineered by an expert with complete knowledge of the domain, the classification task, and the class labels. Our findings demonstrate the potential for achieving computational phenotype discovery at population scale. 
We expect such data-driven phenotypes to expose unknown disease variants and subtypes and to provide rich targets for genetic association studies. PMID:23826094

  1. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine

    PubMed Central

    Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng

    2016-01-01

    Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible. PMID:28050555

  2. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine.

    PubMed

    Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng

    2016-01-01

    Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible.
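The core SMOTE idea, interpolating between a minority sample and one of its nearest minority neighbors, is a short routine; the L*a*b* values below are invented, not from the study:

```python
import numpy as np

def smote(minority, n_new, k=2, seed=None):
    """Generate synthetic minority-class samples by interpolating between a
    randomly chosen minority sample and one of its k nearest minority
    neighbors (the core SMOTE idea, sketched without a full k-d tree)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(minority, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                 # interpolation fraction in [0, 1)
        out.append(X[i] + lam * (X[j] - X[i]))
    return np.array(out)

# toy L*a*b* samples of an under-represented tongue-colour class (hypothetical)
minority = np.array([[60.0, 20.0, 10.0], [62.0, 22.0, 12.0], [58.0, 18.0, 9.0]])
synthetic = smote(minority, n_new=4, seed=0)
```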

  3. Utilization of a high-throughput shoot imaging system to examine the dynamic phenotypic responses of a C4 cereal crop plant to nitrogen and water deficiency over time

    PubMed Central

    Neilson, E. H.; Edwards, A. M.; Blomstedt, C. K.; Berger, B.; Møller, B. Lindberg; Gleadow, R. M.

    2015-01-01

    The use of high-throughput phenotyping systems and non-destructive imaging is widely regarded as a key technology allowing scientists and breeders to develop crops with the ability to perform well under diverse environmental conditions. However, many of these phenotyping studies have been optimized using the model plant Arabidopsis thaliana. In this study, The Plant Accelerator® at The University of Adelaide, Australia, was used to investigate the growth and phenotypic response of the important cereal crop, Sorghum bicolor L. Moench and related hybrids to water-limited conditions and different levels of fertilizer. Imaging in different spectral ranges was used to monitor plant composition, chlorophyll, and moisture content. Phenotypic image analysis accurately measured plant biomass. The data set obtained enabled the responses of the different sorghum varieties to the experimental treatments to be differentiated and modelled. Plant architectural elements, for example diurnal leaf curling and leaf area index, were determined using imaging and found to correlate with an improved tolerance to stress. Analysis of colour images revealed that leaf ‘greenness’ correlated with foliar nitrogen and chlorophyll, while near infrared reflectance (NIR) analysis was a good predictor of water content and leaf thickness, and correlated with plant moisture content. It is shown that imaging sorghum using a high-throughput system can accurately identify and differentiate between growth and specific phenotypic traits. R scripts for robust, parsimonious models are provided to allow other users of phenomic imaging systems to extract useful data readily, and thus relieve a bottleneck in phenotypic screening of multiple genotypes of key crop plants. PMID:25697789
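A simple 'greenness' score of the kind correlated with foliar nitrogen and chlorophyll is the Excess Green index on chromaticity-normalized channels; note this is a generic index, not necessarily the one used in the study:

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index ExG = 2g - r - b on chromaticity-normalized channels:
    high for green foliage, near zero for neutral (soil-like) pixels."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0                     # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, -1, 0)   # normalized channels
    return 2 * g - r - b

pixels = np.array([[40, 120, 30],    # healthy leaf: strongly green
                   [90, 80, 70]])    # soil-like: nearly neutral
scores = excess_green(pixels)
```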

  4. Using texture analysis to improve per-pixel classification of very high resolution images for mapping plastic greenhouses

    NASA Astrophysics Data System (ADS)

    Agüera, Francisco; Aguilar, Fernando J.; Aguilar, Manuel A.

    The area occupied by plastic-covered greenhouses has undergone rapid growth in recent years, currently exceeding 500,000 ha worldwide. Due to the vast amount of input (water, fertilisers, fuel, etc.) required, and output of different agricultural wastes (vegetable, plastic, chemical, etc.), the environmental impact of this type of production system can be serious if not accompanied by sound and sustainable territorial planning. For this, the new generation of satellites which provide very high resolution imagery, such as QuickBird and IKONOS, can be useful. In this study, one QuickBird and one IKONOS satellite image have been used to cover the same area under similar circumstances. The aim of this work was an exhaustive comparison of QuickBird vs. IKONOS images in land-cover detection. In terms of plastic greenhouse mapping, comparative tests were designed and implemented, each with separate objectives. Firstly, the Maximum Likelihood Classification (MLC) was applied using five different approaches combining R, G, B, NIR, and panchromatic bands. The combinations of the bands used significantly influenced some of the indexes used to assess classification quality in this work. Furthermore, the classification quality of the QuickBird image was higher in all cases than that of the IKONOS image. Secondly, texture features derived from the panchromatic images at different window sizes and with different grey levels were added as a fifth band to the R, G, B, NIR images to carry out the MLC. The inclusion of texture information in the classification did not improve the classification quality. For classifications with texture information, the best accuracies were found in both images for the mean and angular second moment texture parameters. The optimum window size for these texture parameters was 3×3 for IKONOS images, while for QuickBird images it depended on the quality index studied, but was around 15×15. With regard to the grey level, the optimum was 128.
Thus, the optimum texture parameter depended on the main objective of the image classification. If the main classification goal is to minimize the number of pixels wrongly classified, the mean texture parameter should be used, whereas if the main classification goal is to minimize the unclassified pixels the angular second moment texture parameter should be used. On the whole, both QuickBird and IKONOS images offered promising results in classifying plastic greenhouses.
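The angular second moment (ASM) texture parameter comes from the grey-level co-occurrence matrix; a minimal sketch for a single pixel offset (window scanning and the 128 grey levels used in the study are omitted for brevity):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for one pixel offset, normalized to a
    joint probability table."""
    img = np.asarray(img)
    h, w = img.shape
    P = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / max(P.sum(), 1)

def asm(P):
    """Angular second moment (energy): sum of squared co-occurrence
    probabilities; high for uniform textures, low for busy ones."""
    return float((P ** 2).sum())

flat  = np.zeros((4, 4), dtype=int)     # perfectly uniform patch
noisy = np.array([[0, 1, 2, 3]] * 4)    # varied patch
a_flat, a_noisy = asm(glcm(flat)), asm(glcm(noisy))
```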

  5. Precision and Error of Three-dimensional Phenotypic Measures Acquired from 3dMD Photogrammetric Images

    PubMed Central

    Aldridge, Kristina; Boyadjiev, Simeon A.; Capone, George T.; DeLeon, Valerie B.; Richtsmeier, Joan T.

    2015-01-01

    The genetic basis for complex phenotypes is currently of great interest for both clinical investigators and basic scientists. In order to acquire a thorough understanding of the translation from genotype to phenotype, highly precise measures of phenotypic variation are required. New technologies, such as 3D photogrammetry are being implemented in phenotypic studies due to their ability to collect data rapidly and non-invasively. Before these systems can be broadly implemented the error associated with data collected from images acquired using these technologies must be assessed. This study investigates the precision, error, and repeatability associated with anthropometric landmark coordinate data collected from 3D digital photogrammetric images acquired with the 3dMDface System. Precision, error due to the imaging system, error due to digitization of the images, and repeatability are assessed in a sample of children and adults (N=15). Results show that data collected from images with the 3dMDface System are highly repeatable and precise. The average error associated with the placement of landmarks is sub-millimeter; both the error due to digitization and to the imaging system are very low. The few measures showing a higher degree of error include those crossing the labial fissure, which are influenced by even subtle movement of the mandible. These results suggest that 3D anthropometric data collected using the 3dMDface System are highly reliable and therefore useful for evaluation of clinical dysmorphology and surgery, analyses of genotype-phenotype correlations, and inheritance of complex phenotypes. PMID:16158436
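One way to summarize landmark placement precision, mean deviation of repeated digitizations from the per-landmark centroid, can be sketched as follows; the coordinates are hypothetical, not from the study:

```python
import numpy as np

def landmark_precision(trials):
    """Mean Euclidean deviation of repeated 3D landmark placements from their
    centroid: a per-landmark summary of digitization error."""
    trials = np.asarray(trials, dtype=float)    # (n_trials, n_landmarks, 3)
    centroid = trials.mean(axis=0)
    dev = np.linalg.norm(trials - centroid, axis=2)
    return dev.mean(axis=0)                     # per-landmark mean error

# two repeated digitizations of two landmarks (mm), hypothetical coordinates
trials = np.array([[[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]],
                   [[0.2, 0.0, 0.0], [10.0, 0.4, 0.0]]])
errors = landmark_precision(trials)   # sub-millimeter deviations
```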

  6. Multi-task linear programming discriminant analysis for the identification of progressive MCI individuals.

    PubMed

    Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang

    2014-01-01

    Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data across subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, one for each combination of available data sources. To solve all the classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve a similar estimated mean difference between the two classes (under classification) for the shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains different classification tasks to choose a common feature subset for the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. 
We further compare our method with the iMSF method (using incomplete MRI and PET images) and with the single-task classification method (using only MRI, or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method. PMID:24820966
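
    The single-task core of a linear programming discriminant can be sketched as follows. This is a simplified, single-source illustration, not the authors' multi-task MLPD formulation: it finds a minimum-L1-norm direction `w` whose projection of the estimated class-mean difference is fixed to 1, solved as an LP via `scipy.optimize.linprog`.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def lpd_fit(X_pos, X_neg):
        """Sparse discriminant direction via linear programming:
        minimize ||w||_1 subject to w . (mu_pos - mu_neg) = 1."""
        mu_pos, mu_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)
        d = mu_pos - mu_neg
        p = d.size
        # Split w = u - v with u, v >= 0 so the L1 objective becomes linear.
        c = np.ones(2 * p)
        A_eq = np.concatenate([d, -d])[None, :]
        res = linprog(c, A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (2 * p))
        w = res.x[:p] - res.x[p:]
        # Decision threshold at the projected midpoint of the class means.
        b = -0.5 * w @ (mu_pos + mu_neg)
        return w, b

    def lpd_predict(w, b, X):
        return np.where(X @ w + b > 0, 1, -1)
    ```

    The L1 objective tends to concentrate weight on few features, which is the sparsity property that makes LP discriminants attractive for high-dimensional imaging features.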

  8. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    PubMed

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smooths clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, achieving high classification rates on all three.
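
    A minimal sketch of the idea, assuming a precomputed saliency map with values in [0, 1]; the conductivity function and parameters here are illustrative, not the authors' formulation. High-saliency (foreground) pixels diffuse less, so their structure is preserved, while low-saliency background is smoothed:

    ```python
    import numpy as np

    def saliency_diffusion(img, saliency, n_iter=20, kappa=1.0, dt=0.2):
        """Perona-Malik-style nonlinear diffusion damped by saliency
        (periodic borders via np.roll, for brevity)."""
        u = img.astype(float).copy()
        damp = 1.0 - saliency  # salient pixels (saliency ~ 1) barely diffuse
        for _ in range(n_iter):
            upd = np.zeros_like(u)
            for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
                d = np.roll(u, shift, axis=axis) - u
                # Edge-stopping conductivity: large gradients diffuse less.
                upd += np.exp(-(d / kappa) ** 2) * d
            u += dt * damp * upd
        return u
    ```

    Running the filter to convergence and at an intermediate iteration count would give the final-scale and mid-scale images that the paper fuses with the original for classification.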

  9. Cluster Method Analysis of K. S. C. Image

    NASA Technical Reports Server (NTRS)

    Rodriguez, Joe, Jr.; Desai, M.

    1997-01-01

    Information obtained from satellite-based systems has moved to the forefront as a method for the identification of many land cover types. Identification of different land features through remote sensing is an effective tool for regional and global assessment of geometric characteristics. Classification data acquired from remote sensing images have a wide variety of applications. In particular, analysis of remote sensing images has special applications in the classification of various types of vegetation. Results obtained from classification studies of a particular area or region serve towards a greater understanding of what parameters (ecological, temporal, etc.) affect the region being analyzed. In this paper, we make a distinction between supervised and unsupervised classification approaches, although focus is given to the unsupervised classification method using 1987 Thematic Mapper (TM) images of Kennedy Space Center.
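
    As an illustration of the unsupervised approach, a compact k-means clustering of pixel spectra might look like the following. This is a generic sketch, not the implementation used in the study; the farthest-point initialization is a simplification chosen to keep the example deterministic:

    ```python
    import numpy as np

    def kmeans_classify(pixels, k, n_iter=20):
        """Cluster (n_pixels, n_bands) spectra into k spectral classes."""
        # Farthest-point initialization keeps the sketch deterministic.
        centers = pixels[[0]].astype(float)
        while len(centers) < k:
            dist = np.linalg.norm(pixels[:, None] - centers[None], axis=2).min(axis=1)
            centers = np.vstack([centers, pixels[dist.argmax()]])
        for _ in range(n_iter):
            # Assign each pixel to its nearest center, then recompute centers.
            dist = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
            labels = dist.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = pixels[labels == j].mean(axis=0)
        return labels, centers
    ```

    The resulting spectral clusters would then be assigned to land cover types by an analyst, which is the usual interpretive step in unsupervised classification.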

  10. Landcover classification in MRF context using Dempster-Shafer fusion for multisensor imagery.

    PubMed

    Sarkar, Anjan; Banerjee, Anjan; Banerjee, Nilanjan; Brahma, Siddhartha; Kartikeyan, B; Chakraborty, Manab; Majumder, K L

    2005-05-01

    This work deals with multisensor data fusion to obtain landcover classification. The role of feature-level fusion using the Dempster-Shafer rule and that of data-level fusion in the MRF context is studied in this paper to obtain an optimally segmented image. Subsequently, segments are validated and classification accuracy for the test data is evaluated. Two examples of data fusion of optical images and a synthetic aperture radar image are presented, each set having been acquired on different dates. Classification accuracies of the proposed technique are compared with those of some recent techniques in the literature for the same image data.
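
    Dempster's combination rule itself is compact. A sketch over mass functions represented as dicts keyed by frozensets of class labels (an illustrative implementation, not the paper's code):

    ```python
    def dempster_combine(m1, m2):
        """Combine two Dempster-Shafer mass functions defined over subsets
        of the frame of discernment, renormalizing away conflicting mass."""
        combined, conflict = {}, 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = a & b
                if inter:  # compatible hypotheses reinforce each other
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:      # disjoint hypotheses contribute to conflict
                    conflict += ma * mb
        return {s: v / (1.0 - conflict) for s, v in combined.items()}
    ```

    For example, fusing an optical sensor's evidence with a SAR sensor's evidence over the classes {water, forest} concentrates mass on the hypothesis both sensors support.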

  11. Land use/cover classification in the Brazilian Amazon using satellite images.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira

    2012-09-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews experiments related to land use/cover classification in the Brazilian Amazon over a decade. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data. PMID:24353353

  13. Using iterative cluster merging with improved gap statistics to perform online phenotype discovery in the context of high-throughput RNAi screens

    PubMed Central

    Yin, Zheng; Zhou, Xiaobo; Bakal, Chris; Li, Fuhai; Sun, Youxian; Perrimon, Norbert; Wong, Stephen TC

    2008-01-01

    Background The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods which aim to identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks, i.e. restoring pre-defined biologically meaningful phenotypes, differentiating novel phenotypes from known ones, and distinguishing novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian Mixture Model (GMM) is employed to estimate the distribution of each existing phenotype, and then used as the reference distribution in the gap statistics. 
This method is broadly applicable to a number of different types of image-based datasets derived from a wide spectrum of experimental conditions and is suited to adaptively processing new images which are continuously added to existing datasets. Validations were carried out on different datasets, including a published RNAi screen using Drosophila embryos [Additional files 1, 2], a dataset for cell cycle phase identification using HeLa cells [Additional files 1, 3, 4] and a synthetic dataset using polygons; our method tackled the three aforementioned tasks effectively with an accuracy range of 85%–90%. When our method is implemented in the context of a Drosophila genome-scale RNAi image-based screen of cultured cells aimed at identifying the contribution of individual genes towards the regulation of cell shape, it efficiently discovers meaningful new phenotypes and provides novel biological insight. We also propose a two-step procedure to modify the novelty detection method based on a one-class SVM, so that it can be used for online phenotype discovery. Under different conditions, we compared the SVM-based method with our method using various datasets, and our method consistently outperformed the SVM-based method in at least two of the three tasks by 2% to 5%. These results demonstrate that our method can be used to better identify novel phenotypes in image-based datasets from a wide range of conditions and organisms. Conclusion We demonstrate that our method can detect various novel phenotypes effectively in complex datasets. Experimental results also validate that our method performs consistently under different orders of image input, variations of starting conditions including the number and composition of existing phenotypes, and datasets from different screens. Based on our findings, the proposed method is suitable for online phenotype discovery in diverse high-throughput image-based genetic and chemical screens. PMID:18534020
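
    The phenotype-modelling step can be sketched as follows: each known phenotype is modelled by a Gaussian in feature space, and a cell whose best log-likelihood under all existing models falls below a threshold becomes a candidate for a novel phenotype. This is a single-Gaussian simplification of the paper's GMM-plus-gap-statistics pipeline, and the threshold value is hypothetical:

    ```python
    import numpy as np

    class GaussianPhenotype:
        """Full-covariance Gaussian model of one known phenotype."""
        def __init__(self, X):
            self.mu = X.mean(axis=0)
            self.cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])  # regularized
            self.inv = np.linalg.inv(self.cov)
            self.logdet = np.linalg.slogdet(self.cov)[1]

        def loglik(self, x):
            d = x - self.mu
            k = d.size
            return -0.5 * (d @ self.inv @ d + self.logdet + k * np.log(2 * np.pi))

    def is_novel(x, models, threshold=-10.0):
        """Candidate novel phenotype if no existing model explains the cell
        well (the threshold is illustrative and would be tuned in practice)."""
        return max(m.loglik(x) for m in models) < threshold
    ```

    Cells flagged this way would then be clustered, and clusters merged or accepted as new phenotypes using the gap-statistic criterion described in the abstract.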

  14. Classification by Using Multispectral Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Liao, C. T.; Huang, H. H.

    2012-07-01

    Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is also clearly visualized, so ground features can easily be recognized and classified via supervised or unsupervised classification methods. Nevertheless, the shortcomings of multispectral images are a strong dependence on lighting conditions and a lack of three-dimensional semantic information in the classification results. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from lighting conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, the disadvantage is the shortage of multispectral information, which remains a challenge for ground feature classification in massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near-infrared images via close-range photogrammetry, matching the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, thresholds on height and color information are used for classification.
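
    The final thresholding step might look like this minimal sketch; the column layout, class names, and threshold values are hypothetical, chosen only to illustrate combining a spectral criterion (NDVI from red/NIR) with a height criterion:

    ```python
    import numpy as np

    def classify_points(points, ndvi_thresh=0.3, height_thresh=2.0):
        """points: (N, 5) array of [x, y, z, red, nir].
        Labels: 0 = ground/other, 1 = low vegetation, 2 = tall vegetation."""
        z, red, nir = points[:, 2], points[:, 3], points[:, 4]
        ndvi = (nir - red) / np.maximum(nir + red, 1e-9)  # color criterion
        labels = np.zeros(len(points), dtype=int)
        veg = ndvi > ndvi_thresh
        labels[veg & (z < height_thresh)] = 1   # vegetated, low
        labels[veg & (z >= height_thresh)] = 2  # vegetated, tall
        return labels
    ```

    The point is that each point carries both geometry and spectra, so a single rule can use both, which is exactly the benefit of the fused multispectral point cloud.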

  15. GEOBIA For Land Use Mapping Using Worldview2 Image In Bengkak Village Coastal, Banyuwangi Regency, East Java

    NASA Astrophysics Data System (ADS)

    Alrassi, Fitzastri; Salim, Emil; Nina, Anastasia; Alwi, Luthfi; Danoedoro, Projo; Kamal, Muhammad

    2016-11-01

    The east coast of Banyuwangi regency has a diverse variety of land use such as ponds, mangroves, agricultural fields and settlements. WorldView-2 is a multispectral image with high spatial resolution that can display detailed information of land use. The Geographic Object Based Image Analysis (GEOBIA) classification technique uses object segments as the smallest unit of analysis. The segmentation and classification process is based not only on the spectral values of the image but also on other elements of image interpretation. This gives GEOBIA both opportunities and challenges in the mapping and monitoring of land use. This research aims to assess the GEOBIA classification method for generating a land use classification of the coastal areas of Banyuwangi. The result of this study is a land use classification map produced by GEOBIA classification. We verified the accuracy of the resulting land use map by comparing it with the results of visual interpretation of the image, which had been validated through field surveys. Land use on most of the east coast of Banyuwangi regency is dominated by mangroves, agricultural fields, mixed farms, settlements and ponds.
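
    Map verification of this kind usually reduces to a confusion matrix between reference labels and the classified map. A minimal, generic sketch (not the workflow used in this study):

    ```python
    import numpy as np

    def confusion_matrix(ref, pred, n_classes):
        """Rows: reference labels; columns: classified labels."""
        m = np.zeros((n_classes, n_classes), dtype=int)
        np.add.at(m, (ref, pred), 1)  # unbuffered accumulation per pair
        return m

    def overall_accuracy(m):
        """Fraction of samples on the diagonal (correctly classified)."""
        return m.trace() / m.sum()
    ```

    Per-class producer's and user's accuracies follow directly from the row and column sums of the same matrix.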

  16. Mixed-phenotype acute leukemia: state-of-the-art of the diagnosis, classification and treatment.

    PubMed

    Cernan, Martin; Szotkowski, Tomas; Pikalova, Zuzana

    2017-09-01

    Mixed-phenotype acute leukemia (MPAL) is a heterogeneous group of hematopoietic malignancies in which blasts show markers of multiple developmental lineages and cannot be clearly classified as acute myeloid or lymphoblastic leukemias. Historically, various names and classifications were used for this rare entity, which accounts for 2-5% of all acute leukemias depending on the diagnostic criteria used. The currently valid classification of myeloid neoplasms and acute leukemia published by the World Health Organization (WHO) in 2016 refers to this group of diseases as MPAL. Because adverse cytogenetic abnormalities are frequently present, MPAL is generally considered a disease with a poor prognosis. Knowledge of its treatment is limited to retrospective analyses of small patient cohorts. So far, no treatment recommendations verified by prospective studies have been published. The reported data suggest that induction therapy for acute lymphoblastic leukemia followed by allogeneic hematopoietic cell transplantation is more effective than induction therapy for acute myeloid leukemia or consolidation chemotherapy. The establishment of cooperative groups and international registries based on the recent WHO criteria is required to ensure further progress in the understanding and treatment of MPAL. This review summarizes current knowledge on the diagnosis, classification, prognosis and treatment of MPAL patients.

  17. Toward a Reasoned Classification of Diseases Using Physico-Chemical Based Phenotypes

    PubMed Central

    Schwartz, Laurent; Lafitte, Olivier; da Veiga Moreira, Jorgelindo

    2018-01-01

    Background: Diseases and health conditions have been classified according to anatomical site and etiological and clinical criteria. Physico-chemical mechanisms underlying the biology of diseases, such as the flow of energy through cells and tissues, have often been overlooked in classification systems. Objective: We propose a conceptual framework toward the development of an energy-oriented classification of diseases, based on the principles of physical chemistry. Methods: A review of the literature on the physical chemistry of biological interactions in a number of diseases is presented from the point of view of fluid and solid mechanics, electricity, and chemistry. Results: We found consistent evidence in the literature of decreased and/or increased physical and chemical forces intertwined with the biological processes of numerous diseases, which allowed the identification of mechanical, electric and chemical phenotypes of diseases. Discussion: Biological mechanisms of diseases need to be evaluated and integrated into more comprehensive theories that account for the principles of physics and chemistry. A hypothetical model is proposed relating the natural history of diseases to changes in mechanical stress, electric field, and chemical equilibria (ATP). The present perspective toward an innovative disease classification may improve drug-repurposing strategies in the future. PMID:29541031

  18. SDL: Saliency-Based Dictionary Learning Framework for Image Similarity.

    PubMed

    Sarkar, Rituparna; Acton, Scott T

    2018-02-01

    In image classification, obtaining adequate data to learn a robust classifier has often proven to be difficult in several scenarios. Classification of histological tissue images for health care analysis is a notable application in this context due to the necessity of surgery, biopsy or autopsy. To adequately exploit limited training data in classification, we propose a saliency guided dictionary learning method and subsequently an image similarity technique for histo-pathological image classification. Salient object detection from images aids in the identification of discriminative image features. We leverage the saliency values for the local image regions to learn a dictionary and respective sparse codes for an image, such that the more salient features are reconstructed with smaller error. The dictionary learned from an image gives a compact representation of the image itself and is capable of representing images with similar content, with comparable sparse codes. We employ this idea to design a similarity measure between a pair of images, where local image features of one image, are encoded with the dictionary learned from the other and vice versa. To effectively utilize the learned dictionary, we take into account the contribution of each dictionary atom in the sparse codes to generate a global image representation for image comparison. The efficacy of the proposed method was evaluated using three tissue data sets that consist of mammalian kidney, lung and spleen tissue, breast cancer, and colon cancer tissue images. From the experiments, we observe that our methods outperform the state of the art with an increase of 14.2% in the average classification accuracy over all data sets.
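
    The saliency weighting in the coding step can be illustrated with a weighted least-squares code, a deliberately simplified stand-in for the paper's sparse coding. Here `D` is a dictionary with atoms as columns and `w` a per-dimension saliency weight, so dimensions with higher saliency incur a larger penalty for reconstruction error:

    ```python
    import numpy as np

    def weighted_code(D, x, w):
        """Code alpha minimizing || diag(sqrt(w)) (x - D alpha) ||^2, so
        dimensions with larger saliency weight get smaller reconstruction
        error. Solves the weighted normal equations directly."""
        W = np.diag(w)
        return np.linalg.solve(D.T @ W @ D, D.T @ W @ x)
    ```

    With uniform weights this reduces to ordinary least squares; a sparsity penalty on `alpha`, as in actual dictionary learning, would be the natural next refinement.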

  19. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.

    1981-01-01

    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. The conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains the various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain high classification accuracy in a large and heterogeneous crop area using the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
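
    The Gaussian maximum-likelihood classifier referred to here can be sketched in a few lines; the training statistics are simply the per-class mean and covariance of the training pixels. This is a generic sketch, not the Image-100 implementation:

    ```python
    import numpy as np

    def fit_ml(class_samples):
        """class_samples: list of (n_i, n_bands) arrays, one per class.
        Returns per-class (mean, covariance) training statistics."""
        return [(X.mean(axis=0), np.cov(X.T) + 1e-6 * np.eye(X.shape[1]))
                for X in class_samples]

    def predict_ml(models, x):
        """Assign x to the class with the highest Gaussian log-likelihood
        (equal priors assumed)."""
        scores = []
        for mu, cov in models:
            d = x - mu
            scores.append(-0.5 * (d @ np.linalg.solve(cov, d)
                                  + np.linalg.slogdet(cov)[1]))
        return int(np.argmax(scores))
    ```

    The abstract's recommendation to create homogeneous spectral subclasses corresponds to fitting one such Gaussian per subclass rather than per crop.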

  20. Expert Reliability for the World Health Organization Standardized Ultrasound Classification of Cystic Echinococcosis

    PubMed Central

    Solomon, Nadia; Fields, Paul J.; Tamarozzi, Francesca; Brunetti, Enrico; Macpherson, Calum N. L.

    2017-01-01

    Cystic echinococcosis (CE), a parasitic zoonosis, results in cyst formation in the viscera. Cyst morphology depends on developmental stage. In 2003, the World Health Organization (WHO) published a standardized ultrasound (US) classification for CE, for use among experts as a standard of comparison. This study examined the reliability of this classification. Eleven international CE and US experts completed an assessment of eight WHO classification images and 88 test images representing cyst stages. Inter- and intraobserver reliability and observer performance were assessed using Fleiss' and Cohen's kappa. Interobserver reliability was moderate for WHO images (κ = 0.600, P < 0.0001) and substantial for test images (κ = 0.644, P < 0.0001), with substantial to almost perfect interobserver reliability for stages with pathognomonic signs (CE1, CE2, and CE3) for WHO (0.618 < κ < 0.904) and test images (0.642 < κ < 0.768). Comparisons of expert performances against the majority classification for each image were significant for WHO (0.413 < κ < 1.000, P < 0.005) and test images (0.718 < κ < 0.905, P < 0.0001); and intraobserver reliability was significant for WHO (0.520 < κ < 1.000, P < 0.005) and test images (0.690 < κ < 0.896, P < 0.0001). Findings demonstrate moderate to substantial interobserver and substantial to almost perfect intraobserver reliability for the WHO classification, with substantial to almost perfect interobserver reliability for pathognomonic stages. This confirms experts' abilities to reliably identify WHO-defined pathognomonic signs of CE, demonstrating that the WHO classification provides a reproducible way of staging CE. PMID:28070008
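
    Fleiss' kappa for this kind of multi-rater agreement can be computed directly from a table of rating counts. A minimal sketch of the standard formula (not the study's analysis code):

    ```python
    import numpy as np

    def fleiss_kappa(counts):
        """counts: (n_images, n_stages) matrix; counts[i, j] is how many
        raters assigned image i to stage j. Every image must receive the
        same number of ratings."""
        counts = np.asarray(counts, dtype=float)
        n_items, _ = counts.shape
        n_raters = counts[0].sum()
        # Category proportions and per-item pairwise agreement.
        p_j = counts.sum(axis=0) / (n_items * n_raters)
        p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
        p_bar, p_e = p_i.mean(), (p_j ** 2).sum()  # observed vs chance
        return (p_bar - p_e) / (1 - p_e)
    ```

    Values near 1 indicate almost perfect agreement beyond chance, matching the qualitative bands (moderate, substantial, almost perfect) used in the abstract.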

  1. Ice/water Classification of Sentinel-1 Images

    NASA Astrophysics Data System (ADS)

    Korosov, Anton; Zakhvatkina, Natalia; Muckenhuber, Stefan

    2015-04-01

    Sea ice monitoring and classification relies heavily on synthetic aperture radar (SAR) imagery. These sensors record data either at horizontal polarization only (RADARSAT-1), at vertical polarization only (ERS-1 and ERS-2), or at dual polarization (Radarsat-2, Sentinel-1). Many algorithms have been developed to discriminate sea ice types and open water using single-polarization images. Ice type classification, however, is still ambiguous in some cases. Sea ice classification in single-polarization SAR images has been attempted using various methods since the beginning of the ERS programme, but robust classification schemes using only SAR images that provide useful results across varying sea ice types and open water tend not to be generally applicable in an operational regime. The new generation of SAR satellites can deliver images in several polarizations, which improves the prospects for developing sea ice classification algorithms. In this study we use data from Sentinel-1 at dual polarization, i.e. HH (horizontally transmitted and horizontally received) and HV (horizontally transmitted, vertically received). This mode assembles a wide SAR image from several narrower SAR beams, resulting in an image of 500 x 500 km with 50 m resolution. A non-linear scheme for classification of Sentinel-1 data has been developed. The processing identifies three classes: ice, calm water and rough water at 1 km spatial resolution. The raw sigma0 data in HH and HV polarization are first corrected for thermal and random noise by removing the background thermal noise level and smoothing the image with several filters. In the next step, texture characteristics are computed in a moving window using a gray-level co-occurrence matrix (GLCM). A neural network is applied in the last step to process the array of the most informative texture characteristics and perform ice/water classification. 
The main results are: * the most informative texture characteristics to be used for sea ice classification were identified; * the best set of parameters, including the window size, the number of quantization levels of sigma0 values and the co-occurrence distance, was found; * a support vector machine (SVM) was trained on the results of visual classification of 30 Sentinel-1 images. Despite the generally high accuracy of the neural network (95% true positive classification), problems with the classification of newly formed young ice and rough water arise due to their similar average backscatter and texture. Other methods of smoothing and of computing texture characteristics (e.g. computation of the GLCM from a variable-size window) are assessed. The developed scheme will be utilized in NRT processing of Sentinel-1 data at NERSC within the MyOcean2 project.
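
    The GLCM texture step can be sketched as follows: a generic implementation for a single pixel offset, where the quantization to `levels` gray levels and the particular feature set are illustrative rather than the study's tuned parameters:

    ```python
    import numpy as np

    def glcm(img, levels=8, dx=1, dy=0):
        """Normalized gray-level co-occurrence matrix for offset (dx, dy),
        computed from an image with values in [0, 1)."""
        q = np.clip((img * levels).astype(int), 0, levels - 1)
        h, w = q.shape
        # Co-occurring pixel pairs separated by the offset.
        src = q[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
        dst = q[max(dy, 0):h - max(-dy, 0), max(dx, 0):w - max(-dx, 0)]
        p = np.zeros((levels, levels))
        np.add.at(p, (src.ravel(), dst.ravel()), 1)
        return p / p.sum()

    def glcm_features(p):
        """A few classic Haralick-style statistics of the GLCM."""
        i, j = np.indices(p.shape)
        return {"contrast": float((p * (i - j) ** 2).sum()),
                "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
                "energy": float((p ** 2).sum())}
    ```

    Computing such features in a moving window over the sigma0 image yields the texture array that the classifier then separates into ice and water.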

  2. Classification images for localization performance in ramp-spectrum noise.

    PubMed

    Abbey, Craig K; Samuelson, Frank W; Zeng, Rongping; Boone, John M; Eckstein, Miguel P; Myers, Kyle

    2018-05-01

    This study investigates forced localization of targets in simulated images with statistical properties similar to trans-axial sections of x-ray computed tomography (CT) volumes. A total of 24 imaging conditions are considered, comprising two target sizes, three levels of background variability, and four levels of frequency apodization. The goal of the study is to better understand how human observers perform forced-localization tasks in images with CT-like statistical properties. The transfer properties of CT systems are modeled by a shift-invariant transfer function in addition to apodization filters that modulate high spatial frequencies. The images contain noise that is the combination of a ramp-spectrum component, simulating the effect of acquisition noise in CT, and a power-law component, simulating the effect of normal anatomy in the background, which are modulated by the apodization filter as well. Observer performance is characterized using two psychophysical techniques: efficiency analysis and classification image analysis. Observer efficiency quantifies how much diagnostic information is being used by observers to perform a task, and classification images show how that information is being accessed in the form of a perceptual filter. Psychophysical studies from five subjects form the basis of the results. Observer efficiency ranges from 29% to 77% across the different conditions. The lowest efficiency is observed in conditions with uniform backgrounds, where significant effects of apodization are found. The classification images, estimated using smoothing windows, suggest that human observers use center-surround filters to perform the task, and these are subjected to a number of subsequent analyses. When implemented as a scanning linear filter, the classification images appear to capture most of the observer variability in efficiency (r² = 0.86). 
The frequency spectra of the classification images show that frequency weights generally appear bandpass in nature, with peak frequency and bandwidth that vary with statistical properties of the images. In these experiments, the classification images appear to capture important features of human-observer performance. Frequency apodization only appears to have a significant effect on performance in the absence of anatomical variability, where the observers appear to underweight low spatial frequencies that have relatively little noise. Frequency weights derived from the classification images generally have a bandpass structure, with adaptation to different conditions seen in the peak frequency and bandwidth. The classification image spectra show relatively modest changes in response to different levels of apodization, with some evidence that observers are attempting to rebalance the apodized spectrum presented to them. © 2018 American Association of Physicists in Medicine.
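
    The core of classification-image estimation is a response-conditioned average of the noise fields. A minimal yes/no sketch (the forced-localization version used in the study is more involved); for a linear observer, the result is proportional to the observer's internal template:

    ```python
    import numpy as np

    def classification_image(noise_fields, responses):
        """Mean noise on 'signal present' responses minus mean noise on
        'signal absent' responses. For a linear observer this estimate is
        proportional to the observer's template."""
        r = np.asarray(responses, dtype=bool)
        return noise_fields[r].mean(axis=0) - noise_fields[~r].mean(axis=0)
    ```

    Simulating a linear observer and correlating the recovered classification image with the true template is the standard sanity check for this estimator.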

  3. An automatic agricultural zone classification procedure for crop inventory satellite images

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Kux, H. J.; Velasco, F. R. D.; Deoliveira, M. O. B.

    1982-01-01

    A classification procedure for assessing crop areal proportion in multispectral scanner images is discussed. The procedure is divided into four parts: labeling, classification, proportion estimation, and evaluation. The procedure also has the following characteristics: multitemporal classification, the need for only minimal field information, and a verification capability between automatic classification and analyst labeling. The processing steps and the main algorithms involved are discussed. An outlook on the future of this technology is also presented.

  4. Integrating image processing and classification technology into automated polarizing film defect inspection

    NASA Astrophysics Data System (ADS)

    Kuo, Chung-Feng Jeffrey; Lai, Chun-Yu; Kao, Chih-Hsiang; Chiu, Chin-Hsun

    2018-05-01

    In order to improve the current manual inspection and classification process for polarizing film on production lines, this study proposes a high-precision automated inspection and classification system for polarizing film, which is used for recognition and classification of four common defects: dent, foreign material, bright spot, and scratch. First, the median filter is used to remove the impulse noise in the defect image of the polarizing film. The random noise in the background is smoothed by improved anisotropic diffusion, while the edge detail of the defect region is sharpened. Next, the defect image is transformed by Fourier transform to the frequency domain, combined with a Butterworth high-pass filter to sharpen the edge detail of the defect region, and brought back by inverse Fourier transform to the spatial domain to complete the image enhancement process. For image segmentation, the edge of the defect region is found by the Canny edge detector, and the complete defect region is then obtained by two-stage morphology processing. For defect classification, the feature values, including maximum gray level, eccentricity, and the contrast and homogeneity of the gray level co-occurrence matrix (GLCM) extracted from the images, are used as the input of the radial basis function neural network (RBFNN) and back-propagation neural network (BPNN) classifiers. 96 defect images are then used as training samples, and 84 defect images are used as testing samples to validate the classification effect. The results show that the classification accuracy using RBFNN is 98.9%. The processing time of a single image is 2.57 seconds, meeting the practical application requirements of an industrial production line. Thus, our proposed system can be used by manufacturing companies for a higher yield rate and lower cost.
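The GLCM features used for defect classification (contrast and homogeneity) can be computed from a co-occurrence table as in the minimal NumPy sketch below. This is an illustration, not the authors' implementation; the single pixel offset and 8-level quantization are assumptions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized to a joint probability table."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    h, w = q.shape
    P = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Contrast and homogeneity, two of the features fed to the classifier."""
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    homogeneity = np.sum(P / (1.0 + (i - j) ** 2))
    return contrast, homogeneity

rng = np.random.default_rng(1)
flat = np.full((32, 32), 128.0)            # uniform region (e.g. defect-free film)
noisy = rng.integers(0, 256, (32, 32))     # high-variation region
c_flat, h_flat = glcm_features(glcm(flat))
c_noisy, h_noisy = glcm_features(glcm(noisy))
```

A uniform patch yields zero contrast and maximal homogeneity, while a textured patch scores the opposite way, which is what makes these features discriminative for defect types.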

  5. Collaborative classification of hyperspectral and visible images with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Mengmeng; Li, Wei; Du, Qian

    2017-10-01

    Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, the low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task, while visible (VIS) images with high spatial resolution enable high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, a convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. Experiments on two standard data sets demonstrate the better classification performance offered by this framework.
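The decision-fusion step, in which per-class outputs of the HSI and VIS classifiers are combined, can be sketched as a weighted soft vote. This is a generic illustration, not the paper's exact fusion rule; the weight `w` and the toy probability tables are assumptions.

```python
import numpy as np

def fuse_decisions(prob_a, prob_b, w=0.5):
    """Weighted soft fusion of per-class probabilities from two
    sources (e.g. a spectral and a spatial classifier); returns
    the fused class label per sample."""
    fused = w * prob_a + (1.0 - w) * prob_b
    return fused.argmax(axis=1)

# Three samples, three classes: source A is confident about sample 0,
# source B corrects sample 2, where A alone would pick class 1.
pa = np.array([[0.8, 0.1, 0.1],
               [0.2, 0.6, 0.2],
               [0.4, 0.5, 0.1]])
pb = np.array([[0.7, 0.2, 0.1],
               [0.1, 0.7, 0.2],
               [0.1, 0.1, 0.8]])
labels = fuse_decisions(pa, pb)
```

Sample 2 shows the point of fusion: neither source is overruled outright, but the combined evidence flips the decision to class 2.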

  6. Using genetically modified tomato crop plants with purple leaves for absolute weed/crop classification.

    PubMed

    Lati, Ran N; Filin, Sagi; Aly, Radi; Lande, Tal; Levin, Ilan; Eizenberg, Hanan

    2014-07-01

    Weed/crop classification is considered the main problem in developing precise weed-management methodologies, because both crops and weeds share similar hues. Great effort has been invested in the development of classification models, most based on expensive sensors and complicated algorithms. However, satisfactory results are not consistently obtained due to imaging conditions in the field. We report on an innovative approach that combines advances in genetic engineering and robust image-processing methods to detect weeds and distinguish them from crop plants by manipulating the crop's leaf color. We demonstrate this on genetically modified tomato (germplasm AN-113) which expresses a purple leaf color. An autonomous weed/crop classification is performed using an invariant-hue transformation that is applied to images acquired by a standard consumer camera (visible wavelength) and handles variations in illumination intensities. The integration of these methodologies is simple and effective, and classification results were accurate and stable under a wide range of imaging conditions. Using this approach, we simplify the most complicated stage in image-based weed/crop classification models. © 2013 Society of Chemical Industry.
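A hue-based weed/crop separation of the kind described can be sketched with a plain RGB-to-hue conversion and a threshold on a purple hue band. The band limits below are hypothetical, not the paper's calibrated values, and the authors' invariant-hue transformation is more robust to illumination than this simplified sketch.

```python
import numpy as np

def rgb_to_hue(rgb):
    """Hue in degrees [0, 360) for an array of RGB values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    d = np.where(mx - mn == 0, 1.0, mx - mn)  # avoid division by zero for grays
    h = np.select(
        [mx == r, mx == g],
        [(g - b) / d % 6, (b - r) / d + 2],
        default=(r - g) / d + 4)
    return (h * 60.0) % 360.0

def classify_purple(rgb, lo=260.0, hi=320.0):
    """Label pixels whose hue falls in a (hypothetical) purple band
    as crop; everything else, including green vegetation, is non-crop."""
    h = rgb_to_hue(rgb)
    return (h >= lo) & (h <= hi)

# One purple 'crop' pixel and one green 'weed' pixel.
pixels = np.array([[[0.5, 0.1, 0.6],
                    [0.2, 0.7, 0.2]]])
mask = classify_purple(pixels)
```

Because the engineered purple leaves and the green weeds occupy well-separated hue bands, a single threshold suffices where conventional green-on-green separation fails.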

  7. Evaluation of Multiple Kernel Learning Algorithms for Crop Mapping Using Satellite Image Time-Series Data

    NASA Astrophysics Data System (ADS)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2017-09-01

    Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information regarding the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are first constructed from the different images of the SITS data and then combined into a composite kernel using the MKL algorithms. The composite kernel, once constructed, can be used for classification of the data using kernel-based classification algorithms. We compared the computational time and the classification performance of the proposed strategy using different MKL algorithms for the purpose of crop mapping. The considered MKL algorithms are MKL-Sum, SimpleMKL, LPMKL, and Group-Lasso MKL. Experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that the strategy provides better performance than the standard classification algorithm. The results also showed that the optimization method of the MKL algorithm used affects both the computational time and the classification accuracy of the strategy.
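The composite-kernel construction can be sketched as a convex combination of per-image base kernels. Note that the weights below are hand-picked for illustration, whereas the MKL algorithms named above (SimpleMKL, LPMKL, etc.) learn them from data.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """RBF Gram matrix for the samples in the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(images, gammas, weights):
    """Convex combination of per-image kernels, one base kernel per
    acquisition date in the time series. Here the weights are fixed
    (an assumption); an MKL algorithm would optimize them."""
    return sum(w * rbf_kernel(X, g)
               for X, g, w in zip(images, gammas, weights))

rng = np.random.default_rng(2)
series = [rng.normal(size=(10, 4)) for _ in range(3)]  # 3 dates, 10 pixels, 4 bands
K = composite_kernel(series, gammas=[0.5, 0.5, 0.5], weights=[0.2, 0.3, 0.5])
```

The resulting `K` is a valid kernel (symmetric, positive semi-definite) and can be passed directly to a kernel classifier that accepts a precomputed Gram matrix.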

  8. Classification of MR brain images by combination of multi-CNNs for AD diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Danni; Liu, Manhua; Fu, Jianliang; Wang, Yaping

    2017-07-01

    Alzheimer's disease (AD) is an irreversible neurodegenerative disorder with progressive impairment of memory and cognitive functions. Its early diagnosis is crucial for the development of future treatment. Magnetic resonance images (MRI) play an important role in understanding the brain anatomical changes related to AD. Conventional methods extract hand-crafted features such as gray matter volumes and cortical thickness and train a classifier to distinguish AD from other groups. Different from these methods, this paper proposes to construct multiple deep 3D convolutional neural networks (3D-CNNs) to learn various features from local brain images, which are combined to make the final classification for AD diagnosis. First, a number of local image patches are extracted from the whole brain image and a 3D-CNN is built upon each local patch to transform the local image into more compact high-level features. Then, the upper convolution and fully connected layers are fine-tuned to combine the multiple 3D-CNNs for image classification. The proposed method can automatically learn generic features from imaging data for classification. Our method is evaluated using T1-weighted structural MR brain images of 428 subjects, including 199 AD patients and 229 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 87.15% and an AUC (area under the ROC curve) of 92.26% for AD classification, demonstrating promising classification performance.
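The first step, cutting local patches from the whole-brain volume before any per-patch network is applied, can be sketched as follows. The patch size and center coordinates are arbitrary illustrative choices, not the paper's values.

```python
import numpy as np

def extract_patches_3d(volume, centers, size):
    """Cut fixed-size cubic patches around given voxel centers:
    the patch-extraction step that precedes the per-patch 3D-CNNs."""
    r = size // 2
    patches = []
    for z, y, x in centers:
        patches.append(volume[z - r:z + r + 1,
                              y - r:y + r + 1,
                              x - r:x + r + 1])
    return np.stack(patches)

# Toy 'brain' volume with a single bright voxel standing in for anatomy.
vol = np.zeros((32, 32, 32))
vol[16, 16, 16] = 1.0
p = extract_patches_3d(vol, centers=[(16, 16, 16), (8, 8, 8)], size=5)
```

Each patch would then be fed to its own 3D-CNN; only the extraction step is shown here.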

  9. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    NASA Technical Reports Server (NTRS)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison and ANN classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classifications of pattern from remotely sensed data is the time and cost of developing a set of training sites. This reseach compares the use of an ANN back propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.

  10. Ensemble Sparse Classification of Alzheimer’s Disease

    PubMed Central

    Liu, Manhua; Zhang, Daoqiang; Shen, Dinggang

    2012-01-01

    High-dimensional pattern classification methods, e.g., support vector machines (SVM), have been widely investigated for analysis of structural and functional brain images (such as magnetic resonance imaging (MRI)) to assist the diagnosis of Alzheimer’s disease (AD), including its prodromal stage, i.e., mild cognitive impairment (MCI). Most existing classification methods extract features from neuroimaging data and then construct a single classifier to perform classification. However, due to noise and the small sample size of neuroimaging data, it is challenging to train a single global classifier that is robust enough to achieve good classification performance. In this paper, instead of building a single global classifier, we propose a local patch-based subspace ensemble method which builds multiple individual classifiers based on different subsets of local patches and then combines them for more accurate and robust classification. Specifically, to capture the local spatial consistency, each brain image is partitioned into a number of local patches and a subset of patches is randomly selected from the patch pool to build a weak classifier. Here, the sparse representation-based classification (SRC) method, which has been shown to be effective for classification of image data (e.g., faces), is used to construct each weak classifier. Then, multiple weak classifiers are combined to make the final decision. We evaluate our method on 652 subjects (including 198 AD patients, 225 MCI patients and 229 normal controls) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using MR images. The experimental results show that our method achieves an accuracy of 90.8% and an area under the ROC curve (AUC) of 94.86% for AD classification, and an accuracy of 87.85% and an AUC of 92.90% for MCI classification, demonstrating a very promising performance compared with the state-of-the-art methods for AD/MCI classification using MR images. PMID:22270352
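The final combination of weak classifiers can be sketched as a majority vote over their label predictions. This illustrates only the voting step, not the sparse-representation classifiers themselves; the toy prediction table is an assumption.

```python
import numpy as np

def majority_vote(predictions):
    """Combine weak-classifier label predictions (rows = classifiers,
    columns = subjects) into one ensemble decision per subject."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    # Count votes per class for each subject, then take the winner.
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)

# Three weak classifiers voting on three subjects (0 = control, 1 = patient).
preds = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [1, 1, 0]])
fused = majority_vote(preds)
```

Subjects 0 and 2 show the robustness argument from the abstract: a single mistaken weak classifier is outvoted by the ensemble.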

  11. Neuroanatomical phenotyping of the mouse brain with three-dimensional autofluorescence imaging

    PubMed Central

    Wong, Michael D.; Dazai, Jun; Altaf, Maliha; Mark Henkelman, R.; Lerch, Jason P.; Nieman, Brian J.

    2012-01-01

    The structural organization of the brain is important for normal brain function and is critical to understand in order to evaluate changes that occur during disease processes. Three-dimensional (3D) imaging of the mouse brain is necessary to appreciate the spatial context of structures within the brain. In addition, the small scale of many brain structures necessitates resolution at the ∼10 μm scale. 3D optical imaging techniques, such as optical projection tomography (OPT), have the ability to image intact large specimens (1 cm3) with ∼5 μm resolution. In this work we assessed the potential of autofluorescence optical imaging methods, and specifically OPT, for phenotyping the mouse brain. We found that both specimen size and fixation methods affected the quality of the OPT image. Based on these findings we developed a specimen preparation method to improve the images. Using this method we assessed the potential of optical imaging for phenotyping. Phenotypic differences between wild-type male and female mice were quantified using computer-automated methods. We found that optical imaging of the endogenous autofluorescence in the mouse brain allows for 3D characterization of neuroanatomy and detailed analysis of brain phenotypes. This will be a powerful tool for understanding mouse models of disease and development and is a technology that fits easily within the workflow of biology and neuroscience labs. PMID:22718750

  12. Plant species classification using flower images—A comparative study of local feature representations

    PubMed Central

    Seeland, Marco; Rzanny, Michael; Alaqraa, Nedal; Wäldchen, Jana; Mäder, Patrick

    2017-01-01

    Steady improvements of image description methods induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years and several of them have already been studied for plant species classification. However, results of these studies are selective in the evaluated steps of a classification pipeline, in the utilized datasets for evaluation, and in the compared baseline methods. No study is available that evaluates the main competing methods for building an image representation on the same datasets, allowing for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters towards classification accuracy. The investigated methods span from detection, extraction, fusion, pooling, to encoding of local features for quantifying shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102 as well as our own Jena Flower 30 dataset for our experiments. Findings show large differences among the various studied techniques and that a wisely chosen orchestration of them allows for high accuracy in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification results at lower computational costs compared to commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially while preserving spatial correspondence to gray-level features. As a result, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification. PMID:28234999

  13. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    ,

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  14. An automated field phenotyping pipeline for application in grapevine research.

    PubMed

    Kicherer, Anna; Herzog, Katja; Pflanz, Michael; Wieland, Markus; Rüger, Philipp; Kecke, Steffen; Kuhlmann, Heiner; Töpfer, Reinhard

    2015-02-26

    Due to its perennial nature and size, the acquisition of phenotypic data in grapevine research is almost exclusively restricted to the field and done by visual estimation. This kind of evaluation procedure is limited by time, cost and the subjectivity of records. As a consequence, objectivity, automation and more precision of phenotypic data evaluation are needed to increase the number of samples, manage grapevine repositories, enable genetic research of new phenotypic traits and, therefore, increase the efficiency in plant research. In the present study, an automated field phenotyping pipeline was set up and applied in a plot of genetic resources. The application of the PHENObot allows image acquisition from at least 250 individual grapevines per hour directly in the field without user interaction. Data management is handled by a database (IMAGEdata). The automatic image analysis tool BIVcolor (Berries in Vineyards-color) permitted the collection of precise phenotypic data of two important fruit traits, berry size and color, within a large set of plants. The application of the PHENObot represents an automated tool for high-throughput sampling of image data in the field. The automated analysis of these images facilitates the generation of objective and precise phenotypic data on a larger scale.

  15. If the skull fits: magnetic resonance imaging and microcomputed tomography for combined analysis of brain and skull phenotypes in the mouse

    PubMed Central

    Blank, Marissa C.; Roman, Brian B.; Henkelman, R. Mark; Millen, Kathleen J.

    2012-01-01

    The mammalian brain and skull develop concurrently in a coordinated manner, consistently producing a brain and skull that fit tightly together. It is common that abnormalities in one are associated with related abnormalities in the other. However, this is not always the case. A complete characterization of the relationship between brain and skull phenotypes is necessary to understand the mechanisms that cause them to be coordinated or divergent and to provide perspective on the potential diagnostic or prognostic significance of brain and skull phenotypes. We demonstrate the combined use of magnetic resonance imaging and microcomputed tomography for analysis of brain and skull phenotypes in the mouse. Co-registration of brain and skull images allows comparison of the relationship between phenotypes in the brain and those in the skull. We observe a close fit between the brain and skull of two genetic mouse models that both show abnormal brain and skull phenotypes. Application of these three-dimensional image analyses in a broader range of mouse mutants will provide a map of the relationships between brain and skull phenotypes generally and allow characterization of patterns of similarities and differences. PMID:22947655

  17. Image patch-based method for automated classification and detection of focal liver lesions on CT

    NASA Astrophysics Data System (ADS)

    Safdari, Mustafa; Pasari, Raghav; Rubin, Daniel; Greenspan, Hayit

    2013-03-01

    We developed a method for automated classification and detection of liver lesions in CT images based on image patch representation and bag-of-visual-words (BoVW). BoVW analysis has been extensively used in the computer vision domain to analyze scenery images. In the current work we discuss how it can be used for liver lesion classification and detection. The methodology includes building a dictionary for a training set using local descriptors and representing a region in the image using a visual word histogram. Two tasks are described: a classification task, for lesion characterization, and a detection task, in which a scan window moves across the image and is determined to be normal liver tissue or a lesion. Data: In the classification task, 73 CT images of liver lesions were used: 25 images having cysts, 24 having metastases and 24 having hemangiomas. A radiologist circumscribed the lesions, creating a region of interest (ROI) in each of the images. He then provided the diagnosis, which was established either by biopsy or clinical follow-up. Thus our data set comprises 73 images and 73 ROIs. In the detection task, a radiologist drew ROIs around each liver lesion and two regions of normal liver, for a total of 159 liver lesion ROIs and 146 normal liver ROIs. The radiologist also demarcated the liver boundary. Results: Classification accuracy of more than 95% was obtained. In the detection task, the F1 score obtained is 0.76, with recall of 84% and precision of 73%. The results show the ability to detect lesions, regardless of shape.
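The core BoVW step, assigning each local descriptor to its nearest dictionary word and histogramming the assignments into a region representation, can be sketched as follows. The two-word dictionary and the descriptors are toy values, not the paper's learned dictionary.

```python
import numpy as np

def bovw_histogram(descriptors, dictionary):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-frequency histogram for the region."""
    # Squared Euclidean distance from every descriptor to every word.
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()

dictionary = np.array([[0.0, 0.0],
                       [1.0, 1.0]])          # toy 2-word dictionary
desc = np.array([[0.1, 0.0],
                 [0.9, 1.1],
                 [1.0, 0.9]])                # 3 local descriptors from one ROI
h = bovw_histogram(desc, dictionary)
```

The resulting histogram is the fixed-length vector that represents an ROI (or a scan window, in the detection task) for the downstream classifier.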

  18. The Research on Dryland Crop Classification Based on the Fusion of SENTINEL-1A SAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; He, J.; Wen, Q.; Yu, F.; Gu, X.; Wang, Z.

    2018-04-01

    In recent years, the rapid upgrading and improvement of SAR sensors provide beneficial complements to traditional optical remote sensing in terms of theory, technology and data. In this paper, Sentinel-1A SAR data and GF-1 optical data were selected for image fusion, with emphasis placed on dryland crop classification under a complex crop planting structure, taking corn and cotton as the research objects. Considering the differences among various data fusion methods, the principal component analysis (PCA), Gram-Schmidt (GS), Brovey and wavelet transform (WT) methods were compared with each other, and the GS and Brovey methods proved to be more applicable in the study area. The classification was then conducted based on an object-oriented technique. For the GS and Brovey fusion images and the GF-1 optical image, the nearest-neighbour algorithm was adopted to perform supervised classification with the same training samples. Based on the sample plots in the study area, an accuracy assessment was subsequently conducted. The overall accuracy and kappa coefficient of the fusion images were all higher than those of the GF-1 optical image, and the GS method performed better than the Brovey method. In particular, the overall accuracy of the GS fusion image was 79.8%, and the Kappa coefficient was 0.644. Thus, the results showed that GS and Brovey fusion images were superior to optical images for dryland crop classification. This study suggests that the fusion of SAR and optical images is reliable for dryland crop classification under a complex crop planting structure.

  19. A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading

    NASA Astrophysics Data System (ADS)

    Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.

    2018-05-01

    Image processing methods have been used in non-destructive tests of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time has exceeded its lifetime, it is predicted that this will affect tomato classification. The objective of this study was to determine the minimum light levels that affect classification accuracy. The study was conducted by varying the light level from minimum to maximum on tomatoes in the image capturing box and then investigating its effects on image characteristics. Results showed that light intensity affects two variables that are important for classification, namely the area and color of the captured image. The image processing program was able to correctly determine the weight and classification of tomatoes when the light level was between 30 lx and 140 lx.

  20. Testing random forest classification for identifying lava flows and mapping age groups on a single Landsat 8 image

    NASA Astrophysics Data System (ADS)

    Li, Long; Solana, Carmen; Canters, Frank; Kervyn, Matthieu

    2017-10-01

    Mapping lava flows using satellite images is an important application of remote sensing in volcanology. Several volcanoes have been mapped through remote sensing using a wide range of data, from optical to thermal infrared and radar images, using techniques such as manual mapping, supervised/unsupervised classification, and elevation subtraction. So far, spectral-based mapping applications mainly focus on the use of traditional pixel-based classifiers, without much investigation into the added value of object-based approaches and into the advantages of using machine learning algorithms. In this study, Nyamuragira, characterized by a series of > 20 overlapping lava flows erupted over the last century, was used as a case study. The random forest classifier was tested to map lava flows based on pixels and objects. Image classification was conducted for the 20 individual flows and for 8 groups of flows of similar age using a Landsat 8 image and a DEM of the volcano, both at 30-meter spatial resolution. Results show that object-based classification produces maps with continuous and homogeneous lava surfaces, in agreement with the physical characteristics of lava flows, while lava flows mapped through pixel-based classification are heterogeneous and fragmented, including much "salt and pepper" noise. In terms of accuracy, both pixel-based and object-based classification perform well, but the former results in higher accuracies than the latter, except for mapping lava flow age groups without using topographic features. It is concluded that despite spectral similarity, lava flows of contrasting age can be well discriminated and mapped by means of image classification. The classification approach demonstrated in this study only requires easily accessible image data and can be applied to other volcanoes as well if there is sufficient information to calibrate the mapping.
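The "salt and pepper" noise attributed to pixel-based classification is commonly suppressed with a post-classification majority filter, sketched below. This is a generic illustration of that standard cleanup step, not a procedure taken from the study.

```python
import numpy as np

def majority_filter(labels, size=3):
    """Replace each pixel's class label with the most frequent label
    in its size x size neighborhood, suppressing isolated
    'salt and pepper' pixels in a classified map."""
    r = size // 2
    padded = np.pad(labels, r, mode='edge')
    out = np.empty_like(labels)
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + size, x:x + size].ravel()
            out[y, x] = np.bincount(win).argmax()
    return out

noisy = np.zeros((8, 8), dtype=int)
noisy[4, 4] = 1                      # one isolated misclassified pixel
clean = majority_filter(noisy)
```

Object-based classification sidesteps the need for this filtering by classifying homogeneous segments rather than individual pixels, which is consistent with the smoother maps reported above.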

  1. Hybrid Optimization of Object-Based Classification in High-Resolution Images Using Continuous Ant Colony Algorithm with Emphasis on Building Detection

    NASA Astrophysics Data System (ADS)

    Tamimi, E.; Ebadi, H.; Kiani, A.

    2017-09-01

    Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using additional features can improve accuracy. However, adding features also raises the probability of including dependent features, which leads to accuracy reduction. In addition, some parameters must be determined for Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine the classification parameters and select independent features according to the image type. An optimization algorithm is an efficient method to solve this problem. On the other hand, pixel-based classification faces several challenges, such as producing salt-and-pepper results and high computational time for high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying the continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high automation level, independence of image scene and type, reduced post-processing for building edge reconstruction, and accuracy improvement. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. Also, with the proposed method, the Kappa coefficient was improved by 6% compared with RF classification. The processing time of the proposed method was relatively low because the unit of image analysis was the image object. These results show the superiority of the proposed method in terms of time and accuracy.

  2. Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques

    PubMed Central

    Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M.; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V.; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L.; Bilello, Michel; O'Rourke, Donald M.; Davatzikos, Christos

    2016-01-01

    Background MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). Methods One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Results Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. Conclusions By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood–brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients. PMID:26188015

  3. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    PubMed

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Impact of atmospheric correction and image filtering on hyperspectral classification of tree species using support vector machine

    NASA Astrophysics Data System (ADS)

    Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko

    2015-01-01

    Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often applied, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at the pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.
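The NDVI green-vegetation filter and near-infrared shade filter evaluated above amount to simple band arithmetic and thresholding. A minimal sketch, with illustrative threshold values that are not the study's:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, safe against zero division."""
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

def vegetation_shade_mask(nir, red, ndvi_min=0.3, nir_min=0.2):
    """Keep pixels that look like green vegetation (NDVI above a threshold)
    and are not shaded (NIR reflectance above a threshold)."""
    v = ndvi(nir, red)
    return (v >= ndvi_min) & (nir >= nir_min)

# Tiny demo: only the top-left pixel is vegetated and sunlit.
nir = np.array([[0.5, 0.05], [0.4, 0.5]])
red = np.array([[0.1, 0.04], [0.35, 0.5]])
mask = vegetation_shade_mask(nir, red)
```

Pixels failing the mask would be excluded before species classification.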

  5. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performance in hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations, so the automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  6. Computer-aided classification of optical images for diagnosis of osteoarthritis in the finger joints.

    PubMed

    Zhang, Jiang; Wang, James Z; Yuan, Zhen; Sobel, Eric S; Jiang, Huabei

    2011-01-01

    This study presents a computer-aided classification method to distinguish osteoarthritis finger joints from healthy ones based on the functional images captured by x-ray guided diffuse optical tomography. Three imaging features, joint space width, optical absorption, and scattering coefficients, are employed to train a Least Squares Support Vector Machine (LS-SVM) classifier for osteoarthritis classification. The 10-fold validation results show that all osteoarthritis joints are clearly identified and all healthy joints are ruled out by the LS-SVM classifier. The best sensitivity, specificity, and overall accuracy of the classification by experienced technicians based on manual calculation of optical properties and visual examination of optical images are only 85%, 93%, and 90%, respectively. Therefore, our LS-SVM based computer-aided classification is a considerably improved method for osteoarthritis diagnosis.
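The LS-SVM classifier described above differs from a standard SVM in that training reduces to solving a linear system (Suykens' formulation) rather than a quadratic program. A minimal sketch of that system, with toy three-feature data standing in for the joint-space-width/absorption/scattering measurements:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, gamma=1.0):
    """LS-SVM training: solve the (n+1)x(n+1) linear system
    [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

# Toy data: three features per joint; +1 = osteoarthritis, -1 = healthy.
X = np.array([[2.0, 0.9, 1.1], [2.2, 1.0, 1.2],
              [3.5, 0.3, 0.4], [3.3, 0.2, 0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
```

The feature values above are illustrative; the study's actual optical measurements and the 10-fold validation loop are omitted.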

  7. Fish Karyome: A karyological information network database of Indian Fishes.

    PubMed

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra

    2012-01-01

    'Fish Karyome', a database of karyological information on Indian fishes, has been developed; it serves as a central source of karyotype data about Indian fishes compiled from the published literature. Fish Karyome is intended to serve as a liaison tool for researchers and contains karyological information for 171 of the 2438 finfish species reported in India; it is publicly available via the World Wide Web. The database provides information on chromosome number, morphology, sex chromosomes, karyotype formula, and cytogenetic markers. Additionally, it provides phenotypic information that includes species name, classification, locality of sample collection, common name, local name, sex, geographical distribution, and IUCN Red List status. Fish and karyotype images and references for the 171 finfish species are also included in the database. Fish Karyome was developed using SQL Server 2008, a relational database management system, Microsoft's ASP.NET 2008, and Macromedia's Flash technology under the Windows 7 operating environment. The system also enables users to input new information and images into the database, and to search and view the information and images of interest using various search options. Fish Karyome has a wide range of applications in species characterization and identification, sex determination, chromosomal mapping, karyo-evolution, and systematics of fishes.

  8. Immunophenotype Discovery, Hierarchical Organization, and Template-Based Classification of Flow Cytometry Samples

    DOE PAGES

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-08-31

    We describe algorithms for discovering immunophenotypes from large collections of flow cytometry samples and using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (a group of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust since templates describe phenotypic signatures common to cell populations in several samples while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several datasets, including one representing a healthy immune system and one of acute myeloid leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML and were able to distinguish acute promyelocytic leukemia (APL) samples with the markers provided. Clinically, this is helpful since APL has a different treatment regimen from other subtypes of AML.
    Core algorithms used in our data analysis are available in the flowMatch package at www.bioconductor.org. It has been downloaded nearly 6,000 times since 2014.

  10. Superpixel-based spectral classification for the detection of head and neck cancer with hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chung, Hyunkoo; Lu, Guolan; Tian, Zhiqiang; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2016-03-01

    Hyperspectral imaging (HSI) is an emerging imaging modality for medical applications. HSI acquires two-dimensional images at various wavelengths, and the combination of spectral and spatial information provides quantitative information for cancer detection and diagnosis. This paper proposes using superpixels, principal component analysis (PCA), and a support vector machine (SVM) to distinguish regions of tumor from healthy tissue. The classification method uses two principal components derived from the hyperspectral images and obtains an average sensitivity of 93% and an average specificity of 85% for 11 mice. The hyperspectral imaging technology and classification method can have various applications in cancer research and management.
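The dimensionality-reduction step of the pipeline above can be sketched with PCA via SVD. The nearest-centroid decision below is a lightweight stand-in for the paper's SVM, and the 6-band spectra are synthetic:

```python
import numpy as np

def pca_fit_transform(X, n_components=2):
    """Project spectra onto the top principal components via SVD."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T            # projection matrix (bands x components)
    return Xc @ W, mu, W

# Toy "hyperspectral" pixels: 6 bands, two tissue classes.
rng = np.random.default_rng(0)
tumor = rng.normal(0.8, 0.05, size=(20, 6))
healthy = rng.normal(0.2, 0.05, size=(20, 6))
X = np.vstack([tumor, healthy])
y = np.array([1] * 20 + [0] * 20)

Z, mu, W = pca_fit_transform(X, 2)

# Nearest-centroid decision in the 2-D PCA space (SVM stand-in).
c1, c0 = Z[y == 1].mean(0), Z[y == 0].mean(0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

In the paper the same projection would be applied per superpixel rather than per pixel.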

  11. Plus disease in retinopathy of prematurity: a continuous spectrum of vascular abnormality as basis of diagnostic variability

    PubMed Central

    Campbell, J. Peter; Kalpathy-Cramer, Jayashree; Erdogmus, Deniz; Tian, Peng; Kedarisetti, Dharanish; Moleta, Chace; Reynolds, James D.; Hutcheson, Kelly; Shapiro, Michael J.; Repka, Michael X.; Ferrone, Philip; Drenser, Kimberly; Horowitz, Jason; Sonmez, Kemal; Swan, Ryan; Ostmo, Susan; Jonas, Karyn E.; Chan, R.V. Paul; Chiang, Michael F.

    2016-01-01

    Objective To identify patterns of inter-expert discrepancy in plus disease diagnosis in retinopathy of prematurity (ROP). Design We developed two datasets of clinical images of varying disease severity (100 images and 34 images) as part of the Imaging and Informatics in ROP study, and determined a consensus reference standard diagnosis (RSD) for each image, based on 3 independent image graders and the clinical exam. We recruited 8 expert ROP clinicians to classify these images and compared the distribution of classifications between experts and the RSD. Subjects, Participants, and/or Controls Images obtained during routine ROP screening in neonatal intensive care units. 8 participating experts with >10 years of clinical ROP experience and >5 peer-reviewed ROP publications. Methods, Intervention, or Testing Expert classification of images of plus disease in ROP. Main Outcome Measures Inter-expert agreement (weighted kappa statistic), and agreement and bias on ordinal classification between experts (ANOVA) and the RSD (percent agreement). Results There was variable inter-expert agreement on diagnostic classifications between the 8 experts and the RSD (weighted kappa 0–0.75, mean 0.30). RSD agreement ranged from 80–94% for the dataset of 100 images, and 29–79% for the dataset of 34 images. However, when images were ranked in order of disease severity (by average expert classification), the pattern of expert classification revealed a consistent systematic bias for each expert consistent with unique cut points for the diagnosis of plus disease and pre-plus disease. The two-way ANOVA model suggested a highly significant effect of both image and user on the average score (P < 0.05, adjusted R2 = 0.82 for dataset A, and P < 0.05 and adjusted R2 = 0.6615 for dataset B). 
Conclusions and Relevance There is wide variability in the classification of plus disease by ROP experts, which occurs because experts have different “cut-points” for the amounts of vascular abnormality required for presence of plus and pre-plus disease. This has important implications for research, teaching and patient care for ROP, and suggests that a continuous ROP plus disease severity score may more accurately reflect the behavior of expert ROP clinicians, and may better standardize classification in the future. PMID:27591053
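The weighted kappa statistic used above to quantify inter-expert agreement can be computed directly from two graders' ordinal ratings. A sketch with quadratic weights (the specific weighting scheme is an assumption, since the abstract does not state it):

```python
import numpy as np

def quadratic_weighted_kappa(a, b, k):
    """Weighted kappa with quadratic penalties for ordinal ratings 0..k-1:
    1 - sum(w*O) / sum(w*E), where O is the observed confusion matrix,
    E the chance-expected matrix, and w_ij = (i-j)^2 / (k-1)^2."""
    a, b = np.asarray(a), np.asarray(b)
    O = np.zeros((k, k))
    for i, j in zip(a, b):
        O[i, j] += 1
    E = np.outer(np.bincount(a, minlength=k),
                 np.bincount(b, minlength=k)) / len(a)
    w = (np.arange(k)[:, None] - np.arange(k)[None, :]) ** 2 / (k - 1) ** 2
    return 1.0 - (w * O).sum() / (w * E).sum()

# Two graders rating images as 0 = normal, 1 = pre-plus, 2 = plus.
expert = [0, 0, 1, 1, 2, 2]
rsd = [0, 0, 1, 2, 2, 2]
kappa = quadratic_weighted_kappa(expert, rsd, 3)
```

One near-miss out of six ratings gives a kappa close to, but below, 1.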

  12. Efficiency of the spectral-spatial classification of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Borzov, S. M.; Potaturkin, O. I.

    2017-01-01

    The efficiency of methods of the spectral-spatial classification of similarly looking types of vegetation on the basis of hyperspectral data of remote sensing of the Earth, which take into account local neighborhoods of analyzed image pixels, is experimentally studied. Algorithms that involve spatial pre-processing of the raw data and post-processing of pixel-based spectral classification maps are considered. Results obtained both for a large-size hyperspectral image and for its test fragment with different methods of training set construction are reported. The classification accuracy in all cases is estimated through comparisons of ground-truth data and classification maps formed by using the compared methods. The reasons for the differences in these estimates are discussed.
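The post-processing of pixel-based spectral classification maps evaluated above is often as simple as a local majority vote over each pixel's neighborhood. An illustrative sketch (not the authors' specific algorithm):

```python
import numpy as np
from collections import Counter

def majority_filter(label_map, size=3):
    """Replace each label by the most frequent label in its local window,
    a simple spatial post-processing of a pixel-wise classification map."""
    r = size // 2
    h, w = label_map.shape
    out = label_map.copy()
    for i in range(h):
        for j in range(w):
            window = label_map[max(0, i - r):i + r + 1,
                               max(0, j - r):j + r + 1]
            out[i, j] = Counter(window.ravel().tolist()).most_common(1)[0][0]
    return out

# A lone misclassified pixel inside a homogeneous field is smoothed away.
labels = np.ones((5, 5), dtype=int)
labels[2, 2] = 3
smoothed = majority_filter(labels)
```

Such filtering helps with similarly looking vegetation classes precisely because isolated pixel-level errors are overruled by their neighborhood.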

  13. Ethnicity identification from face images

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.

    2004-08-01

    Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition: used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
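The multiscale LDA ensemble with product-rule combination can be sketched as follows, using synthetic two-class data in place of face features at each scale (the Gaussian posterior model on the LDA axis is an assumption for illustration):

```python
import numpy as np

def lda_direction(X, y):
    """Fisher discriminant direction w = Sw^-1 (mu1 - mu0)."""
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0.T) + np.cov(X1.T) + 1e-6 * np.eye(X.shape[1])
    return np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))

def class_posteriors(z, z_train, y):
    """1-D Gaussian class likelihoods on the LDA axis, equal priors."""
    post = []
    for c in (0, 1):
        m, s = z_train[y == c].mean(), z_train[y == c].std() + 1e-6
        post.append(np.exp(-0.5 * ((z - m) / s) ** 2) / s)
    post = np.array(post)
    return post / post.sum(0)

rng = np.random.default_rng(1)
y = np.array([0] * 30 + [1] * 30)
# Two "scales" of the same faces: feature vectors at different resolutions.
scales = [np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(1.5, 1, (30, 4))])
          for _ in range(2)]

# Product rule: multiply the per-scale posteriors, then pick the larger.
combined = np.ones((2, len(y)))
for Xs in scales:
    w = lda_direction(Xs, y)
    combined *= class_posteriors(Xs @ w, Xs @ w, y)
pred = combined.argmax(0)
accuracy = (pred == y).mean()
```

Each scale contributes an independent opinion, and the product rule sharpens the combined posterior relative to any single scale.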

  14. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor

    PubMed Central

    Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung

    2018-01-01

    In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we proposed a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447

  15. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach in order to increase the classification accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as that of the whole classification chain, is high, but it is reduced, achieving real-time behavior for some applications, through computation on NVIDIA multi-GPU platforms.
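The denoising step described (separable 2D DWT, coefficient thresholding, inverse transform) can be illustrated with a single-level Haar transform; the recursive application, the EMP, and the GPU mapping of the paper are omitted:

```python
import numpy as np

def haar2d(x):
    """One level of a separable, orthonormal 2-D Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # rows: low-pass
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def denoise(x, threshold):
    """Hard-threshold the detail subbands, keep the approximation."""
    ll, lh, hl, hh = haar2d(x)
    lh, hl, hh = [np.where(np.abs(c) < threshold, 0.0, c)
                  for c in (lh, hl, hh)]
    return ihaar2d(ll, lh, hl, hh)

# A flat component plus a small spike: thresholding suppresses the spike.
img = np.full((4, 4), 5.0)
img[1, 2] += 0.1
clean = denoise(img, threshold=0.5)
```

With threshold 0 the round trip is exact, which is a useful sanity check on any DWT implementation.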

  16. Convolutional neural network with transfer learning for rice type classification

    NASA Astrophysics Data System (ADS)

    Patel, Vaibhav Amit; Joshi, Manjunath V.

    2018-04-01

    Presently, rice type is identified manually by humans, which is time-consuming and error-prone; doing it by machine makes it faster and more accurate. This paper proposes a deep learning based method for classification of rice types. We propose two methods to classify the rice types. In the first method, we train a deep convolutional neural network (CNN) using the given segmented rice images. In the second method, we train a combination of a pretrained VGG16 network and the proposed method, using transfer learning, in which the weights of a pretrained network are used to achieve better accuracy. Our approach can also be used for classification of rice grain as broken or fine. We train a 5-class model for classifying rice types using 4000 training images and another 2-class model for the classification of broken and normal rice using 1600 training images. We observe that, despite having distinct rice images, our architecture pretrained on ImageNet data boosts classification accuracy significantly.

  17. Color Image Classification Using Block Matching and Learning

    NASA Astrophysics Data System (ADS)

    Kondo, Kazuki; Hotta, Seiji

    In this paper, we propose block matching and learning for color image classification. In our method, training images are partitioned into small blocks. Given a test image, it is also partitioned into small blocks, and mean blocks corresponding to each test block are calculated from neighboring training blocks. Our method classifies a test image into the class that has the shortest total sum of distances between mean blocks and test blocks. We also propose a learning method for reducing the memory requirement. Experimental results show that our classification outperforms other classifiers such as a support vector machine with a bag of keypoints.
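The block-matching rule above can be sketched as follows. This brute-force, grayscale version is an assumption-laden simplification: it drops the color channels and the memory-reducing learning step, and builds each per-class mean block from the k nearest training blocks:

```python
import numpy as np

def to_blocks(img, b):
    """Partition an image into non-overlapping b x b blocks (as vectors)."""
    h, w = img.shape
    return np.array([img[i:i + b, j:j + b].ravel()
                     for i in range(0, h, b) for j in range(0, w, b)])

def classify(test_img, train_imgs, train_labels, b=2, k=3):
    """For each test block, build a per-class mean block from its k nearest
    training blocks; assign the class with the smallest total distance."""
    test_blocks = to_blocks(test_img, b)
    classes = sorted(set(train_labels))
    totals = []
    for c in classes:
        pool = np.vstack([to_blocks(im, b)
                          for im, l in zip(train_imgs, train_labels) if l == c])
        total = 0.0
        for t in test_blocks:
            d = np.linalg.norm(pool - t, axis=1)
            mean_block = pool[np.argsort(d)[:k]].mean(0)
            total += np.linalg.norm(t - mean_block)
        totals.append(total)
    return classes[int(np.argmin(totals))]

# Toy 4x4 grayscale images: a bright class versus a dark class.
bright = [np.full((4, 4), v) for v in (0.9, 0.8)]
dark = [np.full((4, 4), v) for v in (0.1, 0.2)]
label = classify(np.full((4, 4), 0.85), bright + dark, ["b", "b", "d", "d"])
```

The test image's blocks sit closest to the bright class's mean blocks, so the total distance decides in its favor.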

  18. Subtyping of polyposis nasi: phenotypes, endotypes and comorbidities.

    PubMed

    Koennecke, Michael; Klimek, Ludger; Mullol, Joaquim; Gevaert, Philippe; Wollenberg, Barbara

    2018-01-01

    Chronic rhinosinusitis (CRS) is a heterogeneous, multifactorial inflammatory disease of the nasal and paranasal mucosa. It has not been possible to date to develop an internationally standardized, uniform classification for this disorder. A phenotype classification into CRS with nasal polyps (CRSwNP) and without nasal polyps (CRSsNP) is usually made. However, a large number of studies have shown that there are also different endotypes of CRS within these phenotypes, with different pathophysiologies of chronic inflammation of the nasal mucosa. This review describes the central immunological processes in nasal polyps, as well as the impact of related diseases on the inflammatory profile of nasal polyps. The current knowledge on the immunological and molecular processes of CRS, in particular CRSwNP and its classification into specific endotypes, was put together by means of a structured literature search in Medline, PubMed, the national and international guideline registers, and the Cochrane Library. Based on the current literature, the different immunological processes in CRS and nasal polyps were elaborated and a graphical representation in the form of an immunological network developed. In addition, different inflammatory profiles can be found in CRSwNP depending on related diseases, such as bronchial asthma, cystic fibrosis (CF), or NSAID-Exacerbated Respiratory Disease (N‑ERD). The identification of different endotypes of CRSwNP may help to improve diagnostics and develop novel individual treatment approaches in CRSwNP.

  19. [Object-oriented aquatic vegetation extraction approach based on visible vegetation indices].

    PubMed

    Jing, Ran; Deng, Lei; Zhao, Wen Ji; Gong, Zhao Ning

    2016-05-01

    Using the estimation of scale parameters (ESP) image segmentation tool to determine the ideal segmentation scale, the optimal segmented image was created by the multi-scale segmentation method. Based on the visible vegetation indices derived from mini-UAV imaging data, we chose a set of optimal vegetation indices from a series of visible vegetation indices and built a decision tree rule. A membership function was used to automatically classify the study area, and an aquatic vegetation map was generated. The results showed that the overall accuracy of supervised classification was 53.7%, while the overall accuracy of object-oriented image analysis (OBIA) was 91.7%. Compared with the pixel-based supervised classification method, the OBIA method significantly improved the image classification result and further increased the accuracy of extracting aquatic vegetation. The Kappa value of supervised classification was 0.4, and the Kappa value based on OBIA was 0.9. The experimental results demonstrated that extracting aquatic vegetation using visible vegetation indices derived from mini-UAV data together with the OBIA method is feasible and could be applied in other physically similar areas.
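The abstract does not name the specific visible vegetation indices used; the excess-green index (ExG), a widely used visible-band index for RGB-only UAV imagery, illustrates the idea and is offered here purely as an example:

```python
import numpy as np

def excess_green(r, g, b):
    """Excess-green index ExG = 2g - r - b computed on chromatic
    (sum-normalized) RGB bands; vegetation pixels score high."""
    total = r + g + b
    total = np.where(total == 0, 1.0, total)
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

# Vegetation pixels are green-dominant; water/soil pixels are not.
r = np.array([0.2, 0.3])
g = np.array([0.6, 0.3])
b = np.array([0.2, 0.4])
exg = excess_green(r, g, b)
veg_mask = exg > 0.1       # illustrative threshold, not from the paper
```

A decision tree over several such indices, applied per segment rather than per pixel, is the essence of the OBIA workflow described.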

  20. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation and contrast of the spatial structures present in the image. Then the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines using the available spectral information and the extracted spatial information. Spatial post-processing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple classifier system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.

  1. Distinct subtypes of behavioral-variant frontotemporal dementia based on patterns of network degeneration

    PubMed Central

    Ranasinghe, Kamalini G; Rankin, Katherine P; Pressman, Peter S; Perry, David C; Lobach, Iryna V; Seeley, William W; Coppola, Giovanni; Karydas, Anna M; Grinberg, Lea T; Shany-Ur, Tal; Lee, Suzee E; Rabinovici, Gil D; Rosen, Howard J; Gorno-Tempini, Maria Luisa; Boxer, Adam L; Miller, Zachary A; Chiong, Winston; DeMay, Mary; Kramer, Joel H; Possin, Katherine L; Sturm, Virginia E; Bettcher, Brianne M; Neylan, Michael; Zackey, Diana D; Nguyen, Lauren A; Ketelle, Robin; Block, Nikolas; Wu, Teresa Q; Dallich, Alison; Russek, Natanya; Caplan, Alyssa; Geschwind, Daniel H; Vossel, Keith A; Miller, Bruce L

    2016-01-01

    Importance Clearer delineation of the phenotypic heterogeneity within behavioral variant frontotemporal dementia (bvFTD) will help uncover underlying biological mechanisms, and will improve clinicians’ ability to predict disease course and design targeted management strategies. Objective To identify subtypes of bvFTD syndrome based on distinctive patterns of atrophy defined by selective vulnerability of specific functional networks targeted in bvFTD, using statistical classification approaches. Design, Setting and Participants In this retrospective observational study, 104 patients meeting the Frontotemporal Dementia Consortium consensus criteria for bvFTD were evaluated at the Memory and Aging Center of Department of Neurology at University of California, San Francisco. Patients underwent a multidisciplinary clinical evaluation, including clinical demographics, genetic testing, symptom evaluation, neurological exam, neuropsychological bedside testing, and socioemotional assessments. Ninety patients underwent structural Magnetic Resonance Imaging at their earliest evaluation at the memory clinic. From each patient's structural imaging, the mean volumes of 18 regions of interest (ROI) comprising the functional networks specifically vulnerable in bvFTD, including the ‘salience network’ (SN), with key nodes in the frontoinsula and pregenual anterior cingulate, and the ‘semantic appraisal network’ (SAN) anchored in the anterior temporal lobe and subgenual cingulate, were estimated. Principal component and cluster analyses of ROI volumes were used to identify patient clusters with anatomically distinct atrophy patterns. Main Outcome Measures We evaluated brain morphology and other clinical features including presenting symptoms, neurologic exam signs, neuropsychological performance, rate of dementia progression, and socioemotional function in each patient cluster. 
    Results We identified four subgroups of bvFTD patients with distinct anatomic patterns of network degeneration: two salience network–predominant subgroups, frontal/temporal (SN-FT) and frontal (SN-F); a semantic appraisal network–predominant group (SAN); and a subcortical–predominant group. Subgroups demonstrated distinct patterns of cognitive, socioemotional, and motor symptoms, as well as genetic compositions and estimated rates of disease progression. Conclusions Divergent patterns of vulnerability in specific functional network components make an important contribution to the clinical heterogeneity of bvFTD. The data-driven anatomical classification identifies biologically meaningful phenotypes and provides a replicable approach to disambiguate the bvFTD syndrome. PMID:27429218

  2. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes; the K-means clustering method was observed to give better results than the ISODATA method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest, and post-classification the overall accuracy was observed to be higher in the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited for analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique appears to produce better results and easier interpretation than the Pauli decomposition, but more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.
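    The Pauli decomposition mentioned in this record projects the complex scattering matrix onto the Pauli basis. A minimal sketch follows; the tiny synthetic scattering-matrix channels are assumptions for illustration, not RISAT-1 data.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 4, 4  # tiny synthetic scene; real RISAT-1 scenes are far larger

# Hypothetical complex scattering-matrix channels (Shh, Shv, Svv).
Shh = rng.normal(size=(h, w)) + 1j * rng.normal(size=(h, w))
Shv = rng.normal(size=(h, w)) + 1j * rng.normal(size=(h, w))
Svv = rng.normal(size=(h, w)) + 1j * rng.normal(size=(h, w))

# Pauli decomposition: |Shh+Svv| relates to surface (odd-bounce) scattering,
# |Shh-Svv| to double-bounce, and |Shv| to volume scattering.
surface = np.abs(Shh + Svv) / np.sqrt(2)
double  = np.abs(Shh - Svv) / np.sqrt(2)
volume  = np.sqrt(2) * np.abs(Shv)

# The usual Pauli RGB composite: red = double-bounce, green = volume,
# blue = surface.
rgb = np.stack([double, volume, surface], axis=-1)
print(rgb.shape)
```

    A useful sanity check is that the three Pauli channels conserve the total backscattered power of the original scattering matrix.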

  3. Retrospective genotype-phenotype analysis in a 305 patient cohort referred for testing of a targeted epilepsy panel.

    PubMed

    Hesse, Andrew N; Bevilacqua, Jennifer; Shankar, Kritika; Reddi, Honey V

    2018-05-16

    Epilepsy is a diverse neurological condition with extreme genetic and phenotypic heterogeneity. The introduction of next-generation sequencing into the clinical laboratory has made it possible to investigate hundreds of associated genes simultaneously for a patient, even in the absence of a clearly defined syndrome. This has resulted in the detection of rare and novel mutations at a rate well beyond our ability to characterize their effects. This retrospective study reviews genotype data in the context of available phenotypic information on 305 patients spanning the epileptic spectrum to identify established and novel patterns of correlation. Our epilepsy panel, comprising 377 genes, was used to sequence 305 patients referred for genetic testing. Qualifying variants were annotated with phenotypic data obtained from either the test requisition form or supporting clinical documentation. Observed phenotypes were compared with established phenotypes in OMIM, published literature and the ILAE's 2010 report on genetic testing to assess congruity with known gene aberrations. We identified a number of novel and recognized genetic variants consistent with established epileptic phenotypes. Forty-one pathogenic or predicted deleterious variants were detected in 39 patients with accompanying clinical documentation. Twenty-five of these variants across 15 genes were novel. Furthermore, evaluation of phenotype data for 194 patients with variants of unknown significance in genes with autosomal dominant and X-linked disease inheritance elucidated potentially disease-causing variants that were not currently characterized in the literature. Assessment of key genotype-phenotype correlations from our cohort provides insight into variant classification, as well as the importance of including ILAE-recommended genes as part of minimum panel content for comprehensive epilepsy tests. 
Many of the reported VUSs are likely genuine pathogenic variants driving the observed phenotypes, but not enough evidence is available for assertive classifications. Similar studies will provide more utility as independent genotype-phenotype data from unrelated patients accumulate. The possible outcome would be a better molecular diagnostic product, with fewer indeterminate reports containing only VUSs. Copyright © 2018. Published by Elsevier B.V.

  4. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    PubMed Central

    Abbasi, Arash; Berry, Jeffrey C.; Callen, Steven T.; Chavez, Leonardo; Doust, Andrew N.; Feldman, Max J.; Gilbert, Kerrigan B.; Hodge, John G.; Hoyer, J. Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning. PMID:29209576

  5. PlantCV v2: Image analysis software for high-throughput plant phenotyping.

    PubMed

    Gehan, Malia A; Fahlgren, Noah; Abbasi, Arash; Berry, Jeffrey C; Callen, Steven T; Chavez, Leonardo; Doust, Andrew N; Feldman, Max J; Gilbert, Kerrigan B; Hodge, John G; Hoyer, J Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  6. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  7. Imaging techniques for visualizing and phenotyping congenital heart defects in murine models.

    PubMed

    Liu, Xiaoqin; Tobita, Kimimasa; Francis, Richard J B; Lo, Cecilia W

    2013-06-01

    The mouse model is ideal for investigating the genetic and developmental etiology of congenital heart disease. However, cardiovascular phenotyping for the precise diagnosis of structural heart defects in mice remains challenging. With rapid advances in imaging techniques, there are now high-throughput phenotyping tools available for the diagnosis of structural heart defects. In this review, we discuss the efficacy of four different imaging modalities for congenital heart disease diagnosis in fetal/neonatal mice, including noninvasive fetal echocardiography, micro-computed tomography (micro-CT), micro-magnetic resonance imaging (micro-MRI), and episcopic fluorescence image capture (EFIC) histopathology. The experience we have gained in the use of these imaging modalities in a large-scale mouse mutagenesis screen has validated their efficacy for congenital heart defect diagnosis in the tiny hearts of fetal and newborn mice. These cutting-edge phenotyping tools will be invaluable for furthering our understanding of the developmental etiology of congenital heart disease. Copyright © 2013 Wiley Periodicals, Inc.

  8. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE PAGES

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash; ...

    2017-12-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  9. Mammographic phenotypes of breast cancer risk driven by breast anatomy

    NASA Astrophysics Data System (ADS)

    Gastounioti, Aimilia; Oustimov, Andrew; Hsieh, Meng-Kang; Pantalone, Lauren; Conant, Emily F.; Kontos, Despina

    2017-03-01

    Image-derived features of breast parenchymal texture patterns have emerged as promising risk factors for breast cancer, paving the way towards personalized recommendations regarding women's cancer risk evaluation and screening. The main steps to extract texture features of the breast parenchyma are the selection of regions of interest (ROIs) where texture analysis is performed, the texture feature calculation and the texture feature summarization in the case of multiple ROIs. In this study, we incorporate breast anatomy in these three key steps by (a) introducing breast anatomical sampling for the definition of ROIs, (b) aligning texture feature calculation with the structure of the breast and (c) weighting texture feature summarization by the spatial position and the underlying tissue composition of each ROI. We systematically optimize this novel framework for parenchymal tissue characterization in a case-control study with digital mammograms from 424 women. We also compare the proposed approach with a conventional methodology, not considering breast anatomy, recently shown to enhance the case-control discriminatory capacity of parenchymal texture analysis. The case-control classification performance is assessed using elastic-net regression with 5-fold cross validation, where the evaluation measure is the area under the curve (AUC) of the receiver operating characteristic. Upon optimization, the proposed breast-anatomy-driven approach demonstrated a promising case-control classification performance (AUC=0.87). In the same dataset, the performance of conventional texture characterization was found to be significantly lower (AUC=0.80, DeLong's test p-value<0.05). Our results suggest that breast anatomy may further leverage the associations of parenchymal texture features with breast cancer, and may therefore be a valuable addition in pipelines aiming to elucidate quantitative mammographic phenotypes of breast cancer risk.
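    The weighted feature-summarization step in this record can be sketched as a weighted mean over per-ROI features. The ROI counts, metadata names, and weighting formula below are hypothetical illustrations, not the paper's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-ROI texture features (n_rois x n_features) and ROI metadata.
feats = rng.normal(size=(12, 5))       # e.g. 12 ROIs, 5 texture measures each
dense_frac = rng.uniform(0, 1, 12)     # assumed dense-tissue fraction per ROI
dist_to_chest = rng.uniform(0, 1, 12)  # assumed normalized spatial position

# One plausible weighting (not the paper's exact scheme): up-weight ROIs with
# more dense tissue and closer to the chest wall, then renormalize.
w = dense_frac * (1.0 - 0.5 * dist_to_chest)
w = w / w.sum()

# Weighted summarization replaces the conventional unweighted mean over ROIs.
summary = w @ feats            # one summarized value per texture feature
print(summary.shape)
```

    A conventional pipeline would use `feats.mean(axis=0)` here; the weighted version is where the anatomy-driven information enters.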

  10. Image aesthetic quality evaluation using convolution neural network embedded learning

    NASA Astrophysics Data System (ADS)

    Li, Yu-xin; Pu, Yuan-yuan; Xu, Dan; Qian, Wen-hua; Wang, Li-peng

    2017-11-01

    In this paper, a way of embedded-learning convolutional neural network (ELCNN) based on image content is proposed to evaluate image aesthetic quality. Our approach can not only address the problem of small-scale data but also score image aesthetic quality. First, we compared AlexNet and VGG_S to confirm which is more suitable for this image aesthetic quality evaluation task. Second, to further boost the aesthetic quality classification performance, we employ the image content to train aesthetic quality classification models; however, this makes the training samples smaller, and a single round of fine-tuning cannot make full use of the small-scale data set. Third, to solve this problem, two successive rounds of fine-tuning, based on the aesthetic quality label and the content label respectively, are proposed; the classification probability of the trained CNN models is used to evaluate image aesthetic quality. The experiments are carried out on the small-scale Photo Quality data set. The experimental results show that the classification accuracy rates of our approach are higher than those of existing image aesthetic quality evaluation approaches.

  11. Thyroid Nodule Classification in Ultrasound Images by Fine-Tuning Deep Convolutional Neural Network.

    PubMed

    Chi, Jianning; Walia, Ekta; Babyn, Paul; Wang, Jimmy; Groot, Gary; Eramian, Mark

    2017-08-01

    With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use a deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove artifacts. A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples, which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a cost-sensitive Random Forest classifier to classify the images into "malignant" and "benign" cases. The experimental results show that the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity and 93.90% specificity for the images in an open access database (Pedraza et al. 16), and 96.34% classification accuracy, 86% sensitivity and 99% specificity for the images in our local health region database.
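    The cost-sensitive element of such a classifier can be illustrated with a simple expected-cost decision rule. The cost ratio and probabilities below are assumptions for the sketch, not the paper's numbers, and the paper uses a cost-sensitive Random Forest rather than a plain threshold.

```python
import numpy as np

# Hypothetical predicted malignancy probabilities from some upstream
# classifier (stand-ins, not the paper's outputs).
p_malignant = np.array([0.05, 0.30, 0.55, 0.90])

# Cost-sensitive decision: a missed malignancy (false negative) is assumed
# 5x as costly as an unnecessary biopsy (false positive). Predict
# "malignant" when the expected cost of saying "benign" is higher:
#   p * C_fn > (1 - p) * C_fp   <=>   p > C_fp / (C_fp + C_fn)
C_fn, C_fp = 5.0, 1.0
threshold = C_fp / (C_fp + C_fn)   # 1/6, well below the usual 0.5
decisions = p_malignant > threshold
print(decisions)
```

    Shifting the decision threshold this way trades specificity for sensitivity, which matches the clinical goal of minimizing missed malignancies.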

  12. Temporal optimisation of image acquisition for land cover classification with Random Forest and MODIS time-series

    NASA Astrophysics Data System (ADS)

    Nitze, Ingmar; Barrett, Brian; Cawkwell, Fiona

    2015-02-01

    The analysis and classification of land cover is one of the principal applications in terrestrial remote sensing. Due to the seasonal variability of different vegetation types and land surface characteristics, the ability to discriminate land cover types changes over time. Multi-temporal classification can help to improve the classification accuracies, but different constraints, such as financial restrictions or atmospheric conditions, may impede their application. The optimisation of image acquisition timing and frequencies can help to increase the effectiveness of the classification process. For this purpose, the Feature Importance (FI) measure of the state-of-the-art machine learning method Random Forest was used to determine the optimal image acquisition periods for a general (Grassland, Forest, Water, Settlement, Peatland) and a Grassland-specific (Improved Grassland, Semi-Improved Grassland) land cover classification in central Ireland based on a 9-year time-series of MODIS Terra 16 day composite data (MOD13Q1). Feature Importances for each acquisition period of the Enhanced Vegetation Index (EVI) and Normalised Difference Vegetation Index (NDVI) were calculated for both classification scenarios. In the general land cover classification, the months December and January showed the highest, and July and August the lowest separability for both VIs over the entire nine-year period. This temporal separability was reflected in the classification accuracies, where the optimal choice of image dates outperformed the worst image date by 13% using NDVI and 5% using EVI on a mono-temporal analysis. With the addition of the next best image periods to the data input the classification accuracies converged quickly to their limit at around 8-10 images. The binary classification schemes, using two classes only, showed a stronger seasonal dependency with a higher intra-annual, but lower inter-annual variation. 
Nonetheless, anomalous weather conditions, such as the cold winter of 2009/2010, can alter the temporal separability pattern significantly. Due to the extensive use of the NDVI for land cover discrimination, the findings of this study should be transferable to data from other optical sensors with a higher spatial resolution. However, the high impact of outliers from the general climatic pattern highlights the limitation of spatial transferability to locations with different climatic and land cover conditions. The use of high-temporal, moderate-resolution data such as MODIS in conjunction with machine-learning techniques proved to be a good basis for the prediction of image acquisition timing for optimal land cover classification results.
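    The Feature Importance ranking over acquisition periods can be sketched with scikit-learn. The simulated two-class NDVI time-series below (200 pixels, 23 composites per year, with a deliberately discriminative first composite) is an assumed stand-in for the MODIS data, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Hypothetical stand-in for the MODIS time-series: 200 pixels x 23 NDVI
# composites per year (MOD13Q1 is a 16-day product), two land-cover classes.
n, periods = 200, 23
X = rng.normal(0.5, 0.1, size=(n, periods))
y = rng.integers(0, 2, size=n)
X[y == 1, 0] += 0.3          # make the first (winter) composite discriminative

# Random Forest Feature Importance: one importance value per period.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
fi = rf.feature_importances_
print(int(np.argmax(fi)))    # the most separable acquisition period
```

    Ranking `fi` and adding composites in that order reproduces, in miniature, the study's procedure of selecting the next-best acquisition periods until accuracy converges.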

  13. Taxonomy of rare genetic metabolic bone disorders.

    PubMed

    Masi, L; Agnusdei, D; Bilezikian, J; Chappard, D; Chapurlat, R; Cianferotti, L; Devolgelaer, J-P; El Maghraoui, A; Ferrari, S; Javaid, M K; Kaufman, J-M; Liberman, U A; Lyritis, G; Miller, P; Napoli, N; Roldan, E; Papapoulos, S; Watts, N B; Brandi, M L

    2015-10-01

    This article reports a taxonomic classification of rare skeletal diseases based on metabolic phenotypes. It was prepared by The Skeletal Rare Diseases Working Group of the International Osteoporosis Foundation (IOF) and includes 116 OMIM phenotypes with 86 affected genes. Rare skeletal metabolic diseases comprise a group of diseases commonly associated with severe clinical consequences. In recent years, the description of the clinical phenotypes and radiographic features of several genetic bone disorders was paralleled by the discovery of key molecular pathways involved in the regulation of bone and mineral metabolism. Including this information in the description and classification of rare skeletal diseases may improve the recognition and management of affected patients. IOF recognized this need and formed a Skeletal Rare Diseases Working Group (SRD-WG) of basic and clinical scientists who developed a taxonomy of rare skeletal diseases based on their metabolic pathogenesis. This taxonomy of rare genetic metabolic bone disorders (RGMBDs) comprises 116 OMIM phenotypes, with 86 affected genes related to bone and mineral homeostasis. The diseases were divided into four major groups, namely, disorders due to altered osteoclast, osteoblast, or osteocyte activity; disorders due to altered bone matrix proteins; disorders due to altered bone microenvironmental regulators; and disorders due to deranged calciotropic hormonal activity. This article provides the first comprehensive taxonomy of rare metabolic skeletal diseases based on deranged metabolic activity. This classification will help in the development of common and shared diagnostic and therapeutic pathways for these patients and also in the creation of international registries of rare skeletal diseases, the first step for the development of genetic tests based on next generation sequencing and for performing large intervention trials to assess efficacy of orphan drugs.

  14. Lava Morphology Classification of a Fast-Spreading Ridge Using Deep-Towed Sonar Data: East Pacific Rise

    NASA Astrophysics Data System (ADS)

    Meyer, J.; White, S.

    2005-05-01

    Classification of lava morphology on a regional scale contributes to the understanding of the distribution and extent of lava flows at a mid-ocean ridge. Seafloor classification is essential to understand the regional undersea environment at mid-ocean ridges. In this study, a classification scheme was developed to identify and extract textural patterns of different lava morphologies along the East Pacific Rise using DSL-120 side-scan sonar and ARGO camera imagery. Application of an accurate image classification technique to side-scan sonar allows us to expand upon the locally available visual ground reference data to make the first comprehensive regional maps of small-scale lava morphology present at a mid-ocean ridge. The submarine lava morphologies focused upon in this study (sheet flows, lobate flows, and pillow flows) have unique textures. Several algorithms were applied to the sonar backscatter intensity images to produce multiple textural image layers useful in distinguishing the different lava morphologies. The intensity and spatially enhanced images were then combined and fed to a hybrid classification technique. The hybrid classification involves two integrated classifiers: a rule-based expert system classifier and a machine learning classifier. The complementary capabilities of the two integrated classifiers provided higher accuracy of regional seafloor classification than using either classifier alone. Once trained, the hybrid classifier can be applied to classify neighboring images with relative ease. This classification technique has been used to map the lava morphology distribution and infer spatial variability of lava effusion rates along two segments of the East Pacific Rise, 17 deg S and 9 deg N. Future use of this technique may also be useful for attaining temporal information: repeated documentation of morphology classification in this dynamic environment can be compared to detect regional seafloor change.
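    The hybrid idea, rules for the confident cases and a learned classifier for the rest, can be sketched as follows. The texture measures, thresholds, and synthetic labels are all assumptions, and the "learned" nearest-mean classifier is given the true class centers purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-pixel texture measures for three lava morphologies:
# 0 = sheet (smooth), 1 = lobate, 2 = pillow (rough), encoded as
# (backscatter variance, roughness) pairs around synthetic class centers.
y = rng.integers(0, 3, size=150)
centers = np.array([[0.1, 0.1], [0.5, 0.5], [0.9, 0.9]])
X = centers[y] + rng.normal(0, 0.08, size=(150, 2))

def rule_classifier(x):
    # Expert rules: very smooth -> sheet, very rough -> pillow, else undecided.
    if x[0] < 0.25 and x[1] < 0.25:
        return 0
    if x[0] > 0.75 and x[1] > 0.75:
        return 2
    return -1  # defer to the learned classifier

def nearest_mean(x):
    # Stand-in "trained" classifier: assign to the nearest class center.
    return int(np.argmin(((centers - x) ** 2).sum(axis=1)))

# Hybrid scheme: rules handle the confident cases, ML handles the rest.
preds = []
for x in X:
    r = rule_classifier(x)
    preds.append(r if r != -1 else nearest_mean(x))
pred = np.array(preds)
print((pred == y).mean())
```

    The division of labor is the point: the rule base encodes expert knowledge cheaply, while the learned classifier absorbs the ambiguous middle of the feature space.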

  15. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    NASA Astrophysics Data System (ADS)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the missing-detail and performance problems of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and then, based on this framework, a multi-sparse dictionary colorization algorithm using feature classification and detail enhancement (CEMDC). The algorithm can achieve a natural colorized effect for a gray-scale image, consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the accuracy rate of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement method based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of the visual gray-scale image, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
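    The Laplacian-pyramid idea behind the detail enhancement can be sketched in a few lines. This is a simplified one-level version using nearest-neighbor resampling instead of the Gaussian filtering a full pyramid would use, and the gain value is an illustrative assumption.

```python
import numpy as np

def down(img):
    # Decimate by 2 (a real pyramid would low-pass filter first).
    return img[::2, ::2]

def up(img, shape):
    # Nearest-neighbor upsampling back to the original shape.
    u = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return u[:shape[0], :shape[1]]

# One Laplacian level: the detail the coarse level cannot represent.
img = np.arange(64, dtype=float).reshape(8, 8)
coarse = down(img)
detail = img - up(coarse, img.shape)

# Detail enhancement in the spirit of the paper: boost the detail band
# before reconstruction (gain > 1 sharpens; gain = 1 reconstructs exactly).
gain = 1.5
enhanced = up(coarse, img.shape) + gain * detail
exact    = up(coarse, img.shape) + detail
print(np.allclose(exact, img))  # lossless reconstruction at gain = 1
```

    Multi-level pyramids repeat this coarse/detail split recursively, enhancing each detail band before summing the reconstruction.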

  16. Multi-material classification of dry recyclables from municipal solid waste based on thermal imaging.

    PubMed

    Gundupalli, Sathish Paulraj; Hait, Subrata; Thakur, Atul

    2017-12-01

    There has been a significant rise in municipal solid waste (MSW) generation in the last few decades due to rapid urbanization and industrialization. Due to the lack of source segregation practice, a need for automated segregation of recyclables from MSW exists in developing countries. This paper reports a thermal imaging based system for classifying useful recyclables from a simulated MSW sample. Experimental results have demonstrated the possibility of using the thermal imaging technique for classification and a robotic system for sorting of recyclables in a single process step. The reported classification system yields an accuracy in the range of 85-96% and is comparable with existing single-material recyclable classification techniques. We believe that the reported thermal imaging based system can emerge as a viable and inexpensive large-scale classification-cum-sorting technology in recycling plants for processing MSW in developing countries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.
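    The maximum-likelihood classification idea in this record can be sketched for multispectral pixels. With the equal, isotropic Gaussian class models assumed below (a simplification of the paper's likelihood), maximum likelihood reduces to nearest-class-mean assignment; the spectra and class means are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 5-channel M-FISH-like spectra for 3 chromosome classes,
# each modeled as a Gaussian with its own mean and shared isotropic noise.
means = np.array([[1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 1, 1]], dtype=float)
sigma = 0.2

# Simulate labeled pixels, then classify by maximum likelihood.
y = rng.integers(0, 3, size=300)
X = means[y] + rng.normal(0, sigma, size=(300, 5))

# With equal isotropic covariances, maximizing the Gaussian log-likelihood
# is equivalent to minimizing squared distance to the class mean.
d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
pred = np.argmin(d2, axis=1)
print((pred == y).mean())  # accuracy on the simulated pixels
```

    The paper's system extends this per-pixel likelihood to whole-segment hypotheses, using the same likelihood both to pick segmentations and to flag anomalies.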

  18. Effectiveness of Global Features for Automatic Medical Image Classification and Retrieval – the experiences of OHSU at ImageCLEFmed

    PubMed Central

    Kalpathy-Cramer, Jayashree; Hersh, William

    2008-01-01

    In 2006 and 2007, Oregon Health & Science University (OHSU) participated in the automatic image annotation task for medical images at ImageCLEF, an annual international benchmarking event that is part of the Cross Language Evaluation Forum (CLEF). The goal of the automatic annotation task was to classify 1000 test images based on the Image Retrieval in Medical Applications (IRMA) code, given a set of 10,000 training images. There were 116 distinct classes in 2006 and 2007. We evaluated the efficacy of a variety of primarily global features for this classification task. These included features based on histograms, gray level correlation matrices and the gist technique. A multitude of classifiers including k-nearest neighbors, two-level neural networks, support vector machines, and maximum likelihood classifiers were evaluated. Our official error rate for the 1000 test images was 26% in 2006, using the flat classification structure. Our error count in 2007 was 67.8, using the hierarchical classification error computation based on the IRMA code. Confusion matrices as well as clustering experiments were used to identify visually similar classes. The use of the IRMA code did not help us in the classification task, as the semantic hierarchy of the IRMA classes did not correspond well with the hierarchy based on clustering of image features that we used. Our most frequent misclassification errors were along the view axis. Subsequent experiments based on a two-stage classification system decreased our error rate to 19.8% for the 2006 dataset and our error count to 55.4 for the 2007 data. PMID:19884953
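    A minimal sketch of one global-feature pipeline of the kind evaluated here: a normalized intensity histogram fed to a k-nearest-neighbor classifier. The two synthetic "modality" classes (dark-skewed vs bright-skewed images) are assumptions for illustration, not ImageCLEF data.

```python
import numpy as np

rng = np.random.default_rng(5)

def gray_histogram(img, bins=16):
    # Global feature: normalized intensity histogram (layout-invariant).
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

# Hypothetical stand-ins for two modality classes with different brightness.
train_imgs = [np.clip(rng.normal(m, 0.1, (32, 32)), 0, 1)
              for m in [0.3] * 10 + [0.7] * 10]
train_y = np.array([0] * 10 + [1] * 10)
train_feats = np.array([gray_histogram(im) for im in train_imgs])

def knn_predict(img, k=3):
    f = gray_histogram(img)
    d = np.abs(train_feats - f).sum(axis=1)   # L1 distance on histograms
    votes = train_y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

test_img = np.clip(rng.normal(0.7, 0.1, (32, 32)), 0, 1)
print(knn_predict(test_img))
```

    The other feature families mentioned (gray-level correlation matrices, gist) would slot into the same pipeline in place of `gray_histogram`.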

  19. Realizing parameterless automatic classification of remote sensing imagery using ontology engineering and cyberinfrastructure techniques

    NASA Astrophysics Data System (ADS)

    Sun, Ziheng; Fang, Hui; Di, Liping; Yue, Peng

    2016-09-01

    Fully automatic image classification without inputting any parameter values has long been an unattainable dream for remote sensing experts, who usually spend hours tuning the input parameters of classification algorithms in order to obtain the best results. With the rapid development of knowledge engineering and cyberinfrastructure, many data processing and knowledge reasoning capabilities have become accessible online, shareable and interoperable. Based on these recent improvements, this paper presents an idea of parameterless automatic classification which only requires an image and automatically outputs a labeled vector; no parameters or operations are needed from endpoint consumers. An approach is proposed to realize the idea. It adopts an ontology database to store the experience of tuning values for classifiers. A sample database is used to record training samples of image segments. Geoprocessing Web services are used as functionality blocks to perform the basic classification steps, and workflow technology turns the overall image classification into a fully automatic process. A Web-based prototypical system named PACS (Parameterless Automatic Classification System) is implemented, and a number of images were fed into the system for evaluation purposes. The results show that the approach can automatically classify remote sensing images with fairly good average accuracy, and that the classified results will be more accurate if the two databases have higher quality. Once the databases accumulate as much experience and as many samples as a human expert has, the approach should be able to produce results of similar quality to those a human expert can obtain. 
Since the approach is fully automatic and parameterless, it can not only relieve remote sensing workers from the heavy and time-consuming parameter tuning work, but also significantly shorten the waiting time for consumers and make it easier for them to engage in image classification activities. Currently, the approach is used only on high resolution optical three-band remote sensing imagery. The feasibility of using the approach on other kinds of remote sensing images, or of involving additional bands in classification, will be studied in future work.

  20. Automatic plankton image classification combining multiple view features via multiple kernel learning.

    PubMed

    Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing

    2017-12-28

    Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial for understanding environmental change and protecting marine ecosystems. This study was carried out to develop a widely applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems have been limited to a single specific imaging device and a relatively narrow taxonomic scope. A truly practical system for automatic plankton classification does not yet exist, and this study partly fills that gap. Inspired by our analysis of the literature and the development of the technology, we focused on the requirements of practical application and proposed an automatic system for plankton image classification combining multiple view features via multiple kernel learning (MKL). First, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, in particular adding features such as Inner-Distance Shape Context for morphological representation. Second, we divided all the features into different types from multiple views and fed them to multiple classifiers instead of only one, by optimally combining different kernel matrices computed from the different types of features via multiple kernel learning. Moreover, we applied a feature selection method to choose optimal feature subsets from redundant features to suit the different datasets from different imaging devices. We implemented our proposed classification system on three different datasets across more than 20 categories from phytoplankton to zooplankton.
The experimental results validated that our system outperforms state-of-the-art plankton image classification systems in terms of accuracy and robustness. This study demonstrated an automatic plankton image classification system combining multiple view features using multiple kernel learning. The results indicated that multiple view features combined by NLMKL using three kernel functions (linear, polynomial and Gaussian) can describe and exploit feature information better and thus achieve higher classification accuracy.
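The core MKL ingredient described above, combining linear, polynomial and Gaussian kernel matrices into one, can be sketched as a convex combination of base kernels. This is a generic illustration with fixed weights; the paper's NLMKL learns the combination rather than fixing it:

```python
import numpy as np

def linear_kernel(X):
    return X @ X.T

def poly_kernel(X, degree=2, c=1.0):
    return (X @ X.T + c) ** degree

def gaussian_kernel(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # squared distances
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    """Convex combination of base kernel matrices (the MKL ingredient)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()               # enforce convexity
    return sum(w * K for w, K in zip(weights, kernels))

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))                        # 10 samples, 4 features
K = combine_kernels(
    [linear_kernel(X), poly_kernel(X), gaussian_kernel(X)],
    weights=[0.2, 0.3, 0.5],
)
```

A convex combination of valid (positive semi-definite) kernels is itself a valid kernel, so `K` can be fed directly to any kernel classifier.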

  1. Deep Convolutional Neural Network-Based Early Automated Detection of Diabetic Retinopathy Using Fundus Image.

    PubMed

    Xu, Kele; Feng, Dawei; Mi, Haibo

    2017-11-23

    The automatic detection of diabetic retinopathy is of vital importance, as it is the main cause of irreversible vision loss in the working-age population in the developed world. Early detection of diabetic retinopathy can be very helpful for clinical treatment; although several different feature extraction approaches have been proposed, the classification task for retinal images remains tedious even for trained clinicians. Recently, deep convolutional neural networks have shown superior performance in image classification compared to previous handcrafted feature-based methods. Thus, in this paper, we explored the use of deep convolutional neural networks for the automatic classification of diabetic retinopathy from color fundus images, and obtained an accuracy of 94.5% on our dataset, outperforming the results obtained using classical approaches.
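The convolutional pipeline referred to above (convolution, nonlinearity, pooling, softmax output) can be illustrated with a toy forward pass in NumPy. The filter and weight values are random stand-ins, not a trained retinopathy model:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as used in CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tiny_cnn(image, kernels, dense_w):
    # One conv layer -> ReLU -> global average pooling -> dense -> softmax.
    features = np.array([np.maximum(conv2d(image, k), 0).mean() for k in kernels])
    return softmax(dense_w @ features)

rng = np.random.default_rng(0)
fundus_patch = rng.random((16, 16))        # stand-in for a fundus image patch
kernels = rng.normal(size=(4, 3, 3))       # 4 "learned" 3x3 filters (random here)
dense_w = rng.normal(size=(2, 4))          # 2 output classes: DR / no DR
probs = tiny_cnn(fundus_patch, kernels, dense_w)
```

Real systems stack many such layers and learn the filters by backpropagation; the sketch only shows the shape of the computation.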

  2. Classification algorithm of lung lobe for lung disease cases based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Matsuhiro, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Mishima, M.; Ohmatsu, H.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2011-03-01

    With the development of multi-slice CT technology, it is possible to obtain an accurate 3D image of the lung field in a short time. Supporting this requires the development of many image processing methods. In the clinical setting for diagnosis of lung cancer, it is important to study and analyse lung structure, and classification of lung lobes provides useful information for lung cancer analysis. In this report, we describe an algorithm that classifies lungs into lobes for lung disease cases from multi-slice CT images. The classification algorithm is carried out efficiently using information on lung blood vessels, bronchi, and interlobar fissures. Applying the classification algorithm to multi-slice CT images of 20 normal cases and 5 lung disease cases, we demonstrate its usefulness.

  3. A novel 3D imaging system for strawberry phenotyping.

    PubMed

    He, Joe Q; Harrison, Richard J; Li, Bo

    2017-01-01

    Accurate and quantitative phenotypic data are vital in plant breeding programmes for assessing the performance of genotypes and making selections. Traditional strawberry phenotyping relies on the human eye to assess most external fruit quality attributes, which is time-consuming and subjective. 3D imaging is a promising high-throughput technique that allows multiple external fruit quality attributes to be measured simultaneously. A low-cost multi-view stereo (MVS) imaging system was developed, which captured data from 360° around a target strawberry fruit. A 3D point cloud of the sample was derived and analysed with custom-developed software to estimate berry height, length, width, volume, calyx size, colour and achene number. Analysis of these traits in 100 fruits showed good concordance with manual assessment methods. This study demonstrates the feasibility of an MVS-based 3D imaging system for the rapid and quantitative phenotyping of seven agronomically important external strawberry traits. With further improvement, this method could be applied in strawberry breeding programmes as a cost-effective phenotyping technique.
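A minimal sketch of how size traits might be read off a 3D point cloud, assuming an axis-aligned berry and using an ellipsoid volume proxy. The paper's software presumably uses more careful mesh-based estimates; this only illustrates the idea on synthetic points:

```python
import numpy as np

def berry_traits(points):
    """Axis-aligned size traits and a crude volume proxy from a 3-D point cloud."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    length, width, height = maxs - mins
    # Ellipsoid volume with the half-extents as semi-axes: a rough proxy
    # for a mesh-based volume estimate.
    volume = 4.0 / 3.0 * np.pi * (length / 2) * (width / 2) * (height / 2)
    return {"length": length, "width": width, "height": height, "volume": volume}

# Synthetic "berry": points sampled in a unit ball, stretched along z.
rng = np.random.default_rng(2)
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # project onto unit sphere
pts *= rng.random((5000, 1)) ** (1 / 3)             # fill the ball uniformly
pts[:, 2] *= 1.5                                    # elongate the z axis
traits = berry_traits(pts)
```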

  4. Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.

    PubMed

    Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping

    2018-03-23

    Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, and these are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in pre-processing the brain images.
Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 with mild cognitive impairment (MCI; 76 pMCI + 128 sMCI) and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.
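The patch-based first stage described above starts from local 3D crops of each volume. A minimal sketch of such patch extraction (the center coordinates and patch size are illustrative, not the paper's settings):

```python
import numpy as np

def extract_patches(volume, centers, size=8):
    """Crop fixed-size cubic patches around the given voxel centers."""
    half = size // 2
    patches = []
    for c in centers:
        # Clamp at the lower boundary so each slice spans exactly `size` voxels.
        sl = tuple(slice(max(0, x - half), max(0, x - half) + size) for x in c)
        patches.append(volume[sl])
    return patches

rng = np.random.default_rng(3)
mri = rng.random((32, 32, 32))             # stand-in for a brain volume
patches = extract_patches(mri, centers=[(16, 16, 16), (10, 20, 12)])
```

In the cascaded architecture, each such patch would be fed to its own 3D-CNN before the higher-level fusion stage.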

  5. Human Expertise Helps Computer Classify Images

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1991-01-01

    Two-domain method of computational classification of images requires less computation than other methods for computational recognition, matching, or classification of images or patterns. Does not require explicit computational matching of features, and incorporates human expertise without requiring translation of mental processes of classification into language comprehensible to computer. Conceived to "train" computer to analyze photomicrographs of microscope-slide specimens of leucocytes from human peripheral blood to distinguish between specimens from healthy and specimens from traumatized patients.

  6. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  7. A liver cirrhosis classification on B-mode ultrasound images by the use of higher order local autocorrelation features

    NASA Astrophysics Data System (ADS)

    Sasaki, Kenya; Mitani, Yoshihiro; Fujita, Yusuke; Hamamoto, Yoshihiko; Sakaida, Isao

    2017-02-01

    In this paper, in order to classify liver cirrhosis in regions of interest (ROIs) extracted from B-mode ultrasound images, we propose the use of higher order local autocorrelation (HLAC) features. In a previous study, we tried to classify liver cirrhosis using a Gabor filter based approach; however, in our preliminary experiments the classification performance of the Gabor feature was poor. To classify liver cirrhosis accurately, we therefore examined the use of HLAC features. The experimental results show the effectiveness of HLAC features compared with the Gabor feature. Furthermore, using a binary image produced by an adaptive thresholding method, the classification performance of the HLAC features improved further.
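HLAC features sum products of pixel values over local mask patterns. A minimal sketch computing the 0th-order feature and four 1st-order features (a small subset of the standard 25-mask set) on a binary image:

```python
import numpy as np

def hlac_features(binary):
    """0th- and 1st-order higher-order local autocorrelation features
    (a subset of the standard 25-mask HLAC set) on a binary image."""
    I = binary.astype(float)
    feats = [I.sum()]                          # 0th order: the pixel alone
    # 1st order: co-occurrence with one shifted copy; four displacements
    # suffice up to point symmetry of the 3x3 neighbourhood.
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        shifted = np.roll(np.roll(I, dy, axis=0), dx, axis=1)
        # Restrict to the interior to avoid wrap-around artefacts of roll.
        feats.append((I[1:-1, 1:-1] * shifted[1:-1, 1:-1]).sum())
    return np.array(feats)

img = np.zeros((8, 8), dtype=int)
img[2:6, 2:6] = 1                              # a 4x4 bright block
feats = hlac_features(img)
```

The full HLAC set additionally includes 2nd-order masks (products of three pixels); the structure of the computation is the same.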

  8. Deep learning for tumor classification in imaging mass spectrometry.

    PubMed

    Behrmann, Jens; Etmann, Christian; Boskamp, Tobias; Casadonte, Rita; Kriegsmann, Jörg; Maaß, Peter

    2018-04-01

    Tumor classification using imaging mass spectrometry (IMS) data has a high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process the data. Since mass spectra exhibit certain structural similarities to image data, deep learning may offer a promising strategy for classification of IMS data as it has been successfully applied to image classification. Methodologically, we propose an adapted architecture based on deep convolutional networks to handle the characteristics of mass spectrometry data, as well as a strategy to interpret the learned model in the spectral domain based on a sensitivity analysis. The proposed methods are evaluated on two algorithmically challenging tumor classification tasks and compared to a baseline approach. Competitiveness of the proposed methods is shown on both tasks by studying the performance via cross-validation. Moreover, the learned models are analyzed by the proposed sensitivity analysis revealing biologically plausible effects as well as confounding factors of the considered tasks. Thus, this study may serve as a starting point for further development of deep learning approaches in IMS classification tasks. https://gitlab.informatik.uni-bremen.de/digipath/Deep_Learning_for_Tumor_Classification_in_IMS. jbehrmann@uni-bremen.de or christianetmann@uni-bremen.de. Supplementary data are available at Bioinformatics online.

  9. Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed, iteratively merging the neighboring regions with the smallest Dissimilarity Criterion (DC) and recomputing class labels for the new regions. The DC is computed by comparing the region mean vectors, the class labels, and the number of pixels in the two regions under consideration. The algorithm converges when all pixels have been involved in the region-merging procedure. Experimental results are presented for two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions, when compared to previously proposed classification techniques.
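The merge-by-smallest-dissimilarity loop described above can be illustrated on a toy 1D "image", with the dissimilarity reduced to the absolute difference of region means (the actual DC also compares class labels and region sizes):

```python
import numpy as np

def merge_regions(values, n_final):
    """Iteratively merge the pair of adjacent regions with the smallest
    dissimilarity (difference of region means), in the spirit of
    hierarchical step-wise optimization, until n_final regions remain."""
    regions = [[v] for v in values]            # start: one pixel per region
    while len(regions) > n_final:
        means = [np.mean(r) for r in regions]
        diffs = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
        i = int(np.argmin(diffs))              # most similar adjacent pair
        regions[i] = regions[i] + regions[i + 1]
        del regions[i + 1]
    return regions

segments = merge_regions([1.0, 1.1, 5.0, 5.2, 9.0], n_final=3)
```

On a 2D image the same loop runs over a region adjacency graph instead of a line of pixels.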

  10. BOREAS TE-18 Landsat TM Maximum Likelihood Classification Image of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team focused its efforts on using remotely sensed data to characterize the successional and disturbance dynamics of the boreal forest for use in carbon modeling. The objective of this classification is to provide the BOREAS investigators with a data product that characterizes the land cover of the NSA. A Landsat-5 TM image from 20-Aug-1988 was used to derive this classification. A standard supervised maximum likelihood classification approach was used to produce this classification. The data are provided in a binary image format file. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).

  11. Disease Modeling via Large-Scale Network Analysis

    DTIC Science & Technology

    2015-05-20

    A central goal of genetics is to learn how the genotype of an organism determines its phenotype. We address the implicit problem of predicting the association of genes with phenotypes … guarantees for the methods. In the past, we have developed predictive methods general enough to apply to potentially any genetic trait, varying from…

  12. Breast MRI radiomics: comparison of computer- and human-extracted imaging phenotypes.

    PubMed

    Sutton, Elizabeth J; Huang, Erich P; Drukker, Karen; Burnside, Elizabeth S; Li, Hui; Net, Jose M; Rao, Arvind; Whitman, Gary J; Zuley, Margarita; Ganott, Marie; Bonaccio, Ermelinda; Giger, Maryellen L; Morris, Elizabeth A

    2017-01-01

    In this study, we sought to investigate whether computer-extracted magnetic resonance imaging (MRI) phenotypes of breast cancer could replicate human-extracted size and Breast Imaging-Reporting and Data System (BI-RADS) imaging phenotypes, using MRI data from The Cancer Genome Atlas (TCGA) project of the National Cancer Institute. Our retrospective interpretation study involved analysis of Health Insurance Portability and Accountability Act-compliant breast MRI data from The Cancer Imaging Archive, an open-source database from the TCGA project. This study was exempt from institutional review board approval at Memorial Sloan Kettering Cancer Center and the need for informed consent was waived. Ninety-one pre-operative breast MRIs with verified invasive breast cancers were analysed. Three fellowship-trained breast radiologists evaluated the index cancer in each case according to size and the BI-RADS lexicon for shape, margin, and enhancement (human-extracted image phenotypes [HEIP]). Human inter-observer agreement was analysed by the intra-class correlation coefficient (ICC) for size and Krippendorff's α for other measurements. Quantitative MRI radiomics of computerised three-dimensional segmentations of each cancer generated computer-extracted image phenotypes (CEIP). Spearman's rank correlation coefficients were used to compare HEIP and CEIP. Inter-observer agreement for HEIP varied, with the highest agreement seen for size (ICC 0.679) and shape (ICC 0.527). The computer-extracted maximum linear size replicated the human measurement with p < 10⁻¹². CEIP of shape, specifically sphericity and irregularity, replicated HEIP with both p values < 0.001. CEIP did not demonstrate agreement with HEIP of tumour margin or internal enhancement. Quantitative radiomics of breast cancer may replicate human-extracted tumour size and BI-RADS imaging phenotypes, thus enabling precision medicine.
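The HEIP/CEIP comparison above relies on Spearman's rank correlation. A self-contained sketch, with hypothetical size measurements standing in for the study's data:

```python
import numpy as np

def rankdata(x):
    """Ranks starting at 1; tied values share their average rank."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):                     # average ranks over ties
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman(a, b):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    return np.corrcoef(rankdata(a), rankdata(b))[0, 1]

# Hypothetical tumour sizes (mm): human-measured vs computer-extracted
heip = [12.0, 25.0, 8.0, 30.0, 18.0]
ceip = [12.5, 24.0, 9.0, 31.0, 17.5]
rho = spearman(heip, ceip)
```

Here the two measurement sets rank the tumours identically, so rho is 1 even though the raw values differ, which is exactly why rank correlation suits this comparison.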

  13. Sunspot Pattern Classification using PCA and Neural Networks (Poster)

    NASA Technical Reports Server (NTRS)

    Rajkumar, T.; Thompson, D. E.; Slater, G. L.

    2005-01-01

    The sunspot classification scheme presented in this paper is considered as a 2-D classification problem on archived datasets, and is not a real-time system. As a first step, it mirrors the Zuerich/McIntosh historical classification system and reproduces classification of sunspot patterns based on preprocessing and neural net training datasets. Ultimately, the project intends to move beyond such rudimentary schemes to develop spatial-temporal-spectral classes derived by correlating spatial and temporal variations in various wavelengths with the brightness fluctuation spectrum of the sun in those wavelengths. Once the approach is generalized, the focus will naturally move from a 2-D to an n-D classification, where "n" includes time and frequency. Here, the 2-D perspective refers both to the actual SOHO Michelson Doppler Imager (MDI) images that are processed and to the fact that a 2-D matrix is created from each image during preprocessing. The 2-D matrix is the result of running Principal Component Analysis (PCA) over the selected dataset images, and the resulting matrices and their eigenvalues are the objects that are stored in a database, classified, and compared. These matrices are indexed according to the standard McIntosh classification scheme.
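The PCA preprocessing step described above can be sketched via SVD of the centred data matrix, with random data standing in for the MDI images:

```python
import numpy as np

def pca(X, n_components):
    """Principal component analysis via SVD of the centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # projected samples
    explained = S**2 / np.sum(S**2)                   # variance fractions
    return scores, Vt[:n_components], explained[:n_components]

rng = np.random.default_rng(4)
# 50 flattened "images" dominated by one strong direction of variation
base = rng.normal(size=(1, 100))
images = rng.normal(size=(50, 1)) * base * 10 + rng.normal(size=(50, 100))
scores, components, explained = pca(images, n_components=2)
```

The low-dimensional `scores` (and the associated singular values) are the kind of compact representation that would be stored, indexed and compared in place of the raw images.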

  14. Geographical classification of apple based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Zhao, Chunjiang; Peng, Yankun

    2013-05-01

    The geographical origin of an apple is often recognized and appreciated by consumers, and is usually an important factor in determining the price of a commercial product. In this work, hyperspectral imaging technology and supervised pattern recognition were combined to discriminate apples by geographical origin. Hyperspectral images of 207 Fuji apple samples were collected with a hyperspectral camera (400-1000 nm). Principal component analysis (PCA) was performed on the hyperspectral imaging data to determine the main efficient wavelength images, and characteristic variables were then extracted by texture analysis based on the gray level co-occurrence matrix (GLCM) from the dominant waveband image. All characteristic variables were obtained by fusing the data of images in the efficient spectra. A support vector machine (SVM) was used to construct the classification model, and showed excellent classification performance. Classification accuracy reached 92.75% on the training set and 89.86% on the prediction set. The overall results demonstrate that the hyperspectral imaging technique coupled with an SVM classifier can be efficiently utilized to discriminate Fuji apples by geographical origin.

  15. A new phenotyping pipeline reveals three types of lateral roots and a random branching pattern in two cereals.

    PubMed

    Passot, Sixtine; Moreno-Ortega, Beatriz; Moukouanga, Daniel; Balsera, Crispulo; Guyomarc'h, Soazig; Lucas, Mikael; Lobet, Guillaume; Laplaze, Laurent; Muller, Bertrand; Guédon, Yann

    2018-05-11

    Recent progress in root phenotyping has focused mainly on increasing throughput for genetic studies while identifying root developmental patterns has been comparatively underexplored. We introduce a new phenotyping pipeline for producing high-quality spatio-temporal root system development data and identifying developmental patterns within these data. The SmartRoot image analysis system and temporal and spatial statistical models were applied to two cereals, pearl millet (Pennisetum glaucum) and maize (Zea mays). Semi-Markov switching linear models were used to cluster lateral roots based on their growth rate profiles. These models revealed three types of lateral roots with similar characteristics in both species. The first type corresponds to fast and accelerating roots, the second to rapidly arrested roots, and the third to an intermediate type where roots cease elongation after a few days. These types of lateral roots were retrieved in different proportions in a maize mutant affected in auxin signaling, while the first most vigorous type was absent in maize plants exposed to severe shading. Moreover, the classification of growth rate profiles was mirrored by a ranking of anatomical traits in pearl millet. Potential dependencies in the succession of lateral root types along the primary root were then analyzed using variable-order Markov chains. The lateral root type was not influenced by the shootward neighbor root type or by the distance from this root. This random branching pattern of primary roots was remarkably conserved, despite the high variability of root systems in both species. Our phenotyping pipeline opens the door to exploring the genetic variability of lateral root developmental patterns. © 2018 American Society of Plant Biologists. All rights reserved.

  16. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    PubMed

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), classification of liver lesions into four categories in computed tomography (CT) images, and classification of benign/malignant clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal-phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves on the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using classical BoVW (p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
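The mutual-information criterion for selecting task-relevant visual words can be sketched for binary word occurrence and binary class labels (toy data, not the paper's dictionary):

```python
import numpy as np

def mutual_information(word, label):
    """MI (in nats) between a binary word-occurrence vector and class labels."""
    mi = 0.0
    for w in (0, 1):
        for c in (0, 1):
            p_wc = np.mean((word == w) & (label == c))   # joint probability
            p_w, p_c = np.mean(word == w), np.mean(label == c)
            if p_wc > 0:
                mi += p_wc * np.log(p_wc / (p_w * p_c))
    return mi

label = np.array([0, 0, 0, 1, 1, 1])
informative = np.array([0, 0, 0, 1, 1, 1])     # perfectly predicts the class
noise = np.array([1, 0, 1, 0, 1, 0])           # nearly independent of the class
mi_inf = mutual_information(informative, label)
mi_noise = mutual_information(noise, label)
```

Ranking all candidate words by this score and keeping the top scorers yields the kind of task-driven dictionary the abstract describes.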

  17. Plus Disease in Retinopathy of Prematurity: A Continuous Spectrum of Vascular Abnormality as a Basis of Diagnostic Variability.

    PubMed

    Campbell, J Peter; Kalpathy-Cramer, Jayashree; Erdogmus, Deniz; Tian, Peng; Kedarisetti, Dharanish; Moleta, Chace; Reynolds, James D; Hutcheson, Kelly; Shapiro, Michael J; Repka, Michael X; Ferrone, Philip; Drenser, Kimberly; Horowitz, Jason; Sonmez, Kemal; Swan, Ryan; Ostmo, Susan; Jonas, Karyn E; Chan, R V Paul; Chiang, Michael F

    2016-11-01

    To identify patterns of interexpert discrepancy in plus disease diagnosis in retinopathy of prematurity (ROP). We developed 2 datasets of clinical images as part of the Imaging and Informatics in ROP study and determined a consensus reference standard diagnosis (RSD) for each image based on 3 independent image graders and the clinical examination results. We recruited 8 expert ROP clinicians to classify these images and compared the distribution of classifications between experts and the RSD. Eight participating experts with more than 10 years of clinical ROP experience and more than 5 peer-reviewed ROP publications analyzed images obtained during routine ROP screening in neonatal intensive care units. Expert classification of images of plus disease in ROP. Interexpert agreement (weighted κ statistic) and agreement and bias on ordinal classification between experts (analysis of variance [ANOVA]) and the RSD (percent agreement). There was variable interexpert agreement on diagnostic classifications between the 8 experts and the RSD (weighted κ, 0-0.75; mean, 0.30). The RSD agreement ranged from 80% to 94% for the dataset of 100 images and from 29% to 79% for the dataset of 34 images. However, when images were ranked in order of disease severity (by average expert classification), the pattern of expert classification revealed a consistent systematic bias for each expert, consistent with unique cut points for the diagnosis of plus disease and preplus disease. The 2-way ANOVA model suggested a highly significant effect of both image and user on the average score (dataset A: P < 0.05 and adjusted R² = 0.82; dataset B: P < 0.05 and adjusted R² = 0.6615). There is wide variability in the classification of plus disease by ROP experts, which occurs because experts have different cut points for the amounts of vascular abnormality required for presence of plus and preplus disease.
This has important implications for research, teaching, and patient care for ROP and suggests that a continuous ROP plus disease severity score may reflect more accurately the behavior of expert ROP clinicians and may better standardize classification in the future. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
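The interexpert agreement statistic used above, the weighted κ, can be computed directly. A sketch with quadratic weights and hypothetical three-level gradings (normal / preplus / plus):

```python
import numpy as np

def weighted_kappa(r1, r2, n_levels):
    """Quadratic-weighted kappa between two raters' ordinal scores."""
    O = np.zeros((n_levels, n_levels))         # observed joint frequencies
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()
    # Expected joint frequencies if the two raters were independent.
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    # Quadratic disagreement weights: 0 on the diagonal, growing with distance.
    W = np.array([[(i - j) ** 2 for j in range(n_levels)]
                  for i in range(n_levels)]) / (n_levels - 1) ** 2
    return 1 - np.sum(W * O) / np.sum(W * E)

# 0 = normal, 1 = preplus, 2 = plus (hypothetical gradings of 8 images)
expert_a = [0, 0, 1, 1, 2, 2, 1, 0]
expert_b = [0, 1, 1, 1, 2, 2, 0, 0]
kappa = weighted_kappa(expert_a, expert_b, n_levels=3)
```

Values near 1 indicate near-perfect agreement, values near 0 chance-level agreement, which is how the 0-0.75 range in the abstract should be read.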

  18. A review of supervised object-based land-cover image classification

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue

    2017-08-01

    Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. 
In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial vehicle) or agricultural sites where it also correlates with the number of targeted classes. More than 95.6% of studies involve an area less than 300 ha, and the spatial resolution of images is predominantly between 0 and 2 m. Furthermore, we identify some methods that may advance supervised object-based image classification. For example, deep learning and type-2 fuzzy techniques may further improve classification accuracy. Lastly, scientists are strongly encouraged to report results of uncertainty studies to further explore the effects of varied factors on supervised object-based image classification.

  19. Rock classification based on resistivity patterns in electrical borehole wall images

    NASA Astrophysics Data System (ADS)

    Linek, Margarete; Jungmann, Matthias; Berlage, Thomas; Pechnig, Renate; Clauser, Christoph

    2007-06-01

    Electrical borehole wall images represent grey-level-coded micro-resistivity measurements at the borehole wall. Different scientific methods have been implemented to transform image data into quantitative log curves. We introduce a pattern recognition technique applying texture analysis, which uses second-order statistics based on studying the occurrence of pixel pairs. We calculate so-called Haralick texture features such as contrast, energy, entropy and homogeneity. The supervised classification method is used for assigning characteristic texture features to different rock classes and assessing the discriminative power of these image features. We use classifiers obtained from training intervals to characterize the entire image data set recovered in ODP hole 1203A. This yields a synthetic lithology profile based on computed texture data. We show that Haralick features accurately classify 89.9% of the training intervals. We obtained misclassification for vesicular basaltic rocks. Hence, further image analysis tools are used to improve the classification reliability. We decompose the 2D image signal by the application of wavelet transformation in order to enhance image objects horizontally, diagonally and vertically. The resulting filtered images are used for further texture analysis. This combined classification based on Haralick features and wavelet transformation improved our classification up to a level of 98%. The application of wavelet transformation increases the consistency between standard logging profiles and texture-derived lithology. Texture analysis of borehole wall images offers the potential to facilitate objective analysis of multiple boreholes with the same lithology.
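The Haralick features named above (contrast, energy, entropy and homogeneity) derive from a grey-level co-occurrence matrix. A minimal sketch for a single horizontal pixel-pair displacement (real analyses aggregate several displacements and directions):

```python
import numpy as np

def glcm(image, levels, dy=0, dx=1):
    """Normalised grey-level co-occurrence matrix for one displacement."""
    P = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[image[i, j], image[i + dy, j + dx]] += 1
    return P / P.sum()

def haralick(P):
    """Contrast, energy, entropy and homogeneity from a normalised GLCM."""
    i, j = np.indices(P.shape)
    nz = P[P > 0]                              # avoid log(0) in the entropy
    return {
        "contrast":    np.sum(P * (i - j) ** 2),
        "energy":      np.sum(P ** 2),
        "entropy":     -np.sum(nz * np.log(nz)),
        "homogeneity": np.sum(P / (1 + np.abs(i - j))),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
features = haralick(glcm(img, levels=4))
```

For borehole wall images, such feature vectors computed over sliding windows are what the supervised classifier consumes.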

  20. Development of Automated Image Analysis Software for Suspended Marine Particle Classification

    DTIC Science & Technology

    2002-09-30

    Development of Automated Image Analysis Software for Suspended Marine Particle Classification Scott Samson Center for Ocean Technology...and global water column. 1 OBJECTIVES The project’s objective is to develop automated image analysis software to reduce the effort and time

  1. IMPACTS OF PATCH SIZE AND LANDSCAPE HETEROGENEITY ON THEMATIC IMAGE CLASSIFICATION ACCURACY

    EPA Science Inventory

    Impacts of Patch Size and Landscape Heterogeneity on Thematic Image Classification Accuracy.
    Currently, most thematic accuracy assessments of classified remotely sensed images only account for errors between the various classes employed, at particular pixels of interest, thu...

  2. A Comprehensive Study of Retinal Vessel Classification Methods in Fundus Images

    PubMed Central

    Miri, Maliheh; Amini, Zahra; Rabbani, Hossein; Kafieh, Raheleh

    2017-01-01

    Nowadays, it is evident that there is a relationship between changes in the retinal vessel structure and diseases such as diabetes, hypertension, stroke, and other cardiovascular diseases in adults, as well as retinopathy of prematurity in infants. Retinal fundus images provide non-invasive visualization of the retinal vessel structure. Applying image processing techniques to digital color fundus photographs and analyzing their vasculature is a reliable approach for early diagnosis of the aforementioned diseases. A reduction in the arteriolar–venular ratio of the retina is one of the primary signs of hypertension, diabetes, and cardiovascular disease, and it can be calculated by analyzing fundus images. To achieve a precise measurement of this parameter and meaningful diagnostic results, accurate classification of arteries and veins is necessary. Classification of vessels in fundus images faces several challenges. In this paper, a comprehensive study of the proposed methods for classification of arteries and veins in fundus images is presented. Considering that these methods are evaluated on different datasets and use different evaluation criteria, it is not possible to conduct a fair comparison of their performance. Therefore, we evaluate the classification methods from a modeling perspective. This analysis reveals that most of the proposed approaches have focused on statistical and geometric models in the spatial domain, while transform-domain models have received less attention. This suggests the possibility of using transform models, especially data-adaptive ones, for modeling fundus images in future classification approaches. PMID:28553578

  3. Multi-Feature Classification of Multi-Sensor Satellite Imagery Based on Dual-Polarimetric Sentinel-1A, Landsat-8 OLI, and Hyperion Images for Urban Land-Cover Classification.

    PubMed

    Zhou, Tao; Li, Zhaofu; Pan, Jianjun

    2018-01-27

    This paper evaluates the ability and contribution of backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification, and compares different multi-sensor land cover mapping methods for improving classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods. The classification was performed using a random forest (RF) method. The results showed that the optimal window size for the combination of all texture features was 9 × 9, while the optimal window size differed for each individual texture feature. Among the four feature types, texture features contributed the most to the classification, followed by coherence and backscatter intensity; color features had the least impact on urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy of up to 91.55% and a kappa coefficient of up to 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of all four feature types gave the best classification result. Multi-sensor urban land cover mapping achieved higher classification accuracy: the combination of Sentinel-1A and Hyperion data outperformed the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient of up to 0.9889. When Sentinel-1A data were added to Hyperion images, the overall accuracy and kappa coefficient increased by 4.01% and 0.0519, respectively.

  4. Automatic classification and detection of clinically relevant images for diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Xu, Xinyu; Li, Baoxin

    2008-03-01

    We propose a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically relevant DR images from a database. Given a query image, our approach first classifies the image into one of three categories: microaneurysm (MA), neovascularization (NV), and normal; it then retrieves DR images that are clinically relevant to the query image from an archival image database. In the classification stage, query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, in which images are viewed as bags, each containing a number of instances corresponding to non-overlapping blocks, with each block characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes, mapping every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are among the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.

  5. Inferring gene dependency network specific to phenotypic alteration based on gene expression data and clinical information of breast cancer.

    PubMed

    Zhou, Xionghui; Liu, Juan

    2014-01-01

    Although many methods have been proposed to reconstruct gene regulatory networks, most of them, when applied to sample-based data, cannot reveal the gene regulatory relations underlying a phenotypic change (e.g., normal versus cancer). In this paper, we adopt phenotype as a variable when constructing the gene regulatory network, whereas previous studies either neglected it or only used it to select differentially expressed genes as inputs for network construction. Specifically, we integrate phenotype information with gene expression data to identify gene dependency pairs using conditional mutual information. A gene dependency pair (A, B) means that the influence of gene A on the phenotype depends on gene B. All identified gene dependency pairs constitute a directed network underlying the phenotype, namely the gene dependency network. In this way, we constructed the gene dependency network of breast cancer from gene expression data along with two different phenotype states (metastasis and non-metastasis). Moreover, we found the network to be scale-free, indicating that its hub genes with high out-degrees may play critical roles in the network. After functional investigation, these hub genes were found to be biologically significant and specifically related to breast cancer, which suggests that our gene dependency network is meaningful; its validity was also supported by a literature investigation. From the network, we selected 43 discriminative hubs as a signature to build a classification model for distinguishing the distant metastasis risks of breast cancer patients, and the result outperforms classification models with published signatures.
In conclusion, we have proposed a promising way to construct the gene regulatory network by using sample-based data, which has been shown to be effective and accurate in uncovering the hidden mechanism of the biological process and identifying the gene signature for phenotypic change.
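    The core step described above, scoring a dependency pair by conditional mutual information, can be sketched with a simple plug-in estimator for discrete data. This is an illustrative sketch only: the variable names are ours, and the authors' estimator, discretization, and significance testing may well differ.

```python
import numpy as np
from collections import Counter

def cmi(a, p, b):
    """Plug-in estimate of I(A; P | B) in bits for discrete sequences.

    a, b: discretized expression levels of two genes; p: phenotype labels.
    """
    n = len(a)
    c_abp = Counter(zip(a, b, p))
    c_ab = Counter(zip(a, b))
    c_bp = Counter(zip(b, p))
    c_b = Counter(b)
    total = 0.0
    for (ai, bi, pi), n_abp in c_abp.items():
        total += (n_abp / n) * np.log2(
            (n_abp * c_b[bi]) / (c_ab[(ai, bi)] * c_bp[(bi, pi)]))
    return total
```

When the phenotype simply copies gene A and gene B is constant, I(A; P | B) equals H(A); when the phenotype is independent of A given B, the estimate is zero.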

  6. Pipeline for illumination correction of images for high-throughput microscopy.

    PubMed

    Singh, S; Bray, M-A; Jones, T R; Carpenter, A E

    2014-12-01

    The presence of systematic noise in images in high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate the performance of the pipeline at two levels: (a) Z'-factor to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments. © 2014 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
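    The retrospective correction idea is conceptually simple: estimate a smooth per-pixel illumination function from many images of a plate and divide each image by it. The sketch below is a generic version under that assumption, with a crude box filter standing in for the smoothing; the published pipeline's exact smoothing and robust statistics differ, and all names here are ours.

```python
import numpy as np

def estimate_illumination(stack, sigma=2):
    """Per-pixel average over an image stack, then simple box smoothing,
    normalized so that correction preserves overall intensity scale."""
    avg = stack.mean(axis=0)
    k = 2 * sigma + 1                      # box filter as smoothing stand-in
    pad = np.pad(avg, sigma, mode='edge')
    sm = np.zeros_like(avg)
    h, w = avg.shape
    for y in range(h):
        for x in range(w):
            sm[y, x] = pad[y:y + k, x:x + k].mean()
    return sm / sm.mean()

def correct(img, illum):
    """Divide out the estimated illumination function."""
    return img / illum
```

On a stack with spatially uniform illumination the estimated function is identically one, so correction is a no-op, which is the expected degenerate behavior.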

  7. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.

    PubMed

    Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A

    2017-03-01

    Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of the numerous medical diseases identified from those images. Traditional image classification methods, which combine hand-crafted image feature descriptors with various classifiers, cannot effectively improve the accuracy rate or meet the high requirements of biomedical image classification. The same holds true for artificial neural network models trained directly on limited biomedical images, or used as a black box to extract deep features learned from another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we need neither to manually design the feature space, nor to seek an effective feature-vector classifier, nor to segment specific detection objects and image patches, which are the main technical difficulties in traditional image classification methods. Moreover, we need not be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waits to train a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With a simple data augmentation method and fast convergence, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches.
We propose a robust automated end-to-end classifier for biomedical images based on a domain transferred deep convolutional neural network model that shows a highly reliable and accurate performance which has been confirmed on several public biomedical image datasets. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  8. A coarse-to-fine approach for medical hyperspectral image classification with sparse representation

    NASA Astrophysics Data System (ADS)

    Chang, Lan; Zhang, Mengmeng; Li, Wei

    2017-10-01

    A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification. A segmentation technique with different scales is employed to exploit edges of the input image, where coarse super-pixel patches provide global classification information while fine ones provide further detail. Unlike a common RGB image, a hyperspectral image has many bands, which allows cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns a label to the testing samples in one local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, the classification results for different super-pixel sizes are fused by a fusion strategy, offering at least two benefits: (1) the final result is clearly superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms state-of-the-art SRC.
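    The SRC decision rule can be illustrated with a class-wise reconstruction-residual comparison. Note that this sketch replaces the ℓ1 sparse-coding step of true SRC with an ordinary least-squares fit per class, which keeps the residual-comparison idea while staying self-contained; the function names are ours.

```python
import numpy as np

def src_classify(x, train, labels):
    """Assign x the class whose training columns reconstruct it with the
    smallest least-squares residual (stand-in for the sparse coding step).

    train: (n_features, n_samples) dictionary of training spectra.
    """
    best, best_r = None, np.inf
    for c in set(labels):
        D = train[:, [i for i, l in enumerate(labels) if l == c]]
        a, *_ = np.linalg.lstsq(D, x, rcond=None)
        r = np.linalg.norm(x - D @ a)
        if r < best_r:
            best, best_r = c, r
    return best
```

A test sample lying in the span of one class's training vectors yields a near-zero residual for that class and is labeled accordingly.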

  9. A spectral-structural bag-of-features scene classifier for very high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in remote sensing image processing. Land-use classification is hard to address with land-cover classification techniques, owing to the complexity of land-use scenes. Scene classification is considered one of the most promising ways to address the land-use classification issue. The commonly used scene classification methods for VHSR imagery are all derived from the computer vision community and mainly deal with terrestrial image recognition. Unlike terrestrial images, VHSR images are taken looking down from airborne and spaceborne sensors, which leads to distinct light conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. From the experimental results, the spectral information works better than the structural information, while the combination of spectral and structural information is better than either single type of information. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification.
    The experimental results show that using the whole image as the scope of the pooling operator performs better than the scope generated by SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interference between them. The coding vectors of spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector with a support vector machine (SVM) using a histogram intersection kernel (HIK). Compared with the latest scene classification methods, the experimental results on three VHSR datasets demonstrate that the proposed SSBFC outperforms the other classification methods for VHSR image scenes.
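    The MeanStd spectral descriptor and the whole-image (single-region) pooling described above can be sketched as follows. In practice the codebook would be learned with k-means over training descriptors; here it is simply an input, and all names are ours.

```python
import numpy as np

def meanstd_descriptors(img, patch=8):
    """Mean and standard deviation per band for each non-overlapping patch:
    the first- and second-order statistical spectral descriptor.

    img: (bands, height, width) array.
    """
    b, h, w = img.shape
    descs = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = img[:, y:y + patch, x:x + patch]
            descs.append(np.concatenate([p.mean(axis=(1, 2)),
                                         p.std(axis=(1, 2))]))
    return np.array(descs)

def bof_histogram(descs, codebook):
    """Hard-assign each descriptor to its nearest codeword and pool the
    counts over the whole image (one global pooling region, no pyramid)."""
    d = ((descs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting normalized histogram is the image's pooled representation; in the paper's setup, the spectral and structural histograms would then be concatenated before the SVM.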

  10. Advances in Spectral-Spatial Classification of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Fauvel, Mathieu; Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2012-01-01

    Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation, and contrast of the spatial structures present in the image. Then, the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines (SVMs) using the available spectral information and the extracted spatial information. Spatial postprocessing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple-classifier (MC) system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral–spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods.

  11. Semi-automatic digital image impact assessments of Maize Lethal Necrosis (MLN) at the leaf, whole plant and plot levels

    NASA Astrophysics Data System (ADS)

    Kefauver, S. C.; Vergara-Diaz, O.; El-Haddad, G.; Das, B.; Suresh, L. M.; Cairns, J.; Araus, J. L.

    2016-12-01

    Maize is the top staple crop for low-income populations in Sub-Saharan Africa and is currently suffering from the appearance of new diseases, which, together with increased abiotic stresses from climate change, are challenging the very sustainability of African societies. Current constraints in field phenotyping remain a major bottleneck for future breeding advances, but RGB-based High-Throughput Phenotyping Platforms (HTPPs) have demonstrated promise for rapidly developing both disease-resistant and weather-resilient crops. RGB HTPPs have proven cost-effective in studies assessing the effect of abiotic stresses, but have yet to be fully exploited to phenotype disease resistance. RGB image quantification using different alternate color space transforms, including BreedPix indices, was produced as part of a FIJI plug-in (http://fiji.sc/Fiji; http://github.com/george-haddad/CIMMYT). For validation, Maize Lethal Necrosis (MLN) visual impact assessments on a scale from 1 to 5 were scored by the resident CIMMYT plant pathologist, with 1 being MLN resistant (healthy plants with no visual symptoms) and 5 being totally susceptible (entirely necrotic with no green tissue). Individual RGB vegetation indices outperformed NDVI (Normalized Difference Vegetation Index), with correlation values up to 0.72, compared to 0.56 for NDVI. Specifically, Hue, Green Area (GA), and the Normalized Green Red Difference Index (NGRDI) consistently outperformed NDVI in estimating MLN disease severity. In multivariate linear and various decision tree models, Necrosis Area (NA) and Chlorosis Area (CA), calculated similarly to GA and GGA from Breedpix, also contributed significantly to estimating MLN impact scores. Results using UAS (Unmanned Aerial Systems), proximal field photography of plants and plots, and flatbed scans of individual leaves have been similar, demonstrating the robustness of these cost-effective RGB indices.
Furthermore, the application of the indices using classification and regression trees and conditional inference trees allows for their immediate implementation within the same open-source plugin for providing real time tools to crop breeders.
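    The RGB indices discussed, e.g. NGRDI and a hue-thresholded Green Area (GA), are straightforward to compute from the raw channels. The sketch below assumes the commonly cited green hue band of 60-180 degrees for GA; Breedpix's exact thresholds and its handling of low-saturation (grey) pixels may differ, and the function names are ours.

```python
import numpy as np

def ngrdi(rgb):
    """Normalized Green Red Difference Index, averaged over the image."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    return np.mean((g - r) / (g + r + 1e-9))

def green_area(rgb):
    """Fraction of pixels with hue in the green band (60-180 degrees),
    a GA-style index. Low-saturation pixels are not treated specially."""
    r, g, b = [rgb[..., i].astype(float) / 255 for i in range(3)]
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    d = mx - mn + 1e-9
    hue = np.where(mx == g, 60 * (2 + (b - r) / d),
          np.where(mx == r, 60 * ((g - b) / d) % 360,
                   60 * (4 + (r - g) / d)))
    return np.mean((hue >= 60) & (hue <= 180))
```

A fully green canopy scores NGRDI near 1 and GA of 1, while a fully necrotic (red-brown) one scores NGRDI near -1 and GA of 0, matching the intended direction of the severity scale.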

  12. A signature dissimilarity measure for trabecular bone texture in knee radiographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woloszynski, T.; Podsiadlo, P.; Stachowiak, G. W.

    Purpose: The purpose of this study is to develop a dissimilarity measure for the classification of trabecular bone (TB) texture in knee radiographs. Problems associated with the traditional extraction and selection of texture features, and with invariance to imaging conditions such as image size, anisotropy, noise, blur, exposure, magnification, and projection angle, were addressed. Methods: In the method developed, called a signature dissimilarity measure (SDM), a sum of earth mover's distances calculated for roughness and orientation signatures is used to quantify dissimilarities between textures. Scale-space theory was used to ensure scale and rotation invariance. The effects of image size, anisotropy, noise, and blur on the SDM were studied using computer-generated fractal texture images. The invariance of the measure to image exposure, magnification, and projection angle was studied using x-ray images of human tibia heads. For these studies, Mann-Whitney tests with a significance level of 0.01 were used. A comparison study of the performance of an SDM-based classification system against two other systems was conducted for the classification of Brodatz textures and the detection of knee osteoarthritis (OA). The other systems are based on weighted neighbor distance using a compound hierarchy of algorithms representing morphology (WND-CHARM) and on local binary patterns (LBP). Results: The results indicate that the SDM is invariant to image exposure (2.5-30 mA s), magnification (×1.00-×1.35), noise associated with film graininess and quantum mottle (<25%), blur generated by a sharp film screen, and image size (>64×64 pixels). However, the measure is sensitive to changes in projection angle (>5°), image anisotropy (>30°), and blur generated by a regular film screen. For the classification of Brodatz textures, the SDM-based system produced results comparable to the LBP system.
    For the detection of knee OA, the SDM-based system achieved 78.8% classification accuracy and outperformed the WND-CHARM system (64.2%). Conclusions: The SDM is well suited for the classification of TB texture images in knee OA detection and may be useful for texture classification of medical images in general.
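    For equal-weight one-dimensional signatures such as the roughness and orientation signatures above, the earth mover's distance reduces to the L1 distance between cumulative distributions, so the SDM can be sketched as a sum of two such terms. This is an illustrative simplification; the published measure operates on scale-space signatures, and the names below are ours.

```python
import numpy as np

def emd_1d(p, q):
    """Earth mover's distance between two 1-D signatures with equally
    spaced bins: the L1 distance between their normalized CDFs."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return np.abs(np.cumsum(p - q)).sum()

def sdm(rough1, rough2, orient1, orient2):
    """Signature dissimilarity as the sum of the two EMD terms."""
    return emd_1d(rough1, rough2) + emd_1d(orient1, orient2)
```

Moving all mass from the first bin to the last of a three-bin signature costs two bin-widths, which the CDF formulation recovers exactly.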

  13. A COMPARISON OF INTER-ANALYST DIFFERENCES IN THE CLASSIFICATION OF A LANDSAT ETM+ SCENE IN SOUTH-CENTRAL VIRGINIA

    EPA Science Inventory

    This study examined inter-analyst classification variability based on training site signature selection only for six classifications from a 10 km2 Landsat ETM+ image centered over a highly heterogeneous area in south-central Virginia. Six analysts classified the image...

  14. Land cover and forest formation distributions for St. Kitts, Nevis, St. Eustatius, Grenada and Barbados from decision tree classification of cloud-cleared satellite imagery. Caribbean Journal of Science. 44(2):175-198.

    Treesearch

    E.H. Helmer; T.A. Kennaway; D.H. Pedreros; M.L. Clark; H. Marcano-Vega; L.L. Tieszen; S.R. Schill; C.M.S. Carrington

    2008-01-01

    Satellite image-based mapping of tropical forests is vital to conservation planning. Standard methods for automated image classification, however, limit classification detail in complex tropical landscapes. In this study, we test an approach to Landsat image interpretation on four islands of the Lesser Antilles, including Grenada and St. Kitts, Nevis and St. Eustatius...

  15. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
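    The 3x3 window modification described above amounts to using each pixel's flattened neighborhood as its feature vector, so local texture enters the classifier without an explicit texture measure. A minimal single-band sketch (multispectral data would concatenate the window across bands); names are ours.

```python
import numpy as np

def window_features(img, size=3):
    """Flatten each size x size neighborhood (edge-padded) into one
    feature vector per pixel, row-major order."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    feats = np.empty((h * w, size * size))
    for y in range(h):
        for x in range(w):
            feats[y * w + x] = padded[y:y + size, x:x + size].ravel()
    return feats
```

Each row can then be fed to any per-pixel classifier (a small MLP in the paper's setup); the center element of each row is the original pixel value.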

  16. Classification of radiolarian images with hand-crafted and deep features

    NASA Astrophysics Data System (ADS)

    Keçeli, Ali Seydi; Kaya, Aydın; Keçeli, Seda Uzunçimen

    2017-12-01

    Radiolarians are planktonic protozoa and are important biostratigraphic and paleoenvironmental indicators for paleogeographic reconstructions. Radiolarian paleontology remains a low-cost and one of the most convenient ways to obtain dates for deep ocean sediments. Traditional methods for identifying radiolarians are time-consuming and cannot scale to the granularity or scope necessary for large-scale studies. Automated image classification allows these analyses to be carried out promptly. In this study, a method for automatic radiolarian image classification is proposed for Scanning Electron Microscope (SEM) images of radiolarians, to ease species identification of fossilized radiolarians. The proposed method uses both hand-crafted features, such as invariant moments, wavelet moments, Gabor features, and basic morphological features, and deep features obtained from a pre-trained Convolutional Neural Network (CNN). Feature selection is applied to the deep features to reduce their high dimensionality. Classification outcomes are analyzed to compare hand-crafted features, deep features, and their combinations. The results show that the deep features obtained from a pre-trained CNN are more discriminative than the hand-crafted ones. Additionally, feature selection reduces the computational cost of the classification algorithms and has no negative effect on classification accuracy.
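    The dimensionality reduction applied to the deep features can be as simple as a filter-style ranking; the sketch below keeps the highest-variance columns. This is only a stand-in, since the paper does not specify this particular criterion, and the names are ours.

```python
import numpy as np

def select_by_variance(X, k):
    """Keep the k feature columns with the highest variance across samples,
    preserving the original column order."""
    order = np.argsort(X.var(axis=0))[::-1][:k]
    idx = np.sort(order)
    return X[:, idx], idx
```

Constant (zero-variance) deep-feature dimensions carry no discriminative information and are the first to be dropped under this rule.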

  17. HEp-2 cell image classification method based on very deep convolutional networks with small datasets

    NASA Astrophysics Data System (ADS)

    Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping

    2017-07-01

    Human Epithelial-2 (HEp-2) cell image staining patterns are widely used to identify autoimmune diseases through the anti-Nuclear antibody (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because manual testing is time consuming, subjective, and labor intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction and achieve low accuracy. Moreover, the available benchmark datasets are small in scale, which is not well suited to deep learning methods; this directly affects the accuracy of cell classification even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets, utilizing very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases: image preprocessing, feature extraction, and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results on two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.

  18. A modified method for MRF segmentation and bias correction of MR image with intensity inhomogeneity.

    PubMed

    Xie, Mei; Gao, Jingjing; Zhu, Chongjin; Zhou, Yan

    2015-01-01

    The Markov random field (MRF) model is an effective method for brain tissue classification and has been applied to MR image segmentation for decades. However, it falls short of the expected classification accuracy in MR images with intensity inhomogeneity, because the bias field is not considered in its formulation. In this paper, we propose an interleaved method that joins a modified MRF classification with bias field estimation in an energy minimization framework, with an initial estimate based on the k-means algorithm in view of prior information about MRI. The proposed method has the salient advantage of overcoming the misclassifications produced by non-interleaved MRF classification on MR images with intensity inhomogeneity. In comparison with baseline methods, experimental results on real and synthetic MR images have also demonstrated the effectiveness and advantages of our algorithm.
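    The k-means initialization mentioned can be sketched on intensities alone. This one-dimensional version is purely illustrative (the actual method folds the result into an interleaved MRF/bias-field energy minimization, and a deterministic linspace initialization stands in for whatever seeding the authors use):

```python
import numpy as np

def kmeans_1d(x, k, iters=20):
    """Plain k-means on a 1-D intensity vector: the kind of initial tissue
    labeling that seeds the MRF / bias-field estimation loop."""
    centers = np.linspace(x.min(), x.max(), k)   # deterministic seeding
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers
```

On well-separated intensity clusters the labeling converges in a couple of iterations; with a bias field present, this initial labeling is exactly what the interleaved refinement is meant to correct.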

  19. Combining High Spatial Resolution Optical and LIDAR Data for Object-Based Image Classification

    NASA Astrophysics Data System (ADS)

    Li, R.; Zhang, T.; Geng, R.; Wang, L.

    2018-04-01

    To classify high spatial resolution images more accurately, in this research a hierarchical rule-based object-based classification framework was developed based on a high-resolution image with airborne Light Detection and Ranging (LiDAR) data. The eCognition software was employed for the whole process. First, the FBSP (Fuzzy-Based Segmentation Parameter) optimizer was used to obtain optimal scale parameters for the different land cover types. Then, using the segmented regions as basic units, classification rules for the various land cover types were established according to the spectral, morphological, and texture features extracted from the optical images, and the height feature from the LiDAR data. Third, the object classification results were evaluated using the confusion matrix, overall accuracy, and Kappa coefficient. As a result, the combination of the aerial image and airborne LiDAR data showed higher classification accuracy.

  20. Singular spectrum decomposition of Bouligand-Minkowski fractal descriptors: an application to the classification of texture images

    NASA Astrophysics Data System (ADS)

    Florindo, João Batista

    2018-04-01

    This work proposes the use of Singular Spectrum Analysis (SSA) for the classification of texture images, more specifically, to enhance the performance of the Bouligand-Minkowski fractal descriptors in this task. Fractal descriptors are known to be a powerful approach for modelling and, in particular, identifying complex patterns in natural images. Nevertheless, the multiscale analysis involved in those descriptors makes them highly correlated. Although other attempts to address this point have been proposed in the literature, none of them investigated the relation between the fractal correlation and the well-established analyses employed for time series, of which SSA is one of the most powerful. The proposed method was employed for the classification of benchmark texture images, and the results were compared with other state-of-the-art classifiers, confirming the potential of this analysis in image classification.
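SSA begins by embedding a 1-D sequence, here a descriptor vector, into a trajectory (Hankel) matrix of lagged windows; a minimal sketch of that embedding step, with an illustrative window length and values (the full pipeline would continue with an SVD of this matrix and a grouping/reconstruction of its components, which is omitted):

```python
# Embedding step of Singular Spectrum Analysis: build the trajectory
# (Hankel) matrix of lagged windows from a 1-D descriptor sequence.
# Window length and values are illustrative.

def trajectory_matrix(series, window):
    k = len(series) - window + 1   # number of lagged windows
    return [series[i:i + window] for i in range(k)]

descriptors = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
X = trajectory_matrix(descriptors, window=3)
for row in X:
    print(row)
# Each anti-diagonal of X holds a single repeated value (Hankel property),
# which is what the subsequent singular-value decomposition exploits.
```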

  1. The clinical application of fMRI data in a single-patient diagnostic conundrum: Classifying brain response to experimental pain to distinguish between gastrointestinal, depressive and eating disorder symptoms.

    PubMed

    Strigo, Irina A; Murray, Stuart B; Simmons, Alan N; Bernard, Rebecca S; Huang, Jeannie S; Kaye, Walter H

    2017-11-01

    Patients with eating disorders (EDs) often present with psychiatric comorbidity, and functional and/or organic gastrointestinal (GI) symptomatology. Such multidiagnostic presentations can complicate diagnostic practice and treatment delivery. Here we describe an adolescent patient who presented with mixed ED, depressive, and GI symptomatology, who had received multiple contrasting diagnoses throughout treatment. We used a novel machine learning approach to classify (i) the patient's functional brain imaging during an experimental pain paradigm, and (ii) patient self-report psychological measures, to categorize the diagnostic phenotype most closely approximated by the patient. Specifically, we found that the patient's response to pain anticipation and experience within the insula and anterior cingulate cortices, and patient self-report data, were most consistent with patients with GI pain. This work is the first to demonstrate the possibility of using imaging data, alongside supervised learning models, for purposes of single patient classification in those with ED symptomatology, where diagnostic comorbidity is common. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. High-Content Microscopy Analysis of Subcellular Structures: Assay Development and Application to Focal Adhesion Quantification.

    PubMed

    Kroll, Torsten; Schmidt, David; Schwanitz, Georg; Ahmad, Mubashir; Hamann, Jana; Schlosser, Corinne; Lin, Yu-Chieh; Böhm, Konrad J; Tuckermann, Jan; Ploubidou, Aspasia

    2016-07-01

    High-content analysis (HCA) converts raw light microscopy images to quantitative data through the automated extraction, multiparametric analysis, and classification of the relevant information content. Combined with automated high-throughput image acquisition, HCA applied to the screening of chemicals or RNAi-reagents is termed high-content screening (HCS). Its power in quantifying cell phenotypes makes HCA applicable also to routine microscopy. However, developing effective HCA and bioinformatic analysis pipelines for acquisition of biologically meaningful data in HCS is challenging. Here, the step-by-step development of an HCA assay protocol and an HCS bioinformatics analysis pipeline are described. The protocol's power is demonstrated by application to focal adhesion (FA) detection, quantitative analysis of multiple FA features, and functional annotation of signaling pathways regulating FA size, using primary data of a published RNAi screen. The assay and the underlying strategy are aimed at researchers performing microscopy-based quantitative analysis of subcellular features, on a small scale or in large HCS experiments. Copyright © 2016 John Wiley & Sons, Inc.

  3. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    PubMed

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features, because manifold regularization alone is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments on the PASCAL VOC'07 dataset (20 classes) and the MIR dataset (38 classes), comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.
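The manifold regularizer referred to above penalizes functions that change quickly between similar samples; a minimal sketch of the graph-Laplacian smoothness term on a toy similarity graph (the graph and weights are illustrative, not from the paper):

```python
# Graph-Laplacian smoothness penalty used in manifold regularization:
#   (1/2) * sum_ij w_ij * (f_i - f_j)^2  ==  f^T L f  with  L = D - W.
# Toy 3-node chain graph; weights are illustrative.

W = [[0.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 0.0]]

def laplacian_penalty(W, f):
    n = len(W)
    return 0.5 * sum(W[i][j] * (f[i] - f[j]) ** 2
                     for i in range(n) for j in range(n))

smooth = laplacian_penalty(W, [1.0, 1.0, 1.0])   # constant along the graph
rough  = laplacian_penalty(W, [1.0, -1.0, 1.0])  # flips sign across edges
print(smooth, rough)
```

A function that is constant across connected samples incurs zero penalty, while one that flips across edges is penalized, which is how the regularizer keeps classifiers smooth along the data manifold.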

  4. Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification

    PubMed Central

    Hou, Le; Samaras, Dimitris; Kurc, Tahsin M.; Gao, Yi; Davis, James E.; Saltz, Joel H.

    2016-01-01

    Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN. PMID:27795661
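A minimal sketch of the decision-fusion idea: aggregate patch-level class probabilities into one image-level prediction while down-weighting non-discriminative patches. The weighting rule below is an illustrative stand-in, not the paper's trained fusion model or EM procedure:

```python
# Fuse patch-level class probabilities into one image-level decision.
# Patches whose distribution is near uniform carry little information,
# so they are down-weighted. This simple confidence weighting is an
# illustrative stand-in for the paper's EM-based patch selection.

def fuse_patches(patch_probs):
    n_classes = len(patch_probs[0])
    uniform = 1.0 / n_classes
    totals = [0.0] * n_classes
    weight_sum = 0.0
    for probs in patch_probs:
        confidence = max(probs) - uniform  # 0 for an uninformative patch
        for c in range(n_classes):
            totals[c] += confidence * probs[c]
        weight_sum += confidence
    return [t / weight_sum for t in totals]

# three patches, two subtypes; the middle patch is uninformative
patch_probs = [[0.9, 0.1], [0.5, 0.5], [0.8, 0.2]]
fused = fuse_patches(patch_probs)
print(fused.index(max(fused)))  # predicted subtype 0
```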

  5. Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification

    NASA Astrophysics Data System (ADS)

    Gao, G.; Zhang, M.; Gu, Y.

    2017-05-01

    Multi-temporal remote sensing image classification is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of image information. As spatial resolution improves, the "salt and pepper" effect appears, and classification results are affected when pixelwise classification algorithms, which ignore the spatial relationship among pixels, are applied to high-resolution satellite images. To classify multi-temporal high-resolution images under limited labelled samples, spectral drift and the "salt and pepper" problem, an object-based manifold alignment method is proposed. Firstly, the multi-temporal multispectral images are segmented into superpixels by simple linear iterative clustering (SLIC). Secondly, features extracted from the superpixels are assembled into vectors. Thirdly, a majority-voting manifold alignment method designed for the high-resolution setting is proposed to map the vector data into the alignment space. Finally, all data in the alignment space are classified using the KNN method. Multi-temporal images from different areas and from the same area are both considered in this paper. In the experiments, two groups of multi-temporal HR images collected by the Chinese GF-1 and GF-2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "salt and pepper" problem.
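The majority-voting step can be sketched as follows: every pixel in a superpixel receives the most frequent class label within that segment, which suppresses isolated "salt and pepper" misclassifications. The segment ids and labels are toy values, not the paper's data:

```python
# Majority voting over superpixels: each segment takes the most common
# class label among its pixels. Toy segment ids and labels only.
from collections import Counter

def majority_vote(segment_ids, pixel_labels):
    votes = {}
    for seg, lab in zip(segment_ids, pixel_labels):
        votes.setdefault(seg, Counter())[lab] += 1
    # most_common(1) returns [(label, count)] for the winning label
    return {seg: counter.most_common(1)[0][0]
            for seg, counter in votes.items()}

segment_ids  = [0, 0, 0, 1, 1, 1, 1]
pixel_labels = ["crop", "crop", "water", "urban", "urban", "crop", "urban"]
print(majority_vote(segment_ids, pixel_labels))
```

The lone "water" and "crop" pixels are overruled by their segments' majorities, illustrating how object-based voting smooths pixelwise noise.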

  6. Contextual convolutional neural networks for lung nodule classification using Gaussian-weighted average image patches

    NASA Astrophysics Data System (ADS)

    Lee, Haeil; Lee, Hansang; Park, Minseok; Kim, Junmo

    2017-03-01

    Lung cancer is the most common cause of cancer-related death. To diagnose lung cancers in early stages, numerous studies and approaches have been developed for cancer screening with computed tomography (CT) imaging. In recent years, convolutional neural networks (CNN) have become one of the most common and reliable techniques in computer aided detection (CADe) and diagnosis (CADx), achieving state-of-the-art performance on various tasks. In this study, we propose a CNN classification system for false positive reduction of initially detected lung nodule candidates. First, image patches of lung nodule candidates are extracted from CT scans to train a CNN classifier. To reflect the volumetric contextual information of lung nodules in a 2D image patch, we propose weighted average image patch (WAIP) generation, which averages multiple slice images of a lung nodule candidate. Moreover, to emphasize the central slices of lung nodules, the slice images are locally weighted according to a Gaussian distribution and averaged to generate the 2D WAIP. With these extracted patches, a 2D CNN is trained to classify the WAIPs of lung nodule candidates into positive and negative labels. We used the LUNA 2016 public challenge database to validate the performance of our approach for false positive reduction in lung CT nodule classification. Experiments show that our approach improves the classification accuracy of lung nodules compared to a baseline 2D CNN with patches from a single slice image.
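The WAIP generation described above collapses a stack of slices into one 2D patch using Gaussian weights centred on the middle slice; a minimal sketch on tiny toy "slices" (the slice size and sigma are illustrative, not the paper's settings):

```python
# Gaussian-weighted average of a stack of 2D slices into one patch,
# emphasizing the central slice. Slice size and sigma are illustrative.
import math

def waip(slices, sigma=1.0):
    center = (len(slices) - 1) / 2.0
    weights = [math.exp(-((k - center) ** 2) / (2 * sigma ** 2))
               for k in range(len(slices))]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize to sum to 1
    rows, cols = len(slices[0]), len(slices[0][0])
    return [[sum(w * s[r][c] for w, s in zip(weights, slices))
             for c in range(cols)] for r in range(rows)]

# three tiny 1x1 "slices"; the central slice dominates the average
slices = [[[0.0]], [[10.0]], [[0.0]]]
patch = waip(slices)
print(round(patch[0][0], 2))  # closer to 10 than the plain mean (3.33)
```

Because the Gaussian weight peaks at the central slice, the fused patch preserves the nodule's central appearance while still mixing in volumetric context from neighbouring slices.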

  7. Phenotype Instance Verification and Evaluation Tool (PIVET): A Scaled Phenotype Evidence Generation Framework Using Web-Based Medical Literature.

    PubMed

    Henderson, Jette; Ke, Junyuan; Ho, Joyce C; Ghosh, Joydeep; Wallace, Byron C

    2018-05-04

    Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. The objective of this study was to present Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET's phenotype representation with PheKnow-Cloud's by using PheKnow-Cloud's experimental setup. In PIVET's framework, we also introduce a statistical model trained on domain expert-verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner. 
PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but PIVET's analysis is an order of magnitude faster than that of PheKnow-Cloud. Not only is PIVET much faster, it can be scaled to a larger corpus and still retain speed. We evaluated multiple classification models on top of the PIVET framework and found ridge regression to perform best, realizing an average F1 score of 0.91 when predicting clinically relevant phenotypes. Our study shows that PIVET improves on the most notable existing computational tool for phenotype validation in terms of speed and automation and is comparable in terms of accuracy. ©Jette Henderson, Junyuan Ke, Joyce C Ho, Joydeep Ghosh, Byron C Wallace. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.05.2018.
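The evidence sets above rest on co-occurrence analysis: how much more often phenotype items appear together in articles than independence would predict. A minimal sketch of that "lift" computation over toy documents (PIVET's actual pipeline uses indexed NoSQL queries and an Aho-Corasick-style matcher, neither of which is shown here):

```python
# Co-occurrence "lift" of two phenotype items over a toy corpus:
#   lift = P(a and b) / (P(a) * P(b))
# Values > 1 suggest the items co-occur more often than chance.
# Toy documents and terms, not PIVET's corpus.

docs = [
    {"hypertension", "diabetes", "obesity"},
    {"hypertension", "diabetes"},
    {"hypertension"},
    {"obesity"},
]

def lift(a, b, docs):
    n = len(docs)
    p_a = sum(a in d for d in docs) / n
    p_b = sum(b in d for d in docs) / n
    p_ab = sum(a in d and b in d for d in docs) / n
    return p_ab / (p_a * p_b)

print(lift("hypertension", "diabetes", docs))  # > 1: supporting evidence
print(lift("hypertension", "obesity", docs))   # < 1: weak evidence
```

Aggregating such statistics across an article corpus is what lets the framework judge whether the items of a candidate phenotype plausibly belong together.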

  8. Phenotype Instance Verification and Evaluation Tool (PIVET): A Scaled Phenotype Evidence Generation Framework Using Web-Based Medical Literature

    PubMed Central

    Ke, Junyuan; Ho, Joyce C; Ghosh, Joydeep; Wallace, Byron C

    2018-01-01

    Background Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. Objective The objective of this study was to present Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. Methods PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET’s phenotype representation with PheKnow-Cloud’s by using PheKnow-Cloud’s experimental setup. In PIVET’s framework, we also introduce a statistical model trained on domain expert–verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner. 
Results PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but PIVET’s analysis is an order of magnitude faster than that of PheKnow-Cloud. Not only is PIVET much faster, it can be scaled to a larger corpus and still retain speed. We evaluated multiple classification models on top of the PIVET framework and found ridge regression to perform best, realizing an average F1 score of 0.91 when predicting clinically relevant phenotypes. Conclusions Our study shows that PIVET improves on the most notable existing computational tool for phenotype validation in terms of speed and automation and is comparable in terms of accuracy. PMID:29728351

  9. How automated image analysis techniques help scientists in species identification and classification?

    PubMed

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxa at a specific level is time consuming and reliant upon expert ecologists; hence, the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images, and incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in species identification include processing specimen images, extracting characteristic features, and classifying specimens into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared the different methods used in each step of automated species image identification and classification systems. The selection of methods is influenced by many variables, such as the level of classification, the amount of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques for building such systems for biodiversity studies.

  10. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates to variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  11. Development of Automated Image Analysis Software for Suspended Marine Particle Classification

    DTIC Science & Technology

    2003-09-30

    Development of Automated Image Analysis Software for Suspended Marine Particle Classification. Scott Samson, Center for Ocean Technology. The objective is to develop automated image analysis software to reduce the effort and time required for manual identification of plankton images.

  12. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    PubMed

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

    Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
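The teacher consensus above can be sketched as each modality ranking the unlabeled samples by difficulty and the curriculum being the samples every teacher places among its easiest. The scores and the intersection-based consensus rule below are illustrative simplifications, not the paper's exact formulation:

```python
# Multi-teacher curriculum step: each "teacher" (feature modality) scores
# unlabeled samples by difficulty; the consensus curriculum is the set of
# samples every teacher ranks among its easiest k. Scores and the
# consensus rule are illustrative.

def consensus_curriculum(difficulty_by_teacher, k):
    easiest_sets = []
    for scores in difficulty_by_teacher:
        ranked = sorted(scores, key=scores.get)  # low score = easy
        easiest_sets.append(set(ranked[:k]))
    return set.intersection(*easiest_sets)

# two modalities scoring four unlabeled images (lower = easier)
color_teacher   = {"img1": 0.1, "img2": 0.9, "img3": 0.2, "img4": 0.8}
texture_teacher = {"img1": 0.3, "img2": 0.2, "img3": 0.1, "img4": 0.9}
print(sorted(consensus_curriculum([color_teacher, texture_teacher], k=2)))
```

Only samples that look easy from every modality's viewpoint enter the current curriculum; the rest wait for a later propagation round.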

  13. Holistic and component plant phenotyping using temporal image sequence.

    PubMed

    Das Choudhury, Sruti; Bashyam, Srinidhi; Qiu, Yumou; Samal, Ashok; Awada, Tala

    2018-01-01

    Image-based plant phenotyping facilitates the non-invasive extraction of traits by analyzing a large number of plants in a relatively short period of time. It has the potential to compute advanced phenotypes by considering the whole plant as a single object (holistic phenotypes) or as individual components, i.e., leaves and the stem (component phenotypes), to investigate the biophysical characteristics of the plants. The emergence timing, the total number of leaves present at any point in time and the growth of individual leaves during the vegetative stage of the maize life cycle are significant phenotypic expressions that best contribute to assessing plant vigor. However, an automated image-based solution to this novel problem has yet to be explored. A set of new holistic and component phenotypes is introduced in this paper. To compute the component phenotypes, it is essential to detect the individual leaves and the stem. Thus, the paper introduces a novel graph-based method to reliably detect the leaves and the stem of maize plants by analyzing 2-dimensional visible-light image sequences captured from the side. The total number of leaves is counted and the length of each leaf is measured in every image of the sequence to monitor leaf growth. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska-Lincoln Component Plant Phenotyping Dataset (UNL-CPPD) and provide ground truth to facilitate new algorithm development and uniform comparison. The temporal variation of the component phenotypes regulated by genotype and environment (i.e., greenhouse) is experimentally demonstrated for maize plants on UNL-CPPD. Statistical models are applied to analyze the impact of the greenhouse environment and to demonstrate the genetic regulation of the temporal variation of the holistic phenotypes on the public dataset called Panicoid Phenomap-1. 
The central contribution of the paper is a novel computer vision based algorithm for automated detection of individual leaves and the stem to compute new component phenotypes along with a public release of a benchmark dataset, i.e., UNL-CPPD. Detailed experimental analyses are performed to demonstrate the temporal variation of the holistic and component phenotypes in maize regulated by environment and genetic variation with a discussion on their significance in the context of plant science.

  14. Comparison of staging diagnosis by two magnifying endoscopy classification for superficial oesophageal cancer.

    PubMed

    Ebi, Masahide; Shimura, Takaya; Murakami, Kenji; Yamada, Tomonori; Hirata, Yoshikazu; Tsukamoto, Hironobu; Mizoshita, Tsutomu; Tanida, Satoshi; Kataoka, Hiromi; Kamiya, Takeshi; Joh, Takashi

    2012-11-01

    Due to the possibility of lymph node metastasis, surgical resection is indicated for superficial oesophageal cancer with invasion to a depth greater than the muscularis mucosa. Although two magnifying endoscopy classifications are currently used to diagnose the depth of invasion, which classification is more suitable remains controversial. To compare and evaluate the clinical outcomes of the two classifications for superficial oesophageal squamous cell carcinoma, this cross-sectional study examined 44 superficial oesophageal squamous cell carcinoma lesions with magnification image-enhanced endoscopy images. Only the magnifying endoscopic images were displayed to two experienced endoscopists, who independently diagnosed the depth of invasion according to both classifications. The sensitivity for invasion greater than the muscularis mucosa tended to be higher in Inoue's classification than in Arima's classification (78.3±6.2% vs. 50.0±3.0%; P=0.144), whereas the specificity was significantly lower in Inoue's classification than in Arima's classification (61.9±0.0% vs. 97.6±3.4%; P=0.043). For the two classifications, rates of concordance were 90.9% and 84.4%, and κ statistics were 0.81 and 0.66, respectively. Our results suggest that Arima's classification is suitable for general screening before treatment to avoid unnecessary surgery, while Inoue's classification is appropriate for assessing wide lesions. Copyright © 2012 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  15. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application of optical imaging, palmprint recognition is hampered by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis, which extracts feature matrices from the palmprint images; these are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments carried out on a palmprint database show that this method is more robust against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate.

  16. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach

    PubMed Central

    Aerts, Hugo J. W. L.; Velazquez, Emmanuel Rios; Leijenaar, Ralph T. H.; Parmar, Chintan; Grossmann, Patrick; Cavalho, Sara; Bussink, Johan; Monshouwer, René; Haibe-Kains, Benjamin; Rietveld, Derek; Hoebers, Frank; Rietbergen, Michelle M.; Leemans, C. René; Dekker, Andre; Quackenbush, John; Gillies, Robert J.; Lambin, Philippe

    2014-01-01

    Human cancers exhibit strong phenotypic differences that can be visualized noninvasively by medical imaging. Radiomics refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features. Here we present a radiomic analysis of 440 features quantifying tumour image intensity, shape and texture, which are extracted from computed tomography data of 1,019 patients with lung or head-and-neck cancer. We find that a large number of radiomic features have prognostic power in independent data sets of lung and head-and-neck cancer patients, many of which were not identified as significant before. Radiogenomics analysis reveals that a prognostic radiomic signature, capturing intratumour heterogeneity, is associated with underlying gene-expression patterns. These data suggest that radiomics identifies a general prognostic phenotype existing in both lung and head-and-neck cancer. This may have a clinical impact as imaging is routinely used in clinical practice, providing an unprecedented opportunity to improve decision-support in cancer treatment at low cost. PMID:24892406

  17. Large-scale image-based profiling of single-cell phenotypes in arrayed CRISPR-Cas9 gene perturbation screens.

    PubMed

    de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas

    2018-01-23

    High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.

  18. Digital Biomass Accumulation Using High-Throughput Plant Phenotype Data Analysis.

    PubMed

    Rahaman, Md Matiur; Ahsan, Md Asif; Gillani, Zeeshan; Chen, Ming

    2017-09-01

    Biomass is an important phenotypic trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive, and they require numerous individuals to be cultivated for repeated measurements. With the advent of image-based high-throughput plant phenotyping facilities, non-destructive biomass measuring methods have attempted to overcome this problem. Thus, the estimation of the biomass of individual plants from their digital images is becoming more important. In this paper, we propose an approach to biomass estimation based on image-derived phenotypic traits. Several image-based biomass studies state that plant biomass is simply a linear function of the projected plant area in images. However, we modeled plant volume as a function of plant area, plant compactness, and plant age to generalize the linear biomass model. The obtained results confirm the proposed model, which can explain most of the observed variance during image-derived biomass estimation. Moreover, only a small difference was observed between actual and estimated digital biomass, which indicates that our proposed approach can estimate digital biomass accurately.
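The baseline the authors generalize, biomass as a linear function of projected plant area, can be sketched with an ordinary least-squares line fit. The measurements below are toy values, not the study's data:

```python
# Baseline image-based biomass model: biomass ~ a * projected_area + b,
# fitted by simple least squares. Toy measurements, not the study's data.
# The paper generalizes this by adding compactness and age as predictors.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx  # slope, intercept

areas   = [100.0, 200.0, 300.0, 400.0]   # projected plant area (px)
biomass = [12.0, 22.0, 32.0, 42.0]       # measured biomass (g)
a, b = fit_line(areas, biomass)
print(round(a, 3), round(b, 3))
estimate = a * 250.0 + b   # predict biomass for an unseen plant
print(round(estimate, 1))
```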

  19. Landcover Classification Using Deep Fully Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, J.; Li, X.; Zhou, S.; Tang, J.

    2017-12-01

    Land cover classification has always been an essential application in remote sensing. Certain image features are needed for land cover classification, whether it is based on pixel-based or object-based methods. Unlike other machine learning methods, deep learning models not only extract useful information from multiple bands/attributes but also learn the spatial characteristics of the data. In recent years, deep learning methods have developed rapidly and have been widely applied in image recognition, semantic understanding, and other application domains. However, there are limited studies applying deep learning methods to land cover classification. In this research, we used fully convolutional networks (FCN) as the deep learning model to classify land cover. The National Land Cover Database (NLCD) within the state of Kansas was used as the training dataset, and Landsat images were classified using the trained FCN model. We also applied an image segmentation method to improve the original results from the FCN model. In addition, the pros and cons of deep learning versus several other machine learning methods were compared and explored. Our research indicates: (1) FCN is an effective classification model with an overall accuracy of 75%; (2) image segmentation improves the classification results with a better match of spatial patterns; (3) FCN has an excellent learning ability, attaining higher accuracy and better spatial patterns compared with several machine learning methods.

  20. Lissencephaly: expanded imaging and clinical classification

    PubMed Central

    Di Donato, Nataliya; Chiari, Sara; Mirzaa, Ghayda M.; Aldinger, Kimberly; Parrini, Elena; Olds, Carissa; Barkovich, A. James; Guerrini, Renzo; Dobyns, William B.

    2017-01-01

    Lissencephaly (“smooth brain”, LIS) is a malformation of cortical development associated with deficient neuronal migration and abnormal formation of cerebral convolutions or gyri. The LIS spectrum includes agyria, pachygyria, and subcortical band heterotopia. Our first classification of LIS and subcortical band heterotopia (SBH) was developed to distinguish between the first two genetic causes of LIS – LIS1 (PAFAH1B1) and DCX. However, progress in molecular genetics has led to identification of 19 LIS-associated genes, leaving the existing classification system insufficient to distinguish the increasingly diverse patterns of LIS. To address this challenge, we reviewed clinical, imaging and molecular data on 188 patients with LIS-SBH ascertained during the last five years, and reviewed selected archival data on another ~1,400 patients. Using these data plus published reports, we constructed a new imaging based classification system with 21 recognizable patterns that reliably predict the most likely causative genes. These patterns do not correlate consistently with the clinical outcome, leading us to also develop a new scale useful for predicting clinical severity and outcome. Taken together, our work provides new tools that should prove useful for clinical management and genetic counselling of patients with LIS-SBH (imaging and severity based classifications), and guidance for prioritizing and interpreting genetic testing results (imaging based classification). PMID:28440899

  1. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep learning has been used extensively for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into images in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image, which leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification method. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method yields a total error of 5.22%, a type I error of 4.10%, and a type II error of 15.07%. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in a total error of 4.02%, a type I error of 2.15%, and a type II error of 6.14%.
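
    The key efficiency idea in this record, converting the whole point cloud into a single image in one pass instead of one image per point, can be sketched as a rasterization that keeps the minimum elevation per grid cell. The cell size, fill value, and choice of min-z as the cell statistic are illustrative assumptions.

```python
import numpy as np

def pointcloud_to_image(points, cell_size=1.0, fill=np.nan):
    """points: (N, 3) array of x, y, z. Returns a 2D grid of min-z per cell."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - origin) / cell_size).astype(int) + 1
    grid = np.full((rows, cols), np.inf)
    ix = ((xy - origin) / cell_size).astype(int)
    # One pass over all points: keep the lowest z seen in each cell.
    np.minimum.at(grid, (ix[:, 1], ix[:, 0]), points[:, 2])
    grid[np.isinf(grid)] = fill  # cells with no points
    return grid

pts = np.array([[0.0, 0.0, 5.0],
                [0.2, 0.1, 3.0],   # same cell as the point above -> min z = 3.0
                [2.5, 1.5, 7.0]])
img = pointcloud_to_image(pts, cell_size=1.0)
```

    A pixel-wise classifier such as an FCN can then label every cell of this single raster at once, rather than classifying one image per point.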

  2. Integrative image segmentation optimization and machine learning approach for high quality land-use and land-cover mapping using multisource remote sensing data

    NASA Astrophysics Data System (ADS)

    Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd

    2018-01-01

    The growing use of optimization for geographic object-based image analysis and the possibility to derive a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil adjusted vegetation index) were combined and subjected to a segmentation process with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through data mining algorithms (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The result shows that the optimized process produces a better land-use/land-cover classification, with overall classification accuracies of 91.79% for the decision tree and 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared to the unoptimized results indicates that the optimization process has a significant impact on classification quality.
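
    As an illustrative sketch of how the overall accuracies reported in this record (e.g. 91.79%) are typically computed, the following builds a contingency (confusion) matrix between reference and predicted labels and takes the fraction of agreeing pixels. The label arrays are synthetic stand-ins for classified land-cover maps.

```python
import numpy as np

def confusion_matrix(reference, predicted, n_classes):
    """Contingency matrix: rows = reference class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (reference, predicted), 1)
    return cm

def overall_accuracy(cm):
    """Fraction of samples on the diagonal (reference == predicted)."""
    return cm.trace() / cm.sum()

ref = np.array([0, 0, 1, 1, 2, 2, 2, 1])
pred = np.array([0, 0, 1, 2, 2, 2, 2, 1])
cm = confusion_matrix(ref, pred, n_classes=3)
acc = overall_accuracy(cm)  # 7 of 8 pixels agree
```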

  3. Spectral-spatial hyperspectral image classification using super-pixel-based spatial pyramid representation

    NASA Astrophysics Data System (ADS)

    Fan, Jiayuan; Tan, Hui Li; Toomik, Maria; Lu, Shijian

    2016-10-01

    Spatial pyramid matching has demonstrated its power for image recognition tasks by pooling features from spatially increasingly fine sub-regions. Motivated by the concept of feature pooling at multiple pyramid levels, we propose a novel spectral-spatial hyperspectral image classification approach using superpixel-based spatial pyramid representation. This technique first generates multiple superpixel maps by decreasing the superpixel number gradually along with the increased spatial regions for labelled samples. Using every superpixel map, the sparse representation of pixels within every spatial region is then computed through local max pooling. Finally, features learned from training samples are aggregated and trained by a support vector machine (SVM) classifier. The proposed spectral-spatial hyperspectral image classification technique has been evaluated on two public hyperspectral datasets, including the Indian Pines image containing 16 different agricultural scene categories with a 20m resolution acquired by AVIRIS and the University of Pavia image containing 9 land-use categories with a 1.3m spatial resolution acquired by the ROSIS-03 sensor. Experimental results show significantly improved performance compared with state-of-the-art works. The major contributions of this proposed technique include (1) a new spectral-spatial classification approach to generate feature representations for hyperspectral images, (2) a complementary yet effective feature pooling approach, i.e. the superpixel-based spatial pyramid representation used for studying spatial correlation, and (3) evaluation on two public hyperspectral image datasets with superior image classification performance.
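
    The core pooling step described in this record, max-pooling per-pixel feature codes within each superpixel region, can be sketched as below. The code dimension and the segment map are illustrative; a real pipeline would pool sparse codes over several superpixel maps of decreasing granularity.

```python
import numpy as np

def superpixel_max_pool(codes, segments):
    """codes: (H, W, D) per-pixel feature codes; segments: (H, W) superpixel labels.
    Returns an (n_segments, D) array of max-pooled features, one row per superpixel."""
    labels = np.unique(segments)
    # Boolean masking flattens each region to (n_pixels, D); max over pixels.
    pooled = np.stack([codes[segments == s].max(axis=0) for s in labels])
    return pooled

codes = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
segments = np.array([[0, 0],
                     [1, 1]])  # two superpixels: top row and bottom row
feat = superpixel_max_pool(codes, segments)
```

    Concatenating such pooled vectors across superpixel maps yields the spatial-pyramid-style representation fed to the SVM.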

  4. Biomedical image classification based on a cascade of an SVM with a reject option and subspace analysis.

    PubMed

    Lin, Dongyun; Sun, Lei; Toh, Kar-Ann; Zhang, Jing Bo; Lin, Zhiping

    2018-05-01

    Automated biomedical image classification could confront the challenges of high level noise, image blur, illumination variation and complicated geometric correspondence among various categorical biomedical patterns in practice. To handle these challenges, we propose a cascade method consisting of two stages for biomedical image classification. At stage 1, we propose a confidence score based classification rule with a reject option for a preliminary decision using the support vector machine (SVM). The testing images going through stage 1 are separated into two groups based on their confidence scores. Those testing images with sufficiently high confidence scores are classified at stage 1 while the others with low confidence scores are rejected and fed to stage 2. At stage 2, the rejected images from stage 1 are first processed by a subspace analysis technique called eigenfeature regularization and extraction (ERE), and then classified by another SVM trained in the transformed subspace learned by ERE. At both stages, images are represented based on two types of local features, i.e., SIFT and SURF, respectively. They are encoded using various bag-of-words (BoW) models to handle biomedical patterns with and without geometric correspondence, respectively. Extensive experiments are implemented to evaluate the proposed method on three benchmark real-world biomedical image datasets. The proposed method significantly outperforms several competing state-of-the-art methods in terms of classification accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
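
    The stage-1 reject rule described in this record, accepting a classification only when its confidence score is high enough and otherwise deferring the sample to stage 2, can be sketched as a simple threshold test. The scores, threshold, and the -1 sentinel for deferred samples are illustrative; the paper's rule operates on SVM confidence scores.

```python
import numpy as np

def cascade_stage1(confidences, predictions, threshold=0.8):
    """Returns (accepted_mask, stage1_labels, rejected_indices).
    Samples with confidence below the threshold get label -1 (deferred)."""
    confidences = np.asarray(confidences)
    accepted = confidences >= threshold
    stage1_labels = np.where(accepted, predictions, -1)
    rejected = np.flatnonzero(~accepted)  # these go on to stage 2
    return accepted, stage1_labels, rejected

conf = [0.95, 0.40, 0.85, 0.60]
pred = np.array([2, 0, 1, 3])
accepted, labels, rejected = cascade_stage1(conf, pred, threshold=0.8)
```

    In the cascade, the rejected indices would then be re-represented (e.g. by ERE subspace features) and classified by the stage-2 SVM.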

  5. Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques.

    PubMed

    Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L; Bilello, Michel; O'Rourke, Donald M; Davatzikos, Christos

    2016-03-01

    MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood-brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. 

  6. Texture classification of lung computed tomography images

    NASA Astrophysics Data System (ADS)

    Pheng, Hang See; Shamsuddin, Siti M.

    2013-03-01

    Current development of algorithms in computer-aided diagnosis (CAD) schemes is growing rapidly to assist radiologists in medical image interpretation. Texture analysis of computed tomography (CT) scans is an important preliminary stage in computerized detection and classification systems for lung cancer. Among the different types of image feature analysis, Haralick texture with a variety of statistical measures has been widely used in image texture description. The extraction of texture feature values is essential for a CAD system, especially in the classification of normal and abnormal tissue on cross-sectional CT images. This paper compares experimental results using texture extraction and different machine learning methods in the classification of normal and abnormal tissues in lung CT images. The machine learning methods involved in this assessment are the Artificial Immune Recognition System (AIRS), Naive Bayes, Decision Tree (J48), and Backpropagation Neural Network. AIRS is found to provide high accuracy (99.2%) and sensitivity (98.0%) in the assessment. For experiment and testing purposes, publicly available datasets in the Reference Image Database to Evaluate Therapy Response (RIDER) are used as study cases.
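
    The Haralick texture features mentioned in this record are derived from a gray-level co-occurrence matrix (GLCM). The sketch below builds a GLCM for a horizontal pixel offset and computes one Haralick measure (contrast); the tiny image and number of gray levels are illustrative.

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for offset (dx, dy), normalized to probabilities."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[image[y, x], image[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """Haralick contrast: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(g.shape)
    return float(((i - j) ** 2 * g).sum())

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 0]])
p = glcm(img, levels=3)
c = contrast(p)
```

    Other Haralick statistics (energy, homogeneity, entropy, correlation) are computed from the same matrix and would form the feature vector passed to the classifiers compared in the study.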

  7. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, a preliminary probabilistic support vector machines classification is performed. Then, a hierarchical step-wise optimization algorithm is applied, iteratively merging the regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining the DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of a vegetation area in Northwestern Indiana and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
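
    The best-merge loop described in this record can be sketched as repeatedly merging the pair of regions with the smallest dissimilarity criterion. Here the DC is reduced to a mean-intensity difference over all region pairs, purely for illustration; the paper's DC also incorporates geometrical features and classification probabilities, and real implementations restrict candidates to adjacent regions.

```python
import numpy as np

def best_merge(regions, n_merges):
    """regions: dict {region_id: list of pixel values}. Greedy pairwise merging."""
    regions = {k: list(v) for k, v in regions.items()}
    for _ in range(n_merges):
        ids = list(regions)
        # Find the pair with the smallest difference of mean intensities (the DC here).
        a, b = min(((p, q) for i, p in enumerate(ids) for q in ids[i + 1:]),
                   key=lambda pq: abs(np.mean(regions[pq[0]]) - np.mean(regions[pq[1]])))
        regions[a] = regions[a] + regions[b]  # merge b into a
        del regions[b]
    return regions

segs = {1: [10.0, 11.0], 2: [10.5], 3: [50.0, 52.0]}
merged = best_merge(segs, n_merges=1)  # regions 1 and 2 are most similar
```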

  9. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  10. Contour classification in thermographic images for detection of breast cancer

    NASA Astrophysics Data System (ADS)

    Okuniewski, Rafał; Nowak, Robert M.; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Oleszkiewicz, Witold

    2016-09-01

    Thermographic images of the breast taken by the Braster device are uploaded to a web application, which uses different classification algorithms to automatically decide whether a patient should be examined more thoroughly. This article presents an approach to the task of classifying contours visible in thermographic breast images taken by the Braster device, in order to decide whether cancerous tumors are present in the breast. It presents the results of research conducted on different classification algorithms.

  11. Distance Metric between 3D Models and 2D Images for Recognition and Classification

    DTIC Science & Technology

    1992-07-01

    Ronen Basri and Daphna Weinshall, "A Distance Metric Between 3D Models and 2D Images for Recognition and Classification." Similarity measurements between 3D models and 2D images are presented for recognition and classification. This research was sponsored in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contracts N00014-85-K-0124 and N00014-91-J-4038.

  12. Classification of the Gabon SAR Mosaic Using a Wavelet Based Rule Classifier

    NASA Technical Reports Server (NTRS)

    Simard, Marc; Saatchi, Sasan; DeGrandi, Gianfranco

    2000-01-01

    A method is developed for semi-automated classification of SAR images of the tropical forest. Information is extracted using the wavelet transform (WT), which allows for the extraction of structural information in the image as a function of scale. In order to classify the SAR image, a Decision Tree Classifier is used, with pruning applied to optimize the classification rate versus tree size. The results give explicit insight into the type of information useful for a given class.
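
    The scale-dependent structural information mentioned in this record comes from wavelet sub-bands. As a sketch, the following computes one level of a 2D Haar wavelet transform (using a simple averaging normalization chosen for illustration), producing the approximation plus horizontal/vertical/diagonal detail sub-bands whose statistics can serve as texture features.

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar transform over non-overlapping 2x2 blocks."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4.0   # approximation (coarse scale)
    LH = (a + b - c - d) / 4.0   # horizontal detail
    HL = (a - b + c - d) / 4.0   # vertical detail
    HH = (a - b - c + d) / 4.0   # diagonal detail
    return LL, LH, HL, HH

img = np.array([[1.0, 1.0, 5.0, 5.0],
                [1.0, 1.0, 5.0, 5.0],
                [9.0, 9.0, 9.0, 9.0],
                [9.0, 9.0, 9.0, 9.0]])
LL, LH, HL, HH = haar2d(img)  # piecewise-constant blocks: all detail bands vanish
```

    Applying the transform recursively to LL yields coarser scales; energies of the detail bands per scale are typical inputs to a decision tree classifier.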

  13. EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.

    1994-01-01

    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the tasks of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions.
All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K of 8 bit bytes. This collection of image processing and evaluation routines was developed in 1979.

  14. Classification of multiple sclerosis lesions using adaptive dictionary learning.

    PubMed

    Deshpande, Hrishikesh; Maurel, Pierre; Barillot, Christian

    2015-12-01

    This paper presents a sparse representation and an adaptive dictionary learning based method for automated classification of multiple sclerosis (MS) lesions in magnetic resonance (MR) images. Manual delineation of MS lesions is a time-consuming task, requiring neuroradiology experts to analyze huge volume of MR data. This, in addition to the high intra- and inter-observer variability necessitates the requirement of automated MS lesion classification methods. Among many image representation models and classification methods that can be used for such purpose, we investigate the use of sparse modeling. In the recent years, sparse representation has evolved as a tool in modeling data using a few basis elements of an over-complete dictionary and has found applications in many image processing tasks including classification. We propose a supervised classification approach by learning dictionaries specific to the lesions and individual healthy brain tissues, which include white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). The size of the dictionaries learned for each class plays a major role in data representation but it is an even more crucial element in the case of competitive classification. Our approach adapts the size of the dictionary for each class, depending on the complexity of the underlying data. The algorithm is validated using 52 multi-sequence MR images acquired from 13 MS patients. The results demonstrate the effectiveness of our approach in MS lesion classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
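
    The class-specific dictionary idea in this record can be sketched as reconstruction-error classification: a test signal is assigned to the class (e.g. lesion, WM, GM, CSF) whose dictionary reconstructs it best. Plain least squares stands in for sparse coding here, and the dictionaries are random illustrative atoms rather than learned ones.

```python
import numpy as np

def classify_by_reconstruction(x, dictionaries):
    """Assign x to the class whose dictionary gives the smallest reconstruction error."""
    errors = {}
    for label, D in dictionaries.items():
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        errors[label] = np.linalg.norm(D @ coef - x)
    return min(errors, key=errors.get)

rng = np.random.default_rng(1)
D_wm = rng.normal(size=(20, 5))       # illustrative "white matter" dictionary
D_lesion = rng.normal(size=(20, 5))   # illustrative "lesion" dictionary
dicts = {"WM": D_wm, "lesion": D_lesion}

x = D_lesion @ rng.normal(size=5)  # a signal lying in the lesion subspace
label = classify_by_reconstruction(x, dicts)
```

    The paper's adaptive element, choosing the number of atoms per class according to data complexity, would correspond here to varying each dictionary's column count.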

  15. Multi-Temporal Classification and Change Detection Using Uav Images

    NASA Astrophysics Data System (ADS)

    Makuti, S.; Nex, F.; Yang, M. Y.

    2018-05-01

    In this paper, different methodologies for the classification and change detection of UAV image blocks are explored. UAV is not only the cheapest platform for image acquisition but also the easiest platform to operate in repeated data collections over a changing area, such as a building construction site. Two change detection techniques have been evaluated in this study: the pre-classification and the post-classification algorithms. These methods are based on three main steps: feature extraction, classification, and change detection. A set of state-of-the-art features has been used in the tests: colour features (HSV), textural features (GLCM), and 3D geometric features. For classification purposes, a Conditional Random Field (CRF) has been used: the unary potential was determined using the Random Forest algorithm, while the pairwise potential was defined by the fully connected CRF. In the performed tests, different feature configurations and settings have been considered to assess the performance of these methods in this challenging task. Experimental results showed that the post-classification approach outperforms the pre-classification change detection method: measured by overall accuracy, post-classification reaches up to 62.6% while pre-classification change detection reaches 46.5%. These results represent a first useful indication for future work and developments.
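
    The post-classification scheme evaluated in this record classifies each epoch independently and then compares the label maps. A minimal sketch of that comparison step follows; the label maps are illustrative stand-ins for classified UAV image blocks.

```python
import numpy as np

def change_map(class_t1, class_t2):
    """Returns a boolean mask of changed pixels and their from-to class pairs."""
    changed = class_t1 != class_t2
    transitions = list(zip(class_t1[changed].tolist(), class_t2[changed].tolist()))
    return changed, transitions

t1 = np.array([[0, 0, 1],
               [1, 2, 2]])  # classification at epoch 1
t2 = np.array([[0, 1, 1],
               [1, 2, 0]])  # classification at epoch 2
mask, trans = change_map(t1, t2)
```

    The from-to pairs also give the change direction (e.g. ground to building), which a pre-classification method comparing raw features cannot provide directly.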

  16. Median Filter Noise Reduction of Image and Backpropagation Neural Network Model for Cervical Cancer Classification

    NASA Astrophysics Data System (ADS)

    Wutsqa, D. U.; Marwah, M.

    2017-06-01

    In this paper, we consider spatial operation median filter to reduce the noise in the cervical images yielded by colposcopy tool. The backpropagation neural network (BPNN) model is applied to the colposcopy images to classify cervical cancer. The classification process requires an image extraction by using a gray level co-occurrence matrix (GLCM) method to obtain image features that are used as inputs of BPNN model. The advantage of noise reduction is evaluated by comparing the performances of BPNN models with and without spatial operation median filter. The experimental result shows that the spatial operation median filter can improve the accuracy of the BPNN model for cervical cancer classification.
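
    The spatial median filter applied in this record before feature extraction can be sketched as follows: each output pixel is the median of its 3x3 neighborhood, which suppresses impulse noise while preserving edges. The edge-padding border handling is an illustrative choice.

```python
import numpy as np

def median_filter3(img):
    """3x3 spatial median filter with edge padding at the borders."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out

noisy = np.array([[10, 10, 10],
                  [10, 255, 10],   # a single impulse-noise pixel
                  [10, 10, 10]])
clean = median_filter3(noisy)      # the outlier is replaced by the local median
```

    In the paper's pipeline, the filtered image would then be passed to GLCM feature extraction and on to the BPNN classifier.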

  17. Semiautomated confocal imaging of fungal pathogenesis on plants: Microscopic analysis of macroscopic specimens.

    PubMed

    Minker, Katharine R; Biedrzycki, Meredith L; Kolagunda, Abhishek; Rhein, Stephen; Perina, Fabiano J; Jacobs, Samuel S; Moore, Michael; Jamann, Tiffany M; Yang, Qin; Nelson, Rebecca; Balint-Kurti, Peter; Kambhamettu, Chandra; Wisser, Randall J; Caplan, Jeffrey L

    2018-02-01

    The study of phenotypic variation in plant pathogenesis provides fundamental information about the nature of disease resistance. Cellular mechanisms that alter pathogenesis can be elucidated with confocal microscopy; however, systematic phenotyping platforms, from sample processing to image analysis, to investigate this do not exist. We have developed a platform for 3D phenotyping of cellular features underlying variation in disease development by fluorescence-specific resolution of host and pathogen interactions across time (4D). A confocal microscopy phenotyping platform compatible with different maize-fungal pathosystems (fungi: Setosphaeria turcica, Cochliobolus heterostrophus, and Cercospora zeae-maydis) was developed. Protocols and techniques were standardized for sample fixation, optical clearing, species-specific combinatorial fluorescence staining, multisample imaging, and image processing for investigation at the macroscale. The sample preparation methods presented here overcome challenges to fluorescence imaging such as specimen thickness and topography as well as physiological characteristics of the samples such as tissue autofluorescence and presence of cuticle. The resulting imaging techniques provide interesting qualitative and quantitative information not possible with conventional light or electron 2D imaging. Microsc. Res. Tech., 81:141-152, 2018. © 2016 Wiley Periodicals, Inc.

  18. Visual Exploration of Genetic Association with Voxel-based Imaging Phenotypes in an MCI/AD Study

    PubMed Central

    Kim, Sungeun; Shen, Li; Saykin, Andrew J.; West, John D.

    2010-01-01

    Neuroimaging genomics is a new transdisciplinary research field, which aims to examine genetic effects on brain via integrated analyses of high throughput neuroimaging and genomic data. We report our recent work on (1) developing an imaging genomic browsing system that allows for whole genome and entire brain analyses based on visual exploration and (2) applying the system to the imaging genomic analysis of an existing MCI/AD cohort. Voxel-based morphometry is used to define imaging phenotypes. ANCOVA is employed to evaluate the effect of the interaction of genotypes and diagnosis in relation to imaging phenotypes while controlling for relevant covariates. Encouraging experimental results suggest that the proposed system has substantial potential for enabling discovery of imaging genomic associations through visual evaluation and for localizing candidate imaging regions and genomic regions for refined statistical modeling. PMID:19963597

  19. Patterns of magnetic resonance imaging abnormalities in symptomatic patients with Krabbe disease correspond to phenotype.

    PubMed

    Abdelhalim, Ahmed N; Alberico, Ronald A; Barczykowski, Amy L; Duffner, Patricia K

    2014-02-01

    Initial magnetic resonance imaging studies of individuals with Krabbe disease were analyzed to determine whether the pattern of abnormalities corresponded to the phenotype. This was a retrospective, nonblinded study. Families/patients diagnosed with Krabbe disease submitted medical records and magnetic resonance imaging discs for central review. Institutional review board approval/informed consents were obtained. Sixty-four magnetic resonance imaging scans were reviewed by two neuroradiologists and a child neurologist according to phenotype: early infantile (onset 0-6 months) = 39 patients; late infantile (onset 7-12 months) = 10 patients; later onset (onset 13 months-10 years) = 11 patients; adolescent (onset 11-20 years) = one patient; and adult (21 years or greater) = three patients. Local interpretations were compared with central review. Magnetic resonance imaging abnormalities differed among phenotypes. Early infantile patients had a predominance of increased intensity in the dentate/cerebellar white matter as well as changes in the deep cerebral white matter. Later onset patients did not demonstrate involvement in the dentate/cerebellar white matter but had extensive involvement of the deep cerebral white matter, parieto-occipital region, and posterior corpus callosum. Late infantile patients exhibited a mixed pattern; 40% had dentate/cerebellar white matter involvement while all had involvement of the deep cerebral white matter. Adolescent/adult patients demonstrated isolated corticospinal tract involvement. Local and central reviews primarily differed in interpretation of the early infantile phenotype. Analysis of magnetic resonance imaging in a large cohort of symptomatic patients with Krabbe disease demonstrated imaging abnormalities correspond to specific phenotypes. Knowledge of these patterns along with typical clinical signs/symptoms should promote earlier diagnosis and facilitate treatment. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Defining the clinical course of multiple sclerosis

    PubMed Central

    Reingold, Stephen C.; Cohen, Jeffrey A.; Cutter, Gary R.; Sørensen, Per Soelberg; Thompson, Alan J.; Wolinsky, Jerry S.; Balcer, Laura J.; Banwell, Brenda; Barkhof, Frederik; Bebo, Bruce; Calabresi, Peter A.; Clanet, Michel; Comi, Giancarlo; Fox, Robert J.; Freedman, Mark S.; Goodman, Andrew D.; Inglese, Matilde; Kappos, Ludwig; Kieseier, Bernd C.; Lincoln, John A.; Lubetzki, Catherine; Miller, Aaron E.; Montalban, Xavier; O'Connor, Paul W.; Petkau, John; Pozzilli, Carlo; Rudick, Richard A.; Sormani, Maria Pia; Stüve, Olaf; Waubant, Emmanuelle; Polman, Chris H.

    2014-01-01

    Accurate clinical course descriptions (phenotypes) of multiple sclerosis (MS) are important for communication, prognostication, design and recruitment of clinical trials, and treatment decision-making. Standardized descriptions published in 1996 based on a survey of international MS experts provided purely clinical phenotypes based on data and consensus at that time, but imaging and biological correlates were lacking. Increased understanding of MS and its pathology, coupled with general concern that the original descriptors may not adequately reflect more recently identified clinical aspects of the disease, prompted a re-examination of MS disease phenotypes by the International Advisory Committee on Clinical Trials of MS. While imaging and biological markers that might provide objective criteria for separating clinical phenotypes are lacking, we propose refined descriptors that include consideration of disease activity (based on clinical relapse rate and imaging findings) and disease progression. Strategies for future research to better define phenotypes are also outlined. PMID:24871874

  1. Crowdsourcing as a novel technique for retinal fundus photography classification: analysis of images in the EPIC Norfolk cohort on behalf of the UK Biobank Eye and Vision Consortium.

    PubMed

    Mitry, Danny; Peto, Tunde; Hayat, Shabina; Morgan, James E; Khaw, Kay-Tee; Foster, Paul J

    2013-01-01

    Crowdsourcing is the process of outsourcing numerous tasks to many untrained individuals. Our aim was to assess the performance and repeatability of crowdsourcing for the classification of retinal fundus photographs. One hundred retinal fundus photographs with pre-determined disease criteria were selected by experts from a large cohort study. After reading brief instructions and an example classification, knowledge workers (KWs) from a crowdsourcing platform were asked to classify each image as normal or abnormal with grades of severity. Each image was classified 20 times by different KWs. Four study designs were examined to assess the effect of varying incentive and KW experience on classification accuracy. All study designs were conducted twice to examine repeatability. Performance was assessed by comparing the sensitivity, specificity and area under the receiver operating characteristic curve (AUC). Without restriction on eligible participants, 2,000 classifications of 100 images were received in under 24 hours at minimal cost. In trial 1, all study designs had an AUC (95% CI) of 0.701 (0.680-0.721) or greater for normal/abnormal classification; the highest AUC (95% CI) was 0.757 (0.738-0.776), achieved by KWs with moderate experience. Comparable results were observed in trial 2. In trial 1, between 64% and 86% of abnormal images were correctly classified by over half of all KWs; in trial 2, this ranged between 74% and 97%. Sensitivity was ≥96% for normal versus severely abnormal detections across all trials, and varied between 61% and 79% for normal versus mildly abnormal. With minimal training, crowdsourcing represents an accurate, rapid and cost-effective method of retinal image analysis with good repeatability. Larger studies with more comprehensive participant training are needed to explore the utility of this compelling technique in large-scale medical image analysis.
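
    The aggregation step described above, pooling many independent KW classifications per image and scoring the result against the expert reference, can be sketched as follows. This is an illustrative reconstruction, not the study's code; the simple majority-vote rule and the toy 4-image, 5-worker data are assumptions (the study collected 20 classifications per image and also graded severity).

```python
import numpy as np

def aggregate_crowd_labels(votes):
    """Majority vote: votes is (n_images, n_workers) of 0/1 labels."""
    return (votes.mean(axis=1) >= 0.5).astype(int)

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical toy data: 4 images, 5 workers each.
votes = np.array([[1, 1, 1, 0, 1],   # abnormal image, mostly detected
                  [0, 0, 1, 0, 0],   # normal image, one false alarm
                  [1, 0, 1, 1, 1],   # abnormal image
                  [0, 0, 0, 0, 1]])  # normal image
y_true = np.array([1, 0, 1, 0])      # expert reference
y_pred = aggregate_crowd_labels(votes)
sens, spec = sensitivity_specificity(y_true, y_pred)
```

With per-worker error rates well below 50%, majority voting over 20 classifications sharply reduces the effective error, which is consistent with the high sensitivities reported for clearly abnormal images.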

  2. Improved opponent color local binary patterns: an effective local image descriptor for color texture classification

    NASA Astrophysics Data System (ADS)

    Bianconi, Francesco; Bello-Cerezo, Raquel; Napoletano, Paolo

    2018-01-01

    Texture classification plays a major role in many computer vision applications. Local binary patterns (LBP) encoding schemes have largely been proven to be very effective for this task. Improved LBP (ILBP) are conceptually simple, easy to implement, and highly effective LBP variants based on a point-to-average thresholding scheme instead of a point-to-point one. We propose the use of this encoding scheme for extracting intra- and interchannel features for color texture classification. We experimentally evaluated the resulting improved opponent color LBP alone and in concatenation with the ILBP of the local color contrast map on a set of image classification tasks over 9 datasets of generic color textures and 11 datasets of biomedical textures. The proposed approach outperformed other grayscale and color LBP variants in nearly all the datasets considered and proved competitive even against image features from last generation convolutional neural networks, particularly for the classification of biomedical images.
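
    The point-to-average thresholding that distinguishes ILBP from classic LBP can be sketched in a few lines: all nine pixels of a 3x3 neighbourhood (centre included) are thresholded against the neighbourhood mean, yielding 9-bit codes (512 bins) rather than the 8-bit codes of classic LBP. This is a minimal single-channel illustration, not the authors' opponent-colour implementation.

```python
import numpy as np

def ilbp_code(patch):
    """Improved LBP for one 3x3 patch: threshold all 9 pixels against the
    patch mean (point-to-average), unlike classic LBP, which compares the
    8 neighbours to the centre pixel (point-to-point)."""
    bits = (patch.flatten() >= patch.mean()).astype(int)  # 9 bits -> codes 0..511
    return int(bits @ (2 ** np.arange(9)))

def ilbp_histogram(image):
    """Histogram of ILBP codes over all interior 3x3 neighbourhoods."""
    h, w = image.shape
    hist = np.zeros(512, dtype=int)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            hist[ilbp_code(image[i - 1:i + 2, j - 1:j + 2])] += 1
    return hist

# A bright centre pixel is the only value above the patch mean,
# so only bit 4 (the centre, in row-major order) is set.
patch = np.array([[10, 10, 10],
                  [10, 50, 10],
                  [10, 10, 10]], dtype=float)
code = ilbp_code(patch)
```

For the opponent-colour variant, the same encoding would be applied within each channel and across channel pairs, with the resulting histograms concatenated into one descriptor.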

  3. Quantum Cascade Laser-Based Infrared Microscopy for Label-Free and Automated Cancer Classification in Tissue Sections.

    PubMed

    Kuepper, Claus; Kallenbach-Thieltges, Angela; Juette, Hendrik; Tannapfel, Andrea; Großerueschkamp, Frederik; Gerwert, Klaus

    2018-05-16

    A feasibility study using a quantum cascade laser-based infrared microscope for the rapid and label-free classification of colorectal cancer tissues is presented. Infrared imaging is a reliable, robust, automated, and operator-independent tissue classification method that has been used to classify tissue thin sections and identify tumorous regions. However, the long acquisition times of the FT-IR-based microscopes used to date have hampered the clinical translation of this technique. The quantum cascade laser-based microscope used here provides infrared images for precise tissue classification within a few minutes. We analyzed 110 patients with UICC stage II and III colorectal cancer; the label-free method showed 96% sensitivity and 100% specificity compared with histopathology, the gold standard in routine clinical diagnostics. The main hurdle to the clinical translation of IR imaging is thus overcome: high-quality diagnostic images can now be acquired in roughly the time a pathologist needs to prepare frozen sections.

  4. Adipose Tissue Quantification by Imaging Methods: A Proposed Classification

    PubMed Central

    Shen, Wei; Wang, ZiMian; Punyanita, Mark; Lei, Jianbo; Sinav, Ahmet; Kral, John G.; Imielinska, Celina; Ross, Robert; Heymsfield, Steven B.

    2007-01-01

    Recent advances in imaging techniques and understanding of differences in the molecular biology of adipose tissue have rendered classical anatomy obsolete, requiring a new classification of the topography of adipose tissue. Adipose tissue is one of the largest body compartments, yet a classification that defines specific adipose tissue depots based on their anatomic location and related functions is lacking. The absence of an accepted taxonomy poses problems for investigators studying adipose tissue topography and its functional correlates. The aim of this review was to critically examine the literature on imaging of whole body and regional adipose tissue and to create the first systematic classification of adipose tissue topography. Adipose tissue terminology was examined in over 100 original publications. Our analysis revealed inconsistencies in the use of specific definitions, especially for the compartment termed “visceral” adipose tissue. This analysis leads us to propose an updated classification of total body and regional adipose tissue, providing a well-defined basis for correlating imaging studies of specific adipose tissue depots with molecular processes. PMID:12529479

  5. A Locality-Constrained and Label Embedding Dictionary Learning Algorithm for Image Classification.

    PubMed

    Zhengming Li; Zhihui Lai; Yong Xu; Jian Yang; Zhang, David

    2017-02-01

    Locality and label information of training samples play an important role in image classification. However, previous dictionary learning algorithms do not take the locality and label information of atoms into account together in the learning process, and thus their performance is limited. In this paper, a discriminative dictionary learning algorithm, called the locality-constrained and label embedding dictionary learning (LCLE-DL) algorithm, was proposed for image classification. First, the locality information was preserved using the graph Laplacian matrix of the learned dictionary instead of the conventional one derived from the training samples. Then, the label embedding term was constructed using the label information of atoms instead of the classification error term, which contained discriminating information of the learned dictionary. The optimal coding coefficients derived by the locality-based and label-based reconstruction were effective for image classification. Experimental results demonstrated that the LCLE-DL algorithm can achieve better performance than some state-of-the-art algorithms.
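
    A hedged sketch of the locality ingredient above: the graph Laplacian is built over the dictionary atoms themselves rather than over the training samples, so that a trace penalty on the coding coefficients respects atom locality. The kNN Gaussian affinity and its parameters are illustrative assumptions; the full LCLE-DL objective (coding, label embedding term, optimization) is not reproduced here.

```python
import numpy as np

def dictionary_laplacian(D, k=2, sigma=1.0):
    """Graph Laplacian L = Deg - W over dictionary atoms (columns of D),
    using a k-nearest-neighbour Gaussian affinity. A penalty of the form
    Tr(X L X^T) then encourages coding coefficients X of nearby atoms to
    vary smoothly."""
    n = D.shape[1]
    # Pairwise Euclidean distances between atoms (columns).
    dist = np.linalg.norm(D[:, :, None] - D[:, None, :], axis=0)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]            # k nearest atoms, skipping self
        W[i, nbrs] = np.exp(-dist[i, nbrs] ** 2 / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                             # symmetrise the affinity
    return np.diag(W.sum(axis=1)) - W

D = np.random.default_rng(0).standard_normal((8, 5))   # 8-dim space, 5 atoms
L = dictionary_laplacian(D)
```

By construction L is symmetric positive semi-definite with zero row sums, the standard properties exploited by Laplacian-based locality regularizers.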

  6. [Evaluation of traditional pathological classification at molecular classification era for gastric cancer].

    PubMed

    Yu, Yingyan

    2014-01-01

    Histopathological classification plays a pivotal role in both basic research and the clinical diagnosis and treatment of gastric cancer. Currently, different classification systems are used in basic science and clinical application: the medical literature employs both the Lauren and WHO systems, which has confused many researchers. The Lauren classification was proposed half a century ago but is still used worldwide; it is simple, easy to apply, and prognostically significant. The WHO classification is superior in that it is continuously revised as understanding of gastric cancer progresses, and it remains the standard for routine clinical and pathological diagnosis. With advances in genomics, transcriptomics, proteomics, and metabolomics, molecular classification of gastric cancer has become a topic of intense current interest. The traditional therapeutic approach based on the phenotypic characteristics of gastric cancer will most likely be replaced by one based on gene variation: gene-targeted therapy directed at the same molecular alteration appears more rational than traditional chemotherapy based on the same morphological change.

  7. Two distinct symptom-based phenotypes of depression in epilepsy yield specific clinical and etiological insights.

    PubMed

    Rayner, Genevieve; Jackson, Graeme D; Wilson, Sarah J

    2016-11-01

    Depression is common but underdiagnosed in epilepsy. A quarter of patients meet criteria for a depressive disorder, yet few receive active treatment. We hypothesize that the presentation of depression is less recognizable in epilepsy because the symptoms are heterogeneous and often incorrectly attributed to the secondary effects of seizures or medication. Extending the ILAE's new phenomenological approach to classification of the epilepsies to include psychiatric comorbidity, we use data-driven profiling of the symptoms of depression to perform a preliminary investigation of whether there is a distinctive symptom-based phenotype of depression in epilepsy that could facilitate its recognition in the neurology clinic. The psychiatric and neuropsychological functioning of 91 patients with focal epilepsy was compared with that of 77 healthy controls (N=168). Cluster analysis of current depressive symptoms identified three clusters: one comprising nondepressed patients and two symptom-based phenotypes of depression. The 'Cognitive' phenotype (base rate=17%) was characterized by symptoms taking the form of self-critical cognitions and dysphoria and was accompanied by pervasive memory deficits. The 'Somatic' phenotype (7%) was characterized by vegetative depressive symptoms and anhedonia and was accompanied by greater anxiety. It is hoped that identification of the features of these two phenotypes will ultimately facilitate improved detection and diagnosis of depression in patients with epilepsy and thereby lead to appropriate and timely treatment, to the benefit of patient wellbeing and the potential efficacy of treatment of the seizure disorder. This article is part of a Special Issue entitled "The new approach to classification: Rethinking cognition and behavior in epilepsy". Copyright © 2016 Elsevier Inc. All rights reserved.

  8. An approach to understanding sleep and depressed mood in adolescents: person-centred sleep classification.

    PubMed

    Shochat, Tamar; Barker, David H; Sharkey, Katherine M; Van Reen, Eliza; Roane, Brandy M; Carskadon, Mary A

    2017-12-01

    Depressive mood in youth has been associated with distinct sleep dimensions, such as timing, duration and quality. To identify discrete sleep phenotypes, we applied person-centred analysis (latent class mixture models) based on self-reported sleep patterns and quality, and examined associations between phenotypes and mood in high-school seniors. Students (n = 1451; mean age = 18.4 ± 0.3 years; 648 M) completed a survey near the end of high-school. Indicators used for classification included school-night bed- and rise-times, differences between non-school-night and school-night bed- and rise-times, sleep-onset latency, number of awakenings, naps, and sleep quality and disturbance. Mood was measured using the total score on the Center for Epidemiologic Studies-Depression Scale. One-way ANOVA tested differences in mood between phenotypes. Fit indices were split between 3-, 4- and 5-phenotype solutions. For all solutions, between-phenotype differences were shown for all indicators: bedtime showed the largest difference; thus, classes were labelled from earliest to latest bedtime as 'A' (n = 751), 'B' (n = 428) and 'C' (n = 272) in the 3-class solution. Class B showed the lowest sleep disturbances and remained stable, whereas classes C and A each split in the 4- and 5-class solutions, respectively. Associations with mood were consistent, albeit small, with class B showing the lowest scores. Person-centred analysis identified sleep phenotypes that differed in mood, such that those with the fewest depressive symptoms had moderate sleep timing, shorter sleep-onset latencies and fewer arousals. Sleep characteristics in these groups may add to our understanding of how sleep and depressed mood associate in teens. © 2017 European Sleep Research Society.
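
    The person-centred grouping step can be illustrated with a simplified stand-in: plain k-means on sleep indicators instead of the latent class mixture models used in the study (which also compare 3-, 4- and 5-class solutions via fit indices). The two toy indicators, their units, and the cluster count are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: a simplified, distance-based stand-in for the latent
    class mixture models used in person-centred classification."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Hypothetical sleep indicators per student:
# [bedtime hours past 20:00, sleep-onset latency / 10 min]
rng = np.random.default_rng(1)
early = rng.normal([1.0, 1.0], 0.2, size=(30, 2))   # 'A'-like early-bedtime group
late = rng.normal([5.0, 3.0], 0.2, size=(30, 2))    # 'C'-like late-bedtime group
X = np.vstack([early, late])
labels, centers = kmeans(X, k=2)
```

In the study itself, class membership would then be cross-tabulated against depression scores (here, a one-way ANOVA across the recovered classes).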

  9. Label-free identification of macrophage phenotype by fluorescence lifetime imaging microscopy

    NASA Astrophysics Data System (ADS)

    Alfonso-García, Alba; Smith, Tim D.; Datta, Rupsa; Luu, Thuy U.; Gratton, Enrico; Potma, Eric O.; Liu, Wendy F.

    2016-04-01

    Macrophages adopt a variety of phenotypes that are a reflection of the many functions they perform as part of the immune system. In particular, metabolism is a phenotypic trait that differs between classically activated, proinflammatory macrophages, and alternatively activated, prohealing macrophages. Inflammatory macrophages have a metabolism based on glycolysis while alternatively activated macrophages generally rely on oxidative phosphorylation to generate chemical energy. We employ this shift in metabolism as an endogenous marker to identify the phenotype of individual macrophages via live-cell fluorescence lifetime imaging microscopy (FLIM). We demonstrate that polarized macrophages can be readily discriminated with the aid of a phasor approach to FLIM, which provides a fast and model-free method for analyzing fluorescence lifetime images.
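
    The phasor approach maps each pixel's fluorescence decay to a point (g, s) on the universal semicircle, so lifetime populations can be separated without fitting a decay model. A minimal sketch for a single decay trace follows; the 80-MHz repetition rate and the example short/long lifetimes (loosely NADH-like) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def phasor(decay, t, omega):
    """First-harmonic phasor coordinates (g, s) of a fluorescence decay:
    g = <I cos(wt)> / <I>,  s = <I sin(wt)> / <I>."""
    g = np.sum(decay * np.cos(omega * t)) / np.sum(decay)
    s = np.sum(decay * np.sin(omega * t)) / np.sum(decay)
    return g, s

t = np.linspace(0, 50e-9, 5000)          # 50 ns acquisition window
omega = 2 * np.pi * 80e6                 # assumed 80 MHz laser repetition rate
# Two single-exponential decays with distinct lifetimes land at distinct
# points on the semicircle g^2 + s^2 = g (shorter lifetime nearer (1, 0)).
g1, s1 = phasor(np.exp(-t / 0.4e-9), t, omega)   # tau = 0.4 ns (short)
g2, s2 = phasor(np.exp(-t / 3.4e-9), t, omega)   # tau = 3.4 ns (long)
```

Because glycolytic and oxidative metabolic states shift the mix of short- and long-lifetime components, pixel clouds from differently polarized macrophages separate along this semicircle, which is the model-free discrimination the paper exploits.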

  10. An image analysis toolbox for high-throughput C. elegans assays

    PubMed Central

    Wählby, Carolina; Kamentsky, Lee; Liu, Zihan H.; Riklin-Raviv, Tammy; Conery, Annie L.; O’Rourke, Eyleen J.; Sokolnicki, Katherine L.; Visvikis, Orane; Ljosa, Vebjorn; Irazoqui, Javier E.; Golland, Polina; Ruvkun, Gary; Ausubel, Frederick M.; Carpenter, Anne E.

    2012-01-01

    We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available via the open-source CellProfiler project and enables objective scoring of whole-animal high-throughput image-based assays of C. elegans for the study of diverse biological pathways relevant to human disease. PMID:22522656

  11. Classification and overview of research in real-time imaging

    NASA Astrophysics Data System (ADS)

    Sinha, Purnendu; Gorinsky, Sergey V.; Laplante, Phillip A.; Stoyenko, Alexander D.; Marlowe, Thomas J.

    1996-10-01

    Real-time imaging has application in areas such as multimedia, virtual reality, medical imaging, and remote sensing and control. Recently, the imaging community has witnessed a tremendous growth in research and new ideas in these areas. To lend structure to this growth, we outline a classification scheme and provide an overview of current research in real-time imaging. For convenience, we have categorized references by research area and application.

  12. Pulsed terahertz imaging of breast cancer in freshly excised murine tumors

    NASA Astrophysics Data System (ADS)

    Bowman, Tyler; Chavez, Tanny; Khan, Kamrul; Wu, Jingxian; Chakraborty, Avishek; Rajaram, Narasimhan; Bailey, Keith; El-Shenawee, Magda

    2018-02-01

    This paper investigates terahertz (THz) imaging and classification of freshly excised murine xenograft breast cancer tumors. These tumors are grown via injection of E0771 breast adenocarcinoma cells into the flank of mice maintained on high-fat diet. Within 1 h of excision, the tumor and adjacent tissues are imaged using a pulsed THz system in the reflection mode. The THz images are classified using a statistical Bayesian mixture model with unsupervised and supervised approaches. Correlation with digitized pathology images is conducted using classification images assigned by a modal class decision rule. The corresponding receiver operating characteristic curves are obtained based on the classification results. A total of 13 tumor samples obtained from 9 tumors are investigated. The results show good correlation of THz images with pathology results in all samples of cancer and fat tissues. For tumor samples of cancer, fat, and muscle tissues, THz images show reasonable correlation with pathology where the primary challenge lies in the overlapping dielectric properties of cancer and muscle tissues. The use of a supervised regression approach shows improvement in the classification images although not consistently in all tissue regions. Advancing THz imaging of breast tumors from mice and the development of accurate statistical models will ultimately progress the technique for the assessment of human breast tumor margins.

  13. Classification of Urban Feature from Unmanned Aerial Vehicle Images Using Gasvm Integration and Multi-Scale Segmentation

    NASA Astrophysics Data System (ADS)

    Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.

    2015-12-01

    UAV photogrammetry, which acquires overlapping cover images to achieve the main objectives of photogrammetric mapping, has seen rapidly growing use. Images of the REGGIOLO region in the province of Reggio-Emilia, Italy, taken by a UAV carrying a non-metric Canon Ixus camera at an average flight height of 139.42 m, were used to classify urban features. Using the SURE software and the cover images of the study area, a dense point cloud, a DSM, and an orthophoto with a spatial resolution of 10 cm were produced. A DTM of the area was derived with the adaptive TIN filtering algorithm, and an nDSM, obtained as the difference between the DSM and DTM, was added as a separate layer to the image stack. For feature extraction, gray-level co-occurrence matrix measures (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation) were computed for each RGB band of the orthophoto. The classes used for the urban classification were buildings, trees and tall vegetation, grass and short vegetation, paved roads, and impervious surfaces; the impervious-surface class includes pavement, cement, cars, and roofs. Pixel-based classification with optimal feature selection was carried out using a GA-SVM (genetic algorithm plus support vector machine) approach. To achieve higher classification accuracy by combining spectral, textural, and conceptual shape information, the orthophoto was additionally segmented using a multi-scale segmentation method and objects were assigned to classes. The results suggest the suitability of the proposed method for classifying urban features from UAV images: the overall accuracy and kappa coefficient of the proposed method were 93.47% and 91.84%, respectively.

  14. Computer-based classification of bacteria species by analysis of their colonies Fresnel diffraction patterns

    NASA Astrophysics Data System (ADS)

    Suchwalko, Agnieszka; Buzalewicz, Igor; Podbielska, Halina

    2012-01-01

    In this paper, an optical system with converging spherical-wave illumination for the classification of bacteria species is proposed. It allows compression of the observation space, observation of Fresnel patterns, diffraction-pattern scaling, and a low level of optical aberrations, properties not offered by other optical configurations. Experimental results show that colonies of specific bacteria species generate unique diffraction signatures, so analysis of the Fresnel diffraction patterns of bacteria colonies can be a fast and reliable method for classifying and recognizing bacteria species. To determine the unique features of colony diffraction patterns, an image-processing analysis is proposed: classification is performed on the spatial structure of the diffraction patterns, which can be characterized as a set of concentric rings whose characteristics depend on the bacteria species. The influence of the basic features and of the number of ring partitions on bacteria classification is analyzed. It is demonstrated that Fresnel patterns can be used to classify the following species: Salmonella enteritidis, Staphylococcus aureus, Proteus mirabilis and Citrobacter freundii. Image processing is performed with the free ImageJ software, for which a dedicated macro with human interaction was written; LDA classification, cross-validation (CV), ANOVA and PCA visualizations, preceded by image-data extraction, were conducted using the free software R.

  15. A Molecular Perspective on Systematics, Taxonomy and Classification Amazonian Discus Fishes of the Genus Symphysodon

    PubMed Central

    Amado, Manuella Villar; Farias, Izeni P.; Hrbek, Tomas

    2011-01-01

    With the goal of contributing to the taxonomy and systematics of the Neotropical cichlid fishes of the genus Symphysodon, we analyzed 336 individuals from 24 localities throughout the entire distributional range of the genus. We analyzed variation at 13 nuclear microsatellite markers, and subjected the data to Bayesian analysis of genetic structure. The results indicate that Symphysodon is composed of four genetic groups: group PURPLE—phenotype Heckel and abacaxi; group GREEN—phenotype green; group RED—phenotype blue and brown; and group PINK—populations of Xingú and Cametá. Although the phenotypes blue and brown are predominantly biological group RED, they also have substantial contributions from other biological groups, and the patterns of admixture of the two phenotypes are different. The two phenotypes are further characterized by distinct and divergent mtDNA haplotype groups, and show differences in mean habitat use measured as pH and conductivity. Differences in mean habitat use is also observed between most other biological groups. We therefore conclude that Symphysodon comprises five evolutionary significant units: Symphysodon discus (Heckel and abacaxi phenotypes), S. aequifasciatus (brown phenotype), S. tarzoo (green phenotype), Symphysodon sp. 1 (blue phenotype) and Symphysodon sp. 2 (Xingú group). PMID:21811676

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Hodas, Nathan O.; Baker, Nathan A.

    Forensic analysis of nanoparticles is often conducted through the collection and identification of electron microscopy images to determine the origin of suspected nuclear material. Each image is carefully studied by experts for classification of materials based on texture, shape, and size. Manually inspecting large image datasets takes enormous amounts of time. However, automatic classification of large image datasets is a challenging problem due to the complexity involved in choosing image features, the lack of training data available for effective machine learning methods, and the availability of user interfaces to parse through images. Therefore, a significant need exists for automated and semi-automated methods to help analysts perform accurate image classification in large image datasets. We present INStINCt, our Intelligent Signature Canvas, as a framework for quickly organizing image data in a web-based canvas framework. Images are partitioned using small sets of example images, chosen by users, and presented in an optimal layout based on features derived from convolutional neural networks.

  17. Molecular approaches for classifying endometrial carcinoma.

    PubMed

    Piulats, Josep M; Guerra, Esther; Gil-Martín, Marta; Roman-Canal, Berta; Gatius, Sonia; Sanz-Pamplona, Rebeca; Velasco, Ana; Vidal, August; Matias-Guiu, Xavier

    2017-04-01

    Endometrial carcinoma is the most common cancer of the female genital tract. This review article discusses the usefulness of molecular techniques to classify endometrial carcinoma. Any proposal for molecular classification of neoplasms should integrate morphological features of the tumors. For that reason, we start with the current histological classification of endometrial carcinoma, by discussing the correlation between genotype and phenotype, and the most significant recent improvements. Then, we comment on some of the possible flaws of this classification, by discussing also the value of molecular pathology in improving them, including interobserver variation in pathologic interpretation of high grade tumors. Third, we discuss the importance of applying TCGA molecular approach to clinical practice. We also comment on the impact of intratumor heterogeneity in classification, and finally, we will discuss briefly, the usefulness of TCGA classification in tailoring immunotherapy in endometrial cancer patients. We suggest combining pathologic classification and the surrogate TCGA molecular classification for high-grade endometrial carcinomas, as an option to improve assessment of prognosis. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
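
    The correlation-based selection over autoencoder weight vectors can be sketched as a greedy filter: a hidden unit is kept only if its weight vector is not too correlated with any already-kept unit, which removes near-duplicate filters and cuts the cost of global feature extraction. The greedy order and the 0.95 threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_decorrelated(W, threshold=0.95):
    """Greedy correlation-based selection. W holds one weight vector
    (hidden unit) per row; a unit is kept only if its absolute Pearson
    correlation with every previously kept unit is below the threshold."""
    kept = []
    for i, w in enumerate(W):
        if all(abs(np.corrcoef(w, W[j])[0, 1]) < threshold for j in kept):
            kept.append(i)
    return kept

# Hypothetical learned weights: unit 1 is a near-duplicate of unit 0,
# unit 2 is independent, so the filter keeps units 0 and 2.
rng = np.random.default_rng(0)
base = rng.standard_normal(64)
W = np.vstack([base,
               base + 0.01 * rng.standard_normal(64),   # near-duplicate filter
               rng.standard_normal(64)])                # independent filter
kept = select_decorrelated(W, threshold=0.95)
```

The surviving weight vectors would then be used as convolution kernels for extracting the global features fed to the logistic regression classifiers.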

  19. Brain tumour classification and abnormality detection using neuro-fuzzy technique and Otsu thresholding.

    PubMed

    Renjith, Arokia; Manjula, P; Mohan Kumar, P

    2015-01-01

    Brain tumour is one of the main causes of increased mortality among children and adults. This paper proposes an improved method for Magnetic Resonance Imaging (MRI) brain image classification and segmentation. Automated classification is motivated by the need for high accuracy when dealing with a human life. Detecting brain tumours is challenging, owing to the high diversity of tumour appearance and ambiguous tumour boundaries. MRI images are chosen for detecting brain tumours because of their superior soft-tissue contrast. First, image pre-processing is used to enhance image quality. Second, dual-tree complex wavelet transform multi-scale decomposition is used to analyse image texture. Feature extraction then derives features from the image using the gray-level co-occurrence matrix (GLCM). Next, a Neuro-Fuzzy technique classifies the stage of a brain tumour as benign, malignant or normal based on the texture features. Finally, the tumour location is detected using Otsu thresholding. Classifier performance is evaluated by classification accuracy, and the simulated results show that the proposed classifier is more accurate than the previous method.
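
    Two of the building blocks named above, GLCM texture features and Otsu thresholding, can be sketched in plain NumPy. This is an illustrative reconstruction (a single co-occurrence offset and two of the standard Haralick features), not the paper's pipeline; the toy images are assumptions.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalised to sum 1."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast and homogeneity, two common GLCM texture features."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

def otsu_threshold(img, levels=256):
    """Otsu's threshold: maximise between-class variance over histogram splits."""
    prob = np.bincount(img.ravel(), minlength=levels).astype(float)
    prob /= prob.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, levels) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])                 # toy 4-level texture
contrast, homogeneity = glcm_features(glcm(img, levels=4))
bimodal = np.vstack([np.full((2, 4), 10), np.full((2, 4), 200)])
t = otsu_threshold(bimodal)                    # splits the two intensity modes
```

In the paper, the GLCM features would be computed on the wavelet sub-bands and fed to the neuro-fuzzy classifier, with Otsu thresholding applied afterwards to localize the tumour.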

  20. Semi-supervised manifold learning with affinity regularization for Alzheimer's disease identification using positron emission tomography imaging.

    PubMed

    Lu, Shen; Xia, Yong; Cai, Tom Weidong; Feng, David Dagan

    2015-01-01

    Dementia, and Alzheimer's disease (AD) in particular, is a global problem and a major threat to the aging population. An image-based computer-aided dementia diagnosis method is needed to assist doctors during medical image examination. Many machine learning-based dementia classification methods using medical imaging have been proposed, and most achieve accurate results. However, most of these methods rely on supervised learning and thus require a fully labeled image dataset, which is usually impractical in a real clinical environment. Using large amounts of unlabeled images can improve dementia classification performance. In this study we propose a new semi-supervised dementia classification method based on random manifold learning with affinity regularization. Three groups of spatial features are extracted from positron emission tomography (PET) images to construct an unsupervised random forest, which is then used to regularize the manifold learning objective function. The proposed method, the state-of-the-art Laplacian support vector machine (LapSVM), and a supervised SVM are applied to classify AD patients and normal controls (NC). The experimental results show that learning with unlabeled images indeed improves classification performance, and that our method outperforms LapSVM on the same dataset.

  1. Evaluation of Alzheimer's disease by analysis of MR images using Objective Dialectical Classifiers as an alternative to ADC maps.

    PubMed

    Dos Santos, Wellington P; de Assis, Francisco M; de Souza, Ricardo E; Dos Santos Filho, Plinio B

    2008-01-01

    Alzheimer's disease is the most common cause of dementia, yet it is hard to diagnose precisely without invasive techniques, particularly at the onset of the disease. This work approaches the analysis and classification of synthetic multispectral images composed of diffusion-weighted (DW) magnetic resonance (MR) cerebral images for evaluating the cerebrospinal fluid area and measuring the advance of Alzheimer's disease. A clinical 1.5 T MR imaging system was used to acquire all images presented. The classification methods are based on Objective Dialectical Classifiers, a new method based on Dialectics as defined in the Philosophy of Praxis. A 2-degree polynomial network with supervised training is used to generate the ground truth image. The classification results are used to improve the usual analysis of the apparent diffusion coefficient map.

  2. Object-oriented recognition of high-resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan

    2016-01-01

    With the development of remote sensing imaging technology and the improving resolution of multi-source imagery in the visible, multi-spectral and hyperspectral domains, high-resolution remote sensing images have been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting, the environment and so forth. In remote sensing imagery, the segmentation of ground targets, feature extraction and automatic recognition are hotspots and difficulties in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of classification-sample generation for typical vehicle objects, nonparametric density estimation, mean shift segmentation, a multi-scale corner detection algorithm, and a template-based local shape matching algorithm. A remote sensing vehicle image classification software system is designed and implemented to meet these requirements.
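Mean shift, one of the components listed above, can be illustrated in one dimension with a Gaussian kernel. The bandwidth is an assumed parameter; real image segmentation runs this procedure over joint spatial-range feature vectors rather than scalars:

```python
import numpy as np

def mean_shift_modes(points, bandwidth=0.5, iters=100):
    """Move every point uphill to a mode of the kernel density estimate;
    points that converge to the same mode form one segment."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            w = np.exp(-((points - m) ** 2) / (2.0 * bandwidth ** 2))
            modes[i] = np.sum(w * points) / np.sum(w)   # weighted mean shift
    return modes
```

Grouping the converged modes yields the segmentation: nearby samples collapse onto a shared mode.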

  3. Objective breast tissue image classification using Quantitative Transmission ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Malik, Bilal; Klock, John; Wiskin, James; Lenox, Mark

    2016-12-01

    Quantitative Transmission Ultrasound (QT) is a powerful and emerging imaging paradigm which has the potential to perform true three-dimensional image reconstruction of biological tissue. Breast imaging is an important application of QT, allowing non-invasive, non-ionizing imaging of whole breasts in vivo. Here, we report the first demonstration of breast tissue image classification in QT imaging. We systematically assess the ability of three QT image features to differentiate between normal breast tissue types. The three QT features were used in Support Vector Machine (SVM) classifiers, and classification of breast tissue as skin, fat, glands, ducts or connective tissue was demonstrated with an overall accuracy greater than 90%. Finally, the classifier was validated on whole-breast image volumes to provide a color-coded breast tissue volume. This study serves as a first step towards a computer-aided detection/diagnosis platform for QT.

  4. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  5. The fragmented nature of tundra landscape

    NASA Astrophysics Data System (ADS)

    Virtanen, Tarmo; Ek, Malin

    2014-04-01

    The vegetation and land cover structure of tundra areas is fragmented when compared to other biomes. Thus, satellite images of high resolution are required for producing land cover classifications, in order to reveal the actual distribution of land cover types across these large and remote areas. We produced and compared different land cover classifications using three satellite images (QuickBird, Aster and Landsat TM5) with different pixel sizes (2.4 m, 15 m and 30 m pixel size, respectively). The study area, in north-eastern European Russia, was visited in July 2007 to obtain ground reference data. The QuickBird image was classified using supervised segmentation techniques, while the Aster and Landsat TM5 images were classified using a pixel-based supervised classification method. The QuickBird classification showed the highest accuracy when tested against field data, while the Aster image was generally more problematic to classify than the Landsat TM5 image. Use of smaller pixel sized images distinguished much greater levels of landscape fragmentation. The overall mean patch sizes in the QuickBird, Aster, and Landsat TM5-classifications were 871 m2, 2141 m2 and 7433 m2, respectively. In the QuickBird classification, the mean patch size of all the tundra and peatland vegetation classes was smaller than one pixel of the Landsat TM5 image. Water bodies and fens in particular occur in the landscape in small or elongated patches, and thus cannot be realistically classified from larger pixel sized images. Land cover patterns vary considerably at such a fine-scale, so that a lot of information is lost if only medium resolution satellite images are used. It is crucial to know the amount and spatial distribution of different vegetation types in arctic landscapes, as carbon dynamics and other climate related physical, geological and biological processes are known to vary greatly between vegetation types.

  6. Deep multi-scale convolutional neural network for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. First, compared with conventional convolutions, we utilize multi-scale convolutions, which possess larger receptive fields, to extract spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains three different convolution kernel sizes. Second, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and contributes a modest improvement in classification accuracy. In addition, techniques such as the ReLU activation are also employed. We conduct experiments on the University of Pavia and Salinas datasets, and obtain better classification accuracy compared with other methods.
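The three ingredients named above (multi-scale convolution, dropout, ReLU) can be mimicked in NumPy. This is a 1-D spectral illustration with simple averaging kernels, not the paper's trained network; the kernel sizes merely follow the abstract's "three different sizes":

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dropout(x, p, rng):
    """Inverted dropout: randomly 'sleep' activations during training
    while preserving the expected activation magnitude."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def multi_scale_conv(spectrum, kernel_sizes=(3, 5, 7)):
    """Filter one pixel's spectral vector at three scales and stack the
    responses, mimicking a multi-scale convolution layer."""
    outs = [np.convolve(spectrum, np.ones(k) / k, mode="same")
            for k in kernel_sizes]
    return relu(np.stack(outs))            # shape: (n_scales, n_bands)
```

Each kernel size sees a different spectral neighbourhood, which is the "larger receptive field" effect the abstract describes.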

  7. Automatic classification for mammogram backgrounds based on bi-rads complexity definition and on a multi content analysis framework

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

    Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying each image based on features such as the complexity of the background and the visibility of the disease (lesions). An automatic background classification tool for mammograms would therefore help in such clinical studies. The classification tool is based on a multi-content analysis (MCA) framework originally developed to recognize the image content of computer screen shots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfying accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale, which standardizes mammography reporting terminology as well as assessment and recommendation categories, is used for grouping the mammograms. Selected features are input into a decision tree classification scheme in the MCA framework, acting as a so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these weak classifiers are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The classification results for one strong classifier show good accuracy with high true positive rates: over the four categories, TP = 90.38%, TN = 67.88%, FP = 32.12% and FN = 9.62%.
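The weak-to-strong combination described above can be sketched with decision stumps on a single feature. This is a generic AdaBoost illustration, not the MCA framework's decision trees; the data and number of rounds are made up:

```python
import numpy as np

def stump_predict(x, thresh, sign):
    """A threshold 'weak classifier' on one feature, labels in {-1, +1}."""
    return sign * np.where(x >= thresh, 1, -1)

def adaboost_train(x, y, rounds=10):
    """Combine weak stumps (error < 50%) into a weighted strong classifier."""
    n = len(x)
    w = np.full(n, 1.0 / n)                       # sample weights
    model = []
    for _ in range(rounds):
        best = None
        for thresh in np.unique(x):               # exhaustive stump search
            for sign in (1, -1):
                err = np.sum(w[stump_predict(x, thresh, sign) != y])
                if best is None or err < best[0]:
                    best = (err, thresh, sign)
        err, thresh, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)     # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)     # stump weight
        w *= np.exp(-alpha * y * stump_predict(x, thresh, sign))
        w /= w.sum()                              # re-weight hard samples
        model.append((alpha, thresh, sign))
    return model

def adaboost_predict(model, x):
    score = sum(a * stump_predict(x, t, s) for a, t, s in model)
    return np.sign(score)
```

Each round up-weights the samples the previous stump got wrong, which is why a collection of barely-better-than-chance classifiers can reach a low global error rate.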

  8. Use of an automatic procedure for determination of classes of land use in the Teste Araras area of the peripheral Paulist depression

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Valeriano, D. D.

    1981-01-01

    An evaluation of the multispectral image analyzer (Image-100 system), using automatic classification, is presented. The region studied is situated in the Araras test area of the peripheral Paulista depression. The automatic classification was carried out using the maximum likelihood (MAXVER) classification system. The following classes were established: urban area, bare soil, sugar cane, citrus culture (oranges), pastures, and reforestation. The classification matrix of the test sites indicates that the percentage of correct classification varied between 63% and 100%.
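MAXVER is a Gaussian maximum-likelihood classifier. A minimal per-pixel version looks like the following; the class means and covariances are assumed to have been estimated from training sites:

```python
import numpy as np

def maxver_classify(pixels, means, covs):
    """Assign each pixel the class with the highest Gaussian log-likelihood
    (constant terms dropped, equal priors assumed)."""
    scores = []
    for m, c in zip(means, covs):
        inv = np.linalg.inv(c)
        diff = pixels - m
        mahal = np.einsum("ni,ij,nj->n", diff, inv, diff)  # Mahalanobis term
        scores.append(-0.5 * (mahal + np.log(np.linalg.det(c))))
    return np.argmax(np.stack(scores), axis=0)
```

Each of the six land-use classes above would contribute one (mean, covariance) pair estimated from its labeled pixels.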

  9. Computer-aided Classification of Mammographic Masses Using Visually Sensitive Image Features

    PubMed Central

    Wang, Yunzhi; Aghaei, Faranak; Zarafshani, Ali; Qiu, Yuchen; Qian, Wei; Zheng, Bin

    2017-01-01

    Purpose To develop a new computer-aided diagnosis (CAD) scheme that computes visually sensitive image features routinely used by radiologists, in order to build a machine learning classifier that distinguishes between malignant and benign breast masses detected on digital mammograms. Methods An image dataset including 301 breast masses was retrospectively selected. From each segmented mass region, we computed image features that mimic five categories of visually sensitive features routinely used by radiologists in reading mammograms. We then selected five optimal features from the five feature categories and applied logistic regression models for classification. A new CAD interface was also designed to show the lesion segmentation, computed feature values and classification score. Results Areas under the ROC curve (AUC) were 0.786±0.026 and 0.758±0.027 when classifying the mass regions depicted on the two view images, respectively. By fusing the classification scores computed from the two regions, the AUC increased to 0.806±0.025. Conclusion This study demonstrated a new approach to developing a CAD scheme based on five visually sensitive image features. Combined with a "visual aid" interface, CAD results may be much more easily explainable to observers, and may increase their confidence in CAD-generated classification results, compared with conventional CAD approaches that involve many complicated and visually insensitive texture features. PMID:27911353
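The two-view score fusion reported above amounts to averaging per-view classifier outputs; a rank-based AUC makes the effect measurable. The scores below are toy values, not the study's data:

```python
import numpy as np

def auc(scores, labels):
    """Probability that a random positive outranks a random negative
    (equivalent to the area under the ROC curve), ties counted half."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

def fuse(view1_scores, view2_scores):
    """Simple late fusion: average the per-view classification scores."""
    return (view1_scores + view2_scores) / 2.0
```

When the two views make partly independent errors, the averaged score tends to rank cases better than either view alone, which matches the AUC gain the abstract reports.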

  10. Integration of Network Biology and Imaging to Study Cancer Phenotypes and Responses.

    PubMed

    Tian, Ye; Wang, Sean S; Zhang, Zhen; Rodriguez, Olga C; Petricoin, Emanuel; Shih, Ie-Ming; Chan, Daniel; Avantaggiati, Maria; Yu, Guoqiang; Ye, Shaozhen; Clarke, Robert; Wang, Chao; Zhang, Bai; Wang, Yue; Albanese, Chris

    2014-01-01

    Ever growing "omics" data and continuously accumulated biological knowledge provide an unprecedented opportunity to identify molecular biomarkers and their interactions that are responsible for cancer phenotypes that can be accurately defined by clinical measurements such as in vivo imaging. Since signaling or regulatory networks are dynamic and context-specific, systematic efforts to characterize such structural alterations must effectively distinguish significant network rewiring from random background fluctuations. Here we introduced a novel integration of network biology and imaging to study cancer phenotypes and responses to treatments at the molecular systems level. Specifically, Differential Dependence Network (DDN) analysis was used to detect statistically significant topological rewiring in molecular networks between two phenotypic conditions, and in vivo Magnetic Resonance Imaging (MRI) was used to more accurately define phenotypic sample groups for such differential analysis. We applied DDN to analyze two distinct phenotypic groups of breast cancer and study how genomic instability affects the molecular network topologies in high-grade ovarian cancer. Further, FDA-approved arsenic trioxide (ATO) and the ND2-SmoA1 mouse model of Medulloblastoma (MB) were used to extend our analyses of combined MRI and Reverse Phase Protein Microarray (RPMA) data to assess tumor responses to ATO and to uncover the complexity of therapeutic molecular biology.

  11. Do pre-trained deep learning models improve computer-aided classification of digital mammograms?

    NASA Astrophysics Data System (ADS)

    Aboutalib, Sarah S.; Mohamed, Aly A.; Zuley, Margarita L.; Berg, Wendie A.; Luo, Yahong; Wu, Shandong

    2018-02-01

    Digital mammography screening is an important exam for the early detection of breast cancer and reduction in mortality. False positives leading to high recall rates, however, result in unnecessary negative consequences for patients and health care systems. In order to better aid radiologists, computer-aided tools can be utilized to improve the distinction between image classes and thus potentially reduce false recalls. The emergence of deep learning has shown promising results in the area of biomedical imaging data analysis. This study aimed to investigate deep learning and transfer learning methods that can improve digital mammography classification performance. In particular, we evaluated the effect of pre-training deep learning models with other imaging datasets in order to boost classification performance on a digital mammography dataset. Two types of datasets were used for pre-training: (1) a digitized film mammography dataset, and (2) a very large non-medical imaging dataset. By using either of these datasets to pre-train the network initially, and then fine-tuning with the digital mammography dataset, we found an increase in overall classification performance in comparison to a model without pre-training, with the very large non-medical dataset performing best in improving the classification accuracy.
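Pre-training followed by fine-tuning can be caricatured as keeping a learned feature extractor frozen and retraining only the classification head on the new dataset. Everything in this sketch (the random stand-in "backbone", the logistic head, the learning rate) is an illustrative assumption, not the study's network:

```python
import numpy as np

rng = np.random.default_rng(0)
W_pre = rng.normal(size=(20, 8))          # frozen "pre-trained" backbone

def features(x):
    """Frozen forward pass: pre-trained weights are never updated."""
    return np.maximum(x @ W_pre, 0.0)     # ReLU feature maps

def fine_tune_head(x, y, lr=0.1, epochs=500):
    """Train only a logistic-regression head on the frozen features."""
    f = features(x)
    w, b = np.zeros(f.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(f @ w + b)))
        g = p - y                          # gradient of the log-loss
        w -= lr * f.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(x, w, b):
    return (features(x) @ w + b > 0).astype(int)
```

Only the small head is trained, so far less labeled target data is needed than for training the whole network from scratch, which is the practical appeal of transfer learning described above.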

  12. Ground-based cloud classification by learning stable local binary patterns

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shi, Cunzhao; Wang, Chunheng; Xiao, Baihua

    2018-07-01

    Feature selection and extraction is the first step in implementing pattern classification, and the same is true for ground-based cloud classification. Histogram features based on local binary patterns (LBPs) are widely used to classify texture images. However, the conventional uniform LBP approach cannot capture all the dominant patterns in cloud texture images, resulting in low classification performance. In this study, a robust feature extraction method that learns stable LBPs is proposed, based on the averaged ranks of the occurrence frequencies of all rotation-invariant patterns defined in the LBPs of cloud images. The proposed method is validated on a ground-based cloud classification database comprising five cloud types. Experimental results demonstrate that the proposed method achieves significantly higher classification accuracy than the uniform LBP, local texture patterns (LTP), dominant LBP (DLBP), completed LBP (CLBP) and salient LBP (SaLBP) methods on this cloud image database and under different noise conditions. The performance of the proposed method is comparable with that of the popular deep convolutional neural network (DCNN) method, but with lower computational complexity. Furthermore, the proposed method also achieves superior performance on an independent test data set.
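The basic 8-neighbour LBP code underlying all the variants compared above can be computed directly; the pattern-rank learning contributed by the paper is omitted from this sketch:

```python
import numpy as np

def lbp_image(img):
    """Basic (non-rotation-invariant) 8-neighbour LBP code for each
    interior pixel: one bit per neighbour >= centre comparison."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: the texture feature vector."""
    h, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return h / h.sum()
```

The normalized histogram is the texture descriptor fed to the classifier; the paper's contribution is selecting which of these 256 patterns are stable enough to keep.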

  13. A matter of reading English

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, M.M. Jr.

    1996-06-14

    This letter is a reply to another critical letter regarding the classification of a hereditary disease known as craniosynostosis. The phenotype of the patient as well as the cytogenetic information should be taken into consideration. 8 refs.

  14. A support vector machine classifier reduces interscanner variation in the HRCT classification of regional disease pattern in diffuse lung disease: Comparison to a Bayesian classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Yongjun; Lim, Jonghyuck; Kim, Namkug

    2013-05-15

    Purpose: To investigate the effect of using different computed tomography (CT) scanners on the accuracy of high-resolution CT (HRCT) images in classifying regional disease patterns in patients with diffuse lung disease, support vector machine (SVM) and Bayesian classifiers were applied to multicenter data. Methods: Two experienced radiologists marked sets of 600 rectangular 20 × 20 pixel regions of interest (ROIs) on HRCT images obtained from two scanners (GE and Siemens), including 100 ROIs for each of six local lung patterns: normal lung and five regional pulmonary disease patterns (ground-glass opacity, reticular opacity, honeycombing, emphysema, and consolidation). Each ROI was assessed using 22 quantitative features belonging to one of the following descriptors: histogram, gradient, run-length, gray level co-occurrence matrix, low-attenuation area cluster, and top-hat transform. For automatic classification, a Bayesian classifier and a SVM classifier were compared under three different conditions. First, classification accuracies were estimated using data from each scanner. Next, data from the GE and Siemens scanners were used for training and testing, respectively, and vice versa. Finally, all ROI data were integrated regardless of the scanner type and were then trained and tested together. All experiments were performed based on forward feature selection and fivefold cross-validation with 20 repetitions. Results: For each scanner, better classification accuracies were achieved with the SVM classifier than with the Bayesian classifier (92% and 82%, respectively, for the GE scanner; and 92% and 86%, respectively, for the Siemens scanner). The classification accuracies (SVM/Bayesian) were 82%/72% for training with GE data and testing with Siemens data, and 79%/72% for the reverse. 
The use of training and test data obtained from the HRCT images of different scanners lowered the classification accuracy compared to the use of HRCT images from the same scanner. For integrated ROI data obtained from both scanners, the classification accuracies with the SVM and Bayesian classifiers were 92% and 77%, respectively. The selected features resulting from the classification process differed by scanner, with more features included for the classification of the integrated HRCT data than for the classification of the HRCT data from each scanner. For the integrated data, consisting of HRCT images from both scanners, the classification accuracy based on the SVM was statistically similar to the accuracy obtained with the data from each scanner. However, the classification accuracy of the integrated data using the Bayesian classifier was significantly lower than the classification accuracy of the ROI data of each scanner. Conclusions: The use of an integrated dataset along with a SVM classifier rather than a Bayesian classifier has benefits in terms of the classification accuracy of HRCT images acquired with more than one scanner. This finding is of relevance in studies involving large numbers of images, as is the case in a multicenter trial with different scanners.
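The cross-scanner evaluation protocol described above (train on scanner A, test on scanner B) can be reproduced in miniature. A nearest-centroid classifier stands in for the SVM/Bayesian classifiers, and a synthetic intensity offset mimics interscanner variation; all numbers are illustrative assumptions:

```python
import numpy as np

def centroid_fit(x, y):
    classes = np.unique(y)
    return classes, np.stack([x[y == c].mean(axis=0) for c in classes])

def centroid_predict(model, x):
    classes, cent = model
    d = ((x[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

def scanner_data(shift, n=100, seed=0):
    """Synthetic two-class ROI features; `shift` mimics a scanner's
    systematic intensity offset."""
    rng = np.random.default_rng(seed)
    x0 = rng.normal(0.0, 1.0, (n, 5)) + shift
    x1 = rng.normal(3.0, 1.0, (n, 5)) + shift
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)
```

Training on offset-free data and testing on shifted data degrades accuracy, which is the interscanner effect the study quantifies; pooling data from both "scanners" into one training set is the integration strategy it recommends.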

  15. Prostate segmentation by sparse representation based classification

    PubMed Central

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-01-01

    Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. 
(2) The L1 regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result. (3) Residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard classification to soft classification. (4) Iterative SRC is proposed by using context information to iteratively refine the classification results. Results: The proposed method has been comprehensively evaluated on a dataset consisting of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with the traditional SRC based on the proposed four extensions. The experimental results show that our extended SRC can obtain not only more accurate classification results but also smoother and clearer prostate boundary than the traditional SRC. Besides, the comparison with other five state-of-the-art prostate segmentation methods indicates that our method can achieve better performance than other methods under comparison. Conclusions: The authors have proposed a novel prostate segmentation method based on the sparse representation based classification, which can achieve considerably accurate segmentation results in CT prostate segmentation. PMID:23039673
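The core SRC decision rule, coding a sample against each class sub-dictionary and classifying by the smallest reconstruction residual, can be shown in miniature. Plain least squares stands in here for the sparse/elastic-net coding described above:

```python
import numpy as np

def src_classify(dicts, y):
    """Simplified sparse-representation-style classification: code `y`
    against each class sub-dictionary (least squares stands in for
    sparse coding) and pick the class with the smallest residual."""
    residuals = []
    for D in dicts:
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(residuals))
```

In pixel-wise segmentation, `y` would be a local patch and the class dictionaries would hold prostate and background exemplars, with the residual gap acting as the soft classification score.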

  17. Identifying Multimodal Intermediate Phenotypes between Genetic Risk Factors and Disease Status in Alzheimer’s Disease

    PubMed Central

    Hao, Xiaoke; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L.; Saykin, Andrew J.; Zhang, Daoqiang; Shen, Li

    2016-01-01

    Neuroimaging genetics has attracted growing attention and interest, as it is thought to be a powerful strategy for examining the influence of genetic variants (i.e., single nucleotide polymorphisms (SNPs)) on the structure and function of the human brain. In recent studies, univariate or multivariate regression analysis methods are typically used to capture effective associations between genetic variants and quantitative traits (QTs) such as brain imaging phenotypes. The identified imaging QTs, although associated with certain genetic markers, may not all be disease specific. A useful, but underexplored, scenario is to discover only those QTs associated with both genetic markers and disease status, thereby revealing the chain from genotype to phenotype to symptom. In addition, multimodal brain imaging phenotypes are extracted from different perspectives, and imaging markers consistently showing up across modalities may provide more insights for a mechanistic understanding of diseases such as Alzheimer's disease (AD). In this work, we propose a general framework that exploits multi-modal brain imaging phenotypes as intermediate traits bridging genetic risk factors and multi-class disease status. We applied our proposed method to explore the relation between the well-known AD risk SNP APOE rs429358 and three baseline brain imaging modalities (structural magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET) and F-18 florbetapir PET amyloid imaging (AV45)) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The empirical results demonstrate that our proposed method not only helps improve the performance of imaging genetic associations, but also discovers robust and consistent regions of interest (ROIs) across modalities to guide the disease-induced interpretation. PMID:27277494

  18. Rule-based land use/land cover classification in coastal areas using seasonal remote sensing imagery: a case study from Lianyungang City, China.

    PubMed

    Yang, Xiaoyan; Chen, Longgao; Li, Yingkui; Xi, Wenjia; Chen, Longqian

    2015-07-01

    Land use/land cover (LULC) inventory provides an important dataset for regional planning and environmental assessment. To efficiently obtain the LULC inventory, we compared LULC classifications based on single satellite images with a rule-based classification based on multi-seasonal imagery in Lianyungang City, a coastal city in China, using CBERS-02 (the 2nd China-Brazil Environmental Resource Satellite) images. The overall accuracies of the classifications based on single images are 78.9, 82.8, and 82.0% in winter, early summer, and autumn, respectively. The rule-based classification improves the accuracy to 87.9% (kappa 0.85), suggesting that combining multi-seasonal images can considerably improve classification accuracy over any single-image-based classification. This method could also be used to analyze seasonal changes of LULC types, especially those associated with tidal changes in coastal areas. The distribution and inventory of LULC types, with an overall accuracy of 87.9% and a spatial resolution of 19.5 m, can efficiently assist regional planning and environmental assessment in Lianyungang City. This rule-based classification provides guidance on improving accuracy for coastal areas with distinct temporal spectral features of LULC.
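A rule-based classifier of the kind described combines seasonal spectral behaviour explicitly, separating classes that look alike in any single season. The class names, NDVI inputs and thresholds below are purely hypothetical, not the paper's rule set:

```python
def classify_lulc(winter_ndvi, summer_ndvi, autumn_ndvi):
    """Toy seasonal rule set (hypothetical thresholds) illustrating why
    multi-seasonal rules beat any single-season classification."""
    if max(winter_ndvi, summer_ndvi, autumn_ndvi) < 0.1:
        return "water"                      # never vegetated in any season
    if winter_ndvi > 0.4 and summer_ndvi > 0.4:
        return "evergreen forest"           # green all year round
    if summer_ndvi > 0.4 and winter_ndvi < 0.2:
        return "cropland/deciduous"         # green only in the growing season
    return "built-up/bare"
```

A single summer image cannot tell evergreen forest from cropland (both are green); the winter observation resolves the ambiguity, which is the accuracy gain the abstract reports.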

  19. Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification

    NASA Astrophysics Data System (ADS)

    Sharif, I.; Khare, S.

    2014-11-01

    With the number of channels in the hundreds instead of the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data challenges current techniques for analyzing it, and conventional classification methods may not be useful without dimension-reduction pre-processing. Dimension reduction has therefore become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in image classification. Spectral data reduction using wavelet decomposition is useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction yields more separable classes and better or comparable classification accuracy. In the context of the dimensionality-reduction algorithm, the classification performance of Daubechies wavelets is better than that of the Haar wavelet, while Daubechies takes more time than Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
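The Haar transform's averaging/differencing structure, and its use for spectral dimension reduction, fits in a few lines. Orthonormal scaling is assumed, and an even-length signal; longer Daubechies filters would replace the two-tap kernels:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-pass: local averages
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-pass: local differences
    return approx, detail

def reduce_bands(spectrum, levels=3):
    """Repeatedly keep only the approximation band to shrink a pixel's
    spectral signature (dimensionality reduction by wavelet decomposition)."""
    a = np.asarray(spectrum, dtype=float)
    for _ in range(levels):
        a, _ = haar_dwt(a)
    return a
```

Three levels reduce, say, 64 spectral bands to 8 coefficients while preserving the coarse shape of the signature, which is why class separability can survive the reduction.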

  20. Evaluation of Classifier Performance for Multiclass Phenotype Discrimination in Untargeted Metabolomics.

    PubMed

    Trainor, Patrick J; DeFilippis, Andrew P; Rai, Shesh N

    2017-06-21

    Statistical classification is a critical component of utilizing metabolomics data for examining the molecular determinants of phenotypes. Despite this, a comprehensive and rigorous evaluation of the accuracy of classification techniques for phenotype discrimination given metabolomics data has not been conducted. We conducted such an evaluation using both simulated and real metabolomics datasets, comparing Partial Least Squares-Discriminant Analysis (PLS-DA), Sparse PLS-DA, Random Forests, Support Vector Machines (SVM), Artificial Neural Network, k-Nearest Neighbors (k-NN), and Naïve Bayes classification techniques for discrimination. We evaluated the techniques on simulated data generated to mimic global untargeted metabolomics data by incorporating realistic block-wise correlation and partial correlation structures for mimicking the correlations and metabolite clustering generated by biological processes. Over the simulation studies, covariance structures, means, and effect sizes were stochastically varied to provide consistent estimates of classifier performance over a wide range of possible scenarios. The effects of the presence of non-normal error distributions, the introduction of biological and technical outliers, unbalanced phenotype allocation, missing values due to abundances below a limit of detection, and the effect of prior-significance filtering (dimension reduction) were evaluated via simulation. In each simulation, classifier parameters, such as the number of hidden nodes in a Neural Network, were optimized by cross-validation to minimize the probability of detecting spurious results due to poorly tuned classifiers. Classifier performance was then evaluated using real metabolomics datasets of varying sample medium, sample size, and experimental design. 
We report that in the most realistic simulation studies, which incorporated non-normal error distributions, unbalanced phenotype allocation, outliers, missing values, and dimension reduction, classifier performance (least to greatest error) was ranked as follows: SVM, Random Forest, Naïve Bayes, sPLS-DA, Neural Networks, PLS-DA, and k-NN. When non-normal error distributions were introduced, the performance of the PLS-DA and k-NN classifiers deteriorated further relative to the remaining techniques. Over the real datasets, a trend of better performance for the SVM and Random Forest classifiers was observed.
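
  As an illustration of the evaluation protocol rather than of the paper's pipeline, the sketch below compares two simple stand-in classifiers (1-NN and nearest centroid) by leave-one-out cross-validated error on synthetic two-class data; all data, classifiers, and parameters here are hypothetical.

  ```python
  import math
  import random

  def one_nn(train_X, train_y, x):
      """Predict with the single nearest training sample (k-NN, k=1)."""
      return min(zip(train_X, train_y), key=lambda p: math.dist(p[0], x))[1]

  def nearest_centroid(train_X, train_y, x):
      """Predict the class whose mean feature vector is closest."""
      groups = {}
      for xi, yi in zip(train_X, train_y):
          groups.setdefault(yi, []).append(xi)
      means = {y: [sum(c) / len(v) for c in zip(*v)] for y, v in groups.items()}
      return min(means, key=lambda y: math.dist(means[y], x))

  def loo_error(clf, X, y):
      """Leave-one-out cross-validated error rate for a fit-and-predict function."""
      wrong = 0
      for i in range(len(X)):
          train_X = X[:i] + X[i + 1:]
          train_y = y[:i] + y[i + 1:]
          wrong += clf(train_X, train_y, X[i]) != y[i]
      return wrong / len(X)

  random.seed(0)
  # Two well-separated 2-D "phenotype" classes, 20 samples each
  X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(20)] + \
      [[random.gauss(4, 1), random.gauss(4, 1)] for _ in range(20)]
  y = [0] * 20 + [1] * 20
  err_knn = loo_error(one_nn, X, y)
  err_nc = loo_error(nearest_centroid, X, y)
  ```

  Ranking classifiers by such cross-validated error, repeated over many simulated scenarios, is the general shape of the comparison the abstract describes.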

  1. Application of the SNoW machine learning paradigm to a set of transportation imaging problems

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Burry, Aaron M.; Wang, Yuheng; Kozitsky, Vladimir

    2012-01-01

    Machine learning methods have been successfully applied to image object classification problems where there is a clear distinction between classes and where comprehensive sets of training samples and ground truth are readily available. The transportation domain is an area where machine learning methods are particularly applicable, since the classification problems typically have well-defined class boundaries and, due to high traffic volumes in most applications, massive amounts of roadway data are available. Though these classes tend to be well defined, the particular image noise and variations can be challenging. Another challenge is the extremely high accuracy typically required in most traffic applications: incorrect assignment of fines or tolls due to imaging mistakes is not acceptable. For the front-seat vehicle occupancy detection problem, classification amounts to determining whether one face (driver only) or two faces (driver + passenger) are detected in the front seat of a vehicle on a roadway. For automatic license plate recognition, the classification problem is a type of optical character recognition problem encompassing multi-class classification. The SNoW machine learning classifier using local SMQT features is shown to be successful in these two transportation imaging applications.

  2. Nocardiopsis potens sp. nov., isolated from household waste.

    PubMed

    Yassin, A F; Spröer, C; Hupfer, H; Siering, C; Klenk, H-P

    2009-11-01

    The taxonomic position of an actinomycete, designated strain IMMIB L-21(T), was determined using a polyphasic taxonomic approach. The organism, which had phenotypic properties consistent with its classification in the genus Nocardiopsis, formed a distinct clade in the 16S rRNA gene sequence tree together with the type strain of Nocardiopsis composta, but was readily distinguished from this species using DNA-DNA relatedness and phenotypic data. The genotypic and phenotypic data show that the organism represents a novel species of the genus Nocardiopsis, for which the name Nocardiopsis potens sp. nov. is proposed. The type strain is IMMIB L-21(T) (=DSM 45234(T)=CCUG 56587(T)).

  3. Computer-aided interpretation approach for optical tomographic images

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.

    2010-11-01

    A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows multiple image features, such as minimum and maximum values of the absorption coefficient, to be combined for identifying affected and unaffected joints. Classification performance obtained by the proposed method was evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground truth benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, the findings suggest that some parameter combinations lead to higher sensitivities, while others lead to higher specificities, when compared to the single-parameter classifications employed in previous studies. Maximum performance is reached when combining the minimum/maximum ratio of the absorption coefficient and the image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than those obtained when only single-parameter classifications were used, where sensitivities and specificities remained well below 0.8.
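
    The Youden index used above is simply J = sensitivity + specificity − 1, which ranges from −1 to 1 and rewards classifiers that are good on both affected and unaffected joints. A small sketch with hypothetical joint counts (not the paper's data):

    ```python
    def youden_index(tp, fn, tn, fp):
        """Youden's J statistic from confusion-matrix counts.

        J = sensitivity + specificity - 1; 0 means no better than chance,
        1 means a perfect classifier.
        """
        sensitivity = tp / (tp + fn)  # true-positive rate on affected joints
        specificity = tn / (tn + fp)  # true-negative rate on unaffected joints
        return sensitivity + specificity - 1.0

    # e.g. 45 of 50 affected and 46 of 50 unaffected joints classified correctly
    j = youden_index(tp=45, fn=5, tn=46, fp=4)
    ```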

  4. Classification of fresh and frozen-thawed pork muscles using visible and near infrared hyperspectral imaging and textural analysis.

    PubMed

    Pu, Hongbin; Sun, Da-Wen; Ma, Ji; Cheng, Jun-Hu

    2015-01-01

    The potential of visible and near infrared hyperspectral imaging was investigated as a rapid and nondestructive technique for classifying fresh and frozen-thawed meats by integrating critical spectral and image features extracted from hyperspectral images in the region of 400-1000 nm. Six feature wavelengths (400, 446, 477, 516, 592 and 686 nm) were identified using uninformative variable elimination and successive projections algorithm. Image textural features of the principal component images from hyperspectral images were obtained using histogram statistics (HS), gray level co-occurrence matrix (GLCM) and gray level-gradient co-occurrence matrix (GLGCM). By these spectral and textural features, probabilistic neural network (PNN) models for classification of fresh and frozen-thawed pork meats were established. Compared with the models using the optimum wavelengths only, optimum wavelengths with HS image features, and optimum wavelengths with GLCM image features, the model integrating optimum wavelengths with GLGCM gave the highest classification rate of 93.14% and 90.91% for calibration and validation sets, respectively. Results indicated that the classification accuracy can be improved by combining spectral features with textural features and the fusion of critical spectral and textural features had better potential than single spectral extraction in classifying fresh and frozen-thawed pork meat. Copyright © 2014 Elsevier Ltd. All rights reserved.
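
    A gray level co-occurrence matrix (GLCM), as used above, tabulates how often pairs of gray levels co-occur at a fixed pixel offset; texture features such as contrast are then computed from it. A minimal sketch (illustrative only, with a tiny hypothetical 4-level image, not the hyperspectral data of the paper):

    ```python
    import numpy as np

    def glcm(img, dx=1, dy=0, levels=4):
        """Gray level co-occurrence matrix for one pixel offset (dx, dy),
        normalised to a joint probability table."""
        m = np.zeros((levels, levels))
        h, w = img.shape
        for r in range(h - dy):
            for c in range(w - dx):
                m[img[r, c], img[r + dy, c + dx]] += 1
        return m / m.sum()

    def glcm_contrast(p):
        """GLCM contrast feature: sum over i, j of p(i, j) * (i - j)^2."""
        i, j = np.indices(p.shape)
        return float((p * (i - j) ** 2).sum())

    img = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [2, 2, 3, 3],
                    [2, 2, 3, 3]])
    p = glcm(img)           # horizontal co-occurrences at offset (1, 0)
    contrast = glcm_contrast(p)
    ```

    The gray level-gradient co-occurrence matrix (GLGCM) the paper favours follows the same idea, but pairs each pixel's gray level with its local gradient magnitude instead of a neighbouring gray level.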

  5. Improving the mapping of crop types in the Midwestern U.S. by fusing Landsat and MODIS satellite data

    NASA Astrophysics Data System (ADS)

    Zhu, Likai; Radeloff, Volker C.; Ives, Anthony R.

    2017-06-01

    Mapping crop types is of great importance for assessing agricultural production, land-use patterns, and the environmental effects of agriculture. Indeed, both the radiometric and spatial resolution of Landsat's sensors are optimized for cropland monitoring. However, accurate mapping of crop types requires frequent cloud-free images during the growing season, which are often not available, and this raises the question of whether Landsat data can be combined with data from other satellites. Here, our goal is to evaluate to what degree fusing Landsat with MODIS Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) data can improve crop-type classification. Choosing either one or two images from all cloud-free Landsat observations available for the Arlington Agricultural Research Station area in Wisconsin from 2010 to 2014, we generated 87 combinations of images, and used each combination as input to the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to predict Landsat-like images at the nominal dates of each 8-day MODIS NBAR product. Both the original Landsat and the STARFM-predicted images were then classified with a support vector machine (SVM), and we compared the classification errors of three scenarios: 1) classifying only the one or two original Landsat images of each combination, 2) classifying the one or two original Landsat images plus all STARFM-predicted images, and 3) classifying the one or two original Landsat images together with STARFM-predicted images for key dates. Our results indicated that using two Landsat images as the input to STARFM did not significantly improve the STARFM predictions compared to using only one, and that predictions using Landsat images between July and August as input were most accurate. 
Including all STARFM-predicted images together with the Landsat images significantly increased the average classification error by 4 percentage points (from 21% to 25%) compared to using only Landsat images. However, incorporating only the STARFM-predicted images for key dates decreased the average classification error by 2 percentage points (from 21% to 19%) compared to using only Landsat images. In particular, if only a single Landsat image was available, adding STARFM predictions for key dates significantly decreased the average classification error by 4 percentage points, from 30% to 26% (p < 0.05). We conclude that adding STARFM-predicted images can be effective for improving crop-type classification when only limited Landsat observations are available, but that carefully selecting images from the full set of STARFM predictions is crucial. We developed an approach to identify the optimal subsets of all STARFM predictions, which offers an alternative method of feature selection for future research.

  6. Phaedra, a protocol-driven system for analysis and validation of high-content imaging and flow cytometry.

    PubMed

    Cornelissen, Frans; Cik, Miroslav; Gustin, Emmanuel

    2012-04-01

    High-content screening has brought new dimensions to cellular assays by generating rich data sets that characterize cell populations in great detail and detect subtle phenotypes. To derive relevant, reliable conclusions from these complex data, it is crucial to have informatics tools supporting quality control, data reduction, and data mining. These tools must reconcile the complexity of advanced analysis methods with the user-friendliness demanded by the user community. After reviewing existing applications, we identified the opportunity to add innovative new analysis options. Phaedra was developed to support workflows for drug screening and target discovery, interact with several laboratory information management systems, and process data generated by a range of techniques including high-content imaging, multicolor flow cytometry, and traditional high-throughput screening assays. The application is modular and flexible, with an interface that can be tuned to specific user roles. It offers user-friendly data visualization and reduction tools for HCS but also integrates Matlab for custom image analysis and the Konstanz Information Miner (KNIME) framework for data mining. Phaedra features efficient JPEG2000 compression and full drill-down functionality from dose-response curves down to individual cells, with exclusion and annotation options, cell classification, statistical quality controls, and reporting.

  7. Determining the saliency of feature measurements obtained from images of sedimentary organic matter for use in its classification

    NASA Astrophysics Data System (ADS)

    Weller, Andrew F.; Harris, Anthony J.; Ware, J. Andrew; Jarvis, Paul S.

    2006-11-01

    The classification of sedimentary organic matter (OM) images can be improved by determining the saliency of image analysis (IA) features measured from them. Knowing the saliency of IA feature measurements means that only the most significant discriminating features need be used in the classification process. This is an important consideration for classification techniques such as artificial neural networks (ANNs), where too many features can lead to the 'curse of dimensionality'. The classification scheme adopted in this work is a hybrid of morphologically and texturally descriptive features from previous manual classification schemes. Some of these descriptive features are assigned to IA features, along with several others built into the IA software (Halcon) to ensure that a valid cross-section is available. After an image is captured and segmented, a total of 194 features are measured for each particle. To reduce this number to a more manageable magnitude, the SPSS AnswerTree Exhaustive CHAID (χ² automatic interaction detector) classification tree algorithm is used to establish each measurement's saliency as a classification discriminator. In the case of continuous data as used here, the F-test is used as opposed to the published algorithm. The F-test checks various statistical hypotheses about the variance of groups of IA feature measurements obtained from the particles to be classified. The aim is to reduce the number of features required to perform the classification without reducing its accuracy. In the best-case scenario, 194 inputs are reduced to 8, with a subsequent multi-layer back-propagation ANN recognition rate of 98.65%. This paper demonstrates the ability of the algorithm to reduce noise, help overcome the curse of dimensionality, and facilitate an understanding of the saliency of IA features as discriminators for sedimentary OM classification.
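
    The F-test idea behind the saliency screening can be sketched as a per-feature one-way ANOVA F statistic: features whose between-class variance dominates their within-class variance are the salient discriminators. The data below are synthetic and hypothetical (one discriminating feature, one noise feature), not the paper's 194 measurements.

    ```python
    import numpy as np

    def f_statistic(x, y):
        """One-way ANOVA F statistic for a single feature x across class labels y."""
        classes = np.unique(y)
        grand = x.mean()
        # Between-class mean square (variance of class means around the grand mean)
        between = sum(len(x[y == c]) * (x[y == c].mean() - grand) ** 2
                      for c in classes) / (len(classes) - 1)
        # Within-class mean square (pooled variance inside each class)
        within = sum(((x[y == c] - x[y == c].mean()) ** 2).sum()
                     for c in classes) / (len(x) - len(classes))
        return between / within

    rng = np.random.default_rng(0)
    y = np.array([0] * 30 + [1] * 30)
    # Feature 0 separates the classes; feature 1 is pure noise
    X = np.column_stack([np.where(y == 0, 0.0, 3.0) + rng.normal(size=60),
                         rng.normal(size=60)])
    scores = [f_statistic(X[:, j], y) for j in range(X.shape[1])]
    ranking = np.argsort(scores)[::-1]  # most salient feature first
    ```

    Keeping only the top-ranked features before training the ANN mirrors the 194-to-8 reduction described above.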

  8. Hyperspectral image segmentation using a cooperative nonparametric approach

    NASA Astrophysics Data System (ADS)

    Taher, Akar; Chehdi, Kacem; Cariou, Claude

    2013-10-01

    In this paper a new unsupervised, nonparametric, cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel, and the intermediate classification results are evaluated and fused to obtain the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation to segment each band of the image: the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm. The originality of the approach relies firstly on its local adaptation to the types of regions in an image (textured, non-textured), and secondly on the introduction of several levels of evaluation and validation of intermediate segmentation results before the final partitioning of the image is obtained. To manage similar or conflicting results issued from the two classification methods, we gradually introduce various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. The approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album, and then on two real applications: a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR) is over 99%.
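
    A minimal sketch of the Fuzzy C-means building block (illustrative only; the data, cluster count, and fuzzifier m are hypothetical, and the paper's cooperative fusion with LBG is not shown): FCM alternates between computing membership-weighted centroids and updating the fuzzy memberships u_ik proportional to d_ik^(-2/(m-1)).

    ```python
    import numpy as np

    def fcm(X, k=2, m=2.0, iters=50, seed=0):
        """Minimal fuzzy c-means: alternate membership and centroid updates."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), k))
        U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
        for _ in range(iters):
            W = U ** m                                # fuzzified membership weights
            C = W.T @ X / W.sum(axis=0)[:, None]      # weighted cluster centroids
            d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
            U = 1.0 / d ** (2.0 / (m - 1.0))          # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return U, C

    # Two clearly separated single-band "pixel" clusters
    X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
    U, C = fcm(X)
    labels = U.argmax(axis=1)  # hard labels from the fuzzy memberships
    ```

    In the cooperative scheme described above, per-band partitions like these are then validated and fused across bands rather than used directly.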

  9. Mapping Winter Wheat with Multi-Temporal SAR and Optical Images in an Urban Agricultural Region

    PubMed Central

    Zhou, Tao; Pan, Jianjun; Zhang, Peiyu; Wei, Shanbao; Han, Tao

    2017-01-01

    Winter wheat is the second largest food crop in China. It is important to obtain reliable winter wheat acreage to guarantee food security for the most populous country in the world. This paper focuses on assessing the feasibility of in-season winter wheat mapping and investigating the potential classification improvement from using SAR (Synthetic Aperture Radar) images, optical images, and the integration of both types of data in urban agricultural regions with complex planting structures in Southern China. Both SAR (Sentinel-1A) and optical (Landsat-8) data were acquired, and classification using different combinations of Sentinel-1A-derived information and optical images was performed using a support vector machine (SVM) and a random forest (RF) method. The interferometric coherence and texture images were obtained and used to assess the effect of adding them to the backscatter intensity images on the classification accuracy. The results showed that the use of four Sentinel-1A images acquired before the jointing period of winter wheat can provide satisfactory winter wheat classification accuracy, with an F1 measure of 87.89%. The combination of SAR and optical images for winter wheat mapping achieved the best F1 measure, up to 98.06%. The SVM was superior to RF in terms of overall accuracy and the kappa coefficient, and was faster than RF, while the RF classifier was slightly better than SVM in terms of the F1 measure. In addition, the classification accuracy can be effectively improved by adding the texture and coherence images to the backscatter intensity data. PMID:28587066
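
    The F1 measure reported above is the harmonic mean of precision and recall for the winter wheat class. A small sketch with hypothetical pixel counts (not the paper's confusion matrix):

    ```python
    def f1_measure(tp, fp, fn):
        """F1 = harmonic mean of precision and recall from class-wise counts."""
        precision = tp / (tp + fp)  # mapped-as-wheat pixels that really are wheat
        recall = tp / (tp + fn)     # reference wheat pixels recovered by the map
        return 2 * precision * recall / (precision + recall)

    # e.g. 90 true positives, 5 false positives, 10 missed wheat pixels
    f1 = f1_measure(tp=90, fp=5, fn=10)
    ```

    Unlike overall accuracy, F1 focuses on one target class, which is why the abstract can rank RF above SVM on F1 while SVM wins on overall accuracy and kappa.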

  10. Quantitative evaluation of variations in rule-based classifications of land cover in urban neighbourhoods using WorldView-2 imagery.

    PubMed

    Belgiu, Mariana; Drăguţ, Lucian; Strobl, Josef

    2014-01-01

    The increasing availability of high resolution imagery has triggered the need for automated image analysis techniques, with reduced human intervention and reproducible analysis procedures. The knowledge gained in the past might be of use to achieving this goal, if systematically organized into libraries which would guide the image analysis procedure. In this study we aimed at evaluating the variability of digital classifications carried out by three experts who were all assigned the same interpretation task. Besides the three classifications performed by independent operators, we developed an additional rule-based classification that relied on the image classifications best practices found in the literature, and used it as a surrogate for libraries of object characteristics. The results showed statistically significant differences among all operators who classified the same reference imagery. The classifications carried out by the experts achieved satisfactory results when transferred to another area for extracting the same classes of interest, without modification of the developed rules.

  11. Quantitative evaluation of variations in rule-based classifications of land cover in urban neighbourhoods using WorldView-2 imagery

    PubMed Central

    Belgiu, Mariana; Drăguţ, Lucian; Strobl, Josef

    2014-01-01

    The increasing availability of high resolution imagery has triggered the need for automated image analysis techniques, with reduced human intervention and reproducible analysis procedures. The knowledge gained in the past might be of use to achieving this goal, if systematically organized into libraries which would guide the image analysis procedure. In this study we aimed at evaluating the variability of digital classifications carried out by three experts who were all assigned the same interpretation task. Besides the three classifications performed by independent operators, we developed an additional rule-based classification that relied on the image classifications best practices found in the literature, and used it as a surrogate for libraries of object characteristics. The results showed statistically significant differences among all operators who classified the same reference imagery. The classifications carried out by the experts achieved satisfactory results when transferred to another area for extracting the same classes of interest, without modification of the developed rules. PMID:24623959

  12. Quantitative evaluation of variations in rule-based classifications of land cover in urban neighbourhoods using WorldView-2 imagery

    NASA Astrophysics Data System (ADS)

    Belgiu, Mariana; Drăguţ, Lucian; Strobl, Josef

    2014-01-01

    The increasing availability of high resolution imagery has triggered the need for automated image analysis techniques, with reduced human intervention and reproducible analysis procedures. The knowledge gained in the past might be of use to achieving this goal, if systematically organized into libraries which would guide the image analysis procedure. In this study we aimed at evaluating the variability of digital classifications carried out by three experts who were all assigned the same interpretation task. Besides the three classifications performed by independent operators, we developed an additional rule-based classification that relied on the image classifications best practices found in the literature, and used it as a surrogate for libraries of object characteristics. The results showed statistically significant differences among all operators who classified the same reference imagery. The classifications carried out by the experts achieved satisfactory results when transferred to another area for extracting the same classes of interest, without modification of the developed rules.

  13. Abstracting of suspected illegal land use in urban areas using case-based classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Fulong; Wang, Chao; Yang, Chengyun; Zhang, Hong; Wu, Fan; Lin, Wenjuan; Zhang, Bo

    2008-11-01

    This paper proposes a method that uses case-based classification of remote sensing images and applies it to extract information on suspected illegal land use in urban areas. Because discrete cases are used for imagery classification, the proposed method handles the oscillation of spectrum or backscatter within the same land-use category; it not only overcomes a deficiency of maximum-likelihood classification (that the prior probability of land use cannot be obtained) but also inherits the advantages of knowledge-based classification systems, such as artificial intelligence and automatic operation, and consequently classifies better. The researchers then used an object-oriented technique for shadow removal in highly dense city zones. With multi-temporal SPOT 5 images at 2.5 × 2.5 m resolution, the researchers found that the method can extract suspected illegal land-use information in urban areas using a post-classification comparison technique.

  14. Object Based Image Analysis Combining High Spatial Resolution Imagery and Laser Point Clouds for Urban Land Cover

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    With rapid developments in sensor technology, high spatial resolution imagery and airborne Lidar point clouds can now be captured, making classification, extraction, evaluation and analysis of a broad range of object features possible. High resolution imagery, Lidar datasets and parcel maps can be widely used as information carriers for classification, making refined object classification possible for urban land cover. The paper presents an approach to object based image analysis (OBIA) combining high spatial resolution imagery and airborne Lidar point clouds. The workflow for urban land cover is designed with four components. Firstly, the colour-infrared TrueOrtho photo and laser point clouds are pre-processed to derive the parcel map of water bodies and the nDSM, respectively. Secondly, image objects are created via multi-resolution image segmentation integrating the scale parameter and the colour and shape properties with a compactness criterion, subdividing the image into separate object regions. Thirdly, image object classification is performed on the basis of the segmentation and a rule set in the form of a knowledge decision tree, assigning objects to six classes: water bodies, low vegetation/grass, trees, low buildings, high buildings and roads. Finally, to assess the validity of the classification results for the six classes, accuracy assessment is performed by comparing randomly distributed reference points of the TrueOrtho imagery with the classification results, forming the confusion matrix and calculating the overall accuracy and Kappa coefficient. The study area is the Vaihingen/Enz test site, with a patch of test data from the benchmark of the ISPRS WG III/4 test project. The classification results show high overall accuracy for most types of urban land cover: the overall accuracy is 89.5% and the Kappa coefficient equals 0.865. 
The OBIA approach provides an effective and convenient way to combine high resolution imagery and Lidar ancillary data for classification of urban land cover.
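
    The accuracy assessment step above reduces to two statistics on the confusion matrix. A minimal sketch (with a hypothetical 2-class matrix, not the study's 6-class results):

    ```python
    import numpy as np

    def accuracy_and_kappa(cm):
        """Overall accuracy and Cohen's kappa from a confusion matrix
        (rows = reference class, columns = mapped class)."""
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        po = np.trace(cm) / n                          # observed agreement
        pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
        return po, (po - pe) / (1 - pe)

    cm = [[45, 5],
          [5, 45]]
    oa, kappa = accuracy_and_kappa(cm)
    ```

    Kappa discounts agreement expected by chance, which is why a map can have high overall accuracy but a noticeably lower kappa when class proportions are skewed.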

  15. Automated analysis and classification of melanocytic tumor on skin whole slide images.

    PubMed

    Xu, Hongming; Lu, Cheng; Berendt, Richard; Jha, Naresh; Mandal, Mrinal

    2018-06-01

    This paper presents a computer-aided technique for automated analysis and classification of melanocytic tumor on skin whole slide biopsy images. The proposed technique consists of four main modules. First, skin epidermis and dermis regions are segmented by a multi-resolution framework. Next, epidermis analysis is performed, where a set of epidermis features reflecting nuclear morphologies and spatial distributions is computed. In parallel with epidermis analysis, dermis analysis is also performed, where dermal cell nuclei are segmented and a set of textural and cytological features are computed. Finally, the skin melanocytic image is classified into different categories such as melanoma, nevus or normal tissue by using a multi-class support vector machine (mSVM) with extracted epidermis and dermis features. Experimental results on 66 skin whole slide images indicate that the proposed technique achieves more than 95% classification accuracy, which suggests that the technique has the potential to be used for assisting pathologists on skin biopsy image analysis and classification. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Measuring the effect of inter-study variability on estimating prediction error.

    PubMed

    Ma, Shuyi; Sung, Jaeyun; Magis, Andrew T; Wang, Yuliang; Geman, Donald; Price, Nathan D

    2014-01-01

    The biomarker discovery field is replete with molecular signatures that have not translated into the clinic despite ostensibly promising performance in predicting disease phenotypes. One widely cited reason is lack of classification consistency, largely due to failure to maintain performance from study to study. This failure is widely attributed to variability in data collected for the same phenotype among disparate studies, due to technical factors unrelated to phenotypes (e.g., laboratory settings resulting in "batch-effects") and non-phenotype-associated biological variation in the underlying populations. These sources of variability persist in new data collection technologies. Here we quantify the impact of these combined "study-effects" on a disease signature's predictive performance by comparing two types of validation methods: ordinary randomized cross-validation (RCV), which extracts random subsets of samples for testing, and inter-study validation (ISV), which excludes an entire study for testing. Whereas RCV hardwires an assumption of training and testing on identically distributed data, this key property is lost in ISV, yielding systematic decreases in performance estimates relative to RCV. Measuring the RCV-ISV difference as a function of number of studies quantifies influence of study-effects on performance. As a case study, we gathered publicly available gene expression data from 1,470 microarray samples of 6 lung phenotypes from 26 independent experimental studies and 769 RNA-seq samples of 2 lung phenotypes from 4 independent studies. We find that the RCV-ISV performance discrepancy is greater in phenotypes with few studies, and that the ISV performance converges toward RCV performance as data from additional studies are incorporated into classification. 
We show that by examining how fast ISV performance approaches RCV as the number of studies is increased, one can estimate when "sufficient" diversity has been achieved for learning a molecular signature likely to translate without significant loss of accuracy to new clinical settings.
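
  The inter-study validation (ISV) protocol contrasted with RCV above can be sketched as leave-one-study-out evaluation: every sample from one study is held out together, so study-effects in the test data are never seen in training. The classifier (nearest centroid), data, and study-effect model below are all hypothetical stand-ins, not the paper's gene-expression pipeline.

  ```python
  import math
  import random

  def nearest_centroid_predict(train_X, train_y, test_X):
      """Fit class centroids on the training set and predict the test set."""
      groups = {}
      for x, y in zip(train_X, train_y):
          groups.setdefault(y, []).append(x)
      means = {y: [sum(c) / len(v) for c in zip(*v)] for y, v in groups.items()}
      return [min(means, key=lambda y: math.dist(means[y], x)) for x in test_X]

  def isv_error(X, y, studies):
      """Inter-study validation: hold out one whole study at a time."""
      errors = []
      for s in set(studies):
          train = [i for i, g in enumerate(studies) if g != s]
          test = [i for i, g in enumerate(studies) if g == s]
          pred = nearest_centroid_predict([X[i] for i in train],
                                          [y[i] for i in train],
                                          [X[i] for i in test])
          errors.append(sum(p != y[i] for p, i in zip(pred, test)) / len(test))
      return sum(errors) / len(errors)

  random.seed(1)
  X, y, studies = [], [], []
  for s in range(4):                    # 4 studies, each with its own batch offset
      offset = random.gauss(0, 1.0)     # "study-effect" shared by all its samples
      for label in (0, 1):
          for _ in range(10):
              X.append([label * 2.0 + offset + random.gauss(0, 0.5)])
              y.append(label)
              studies.append(s)
  err = isv_error(X, y, studies)
  ```

  Comparing this error against ordinary randomized cross-validation on the same data reproduces, in miniature, the RCV-ISV discrepancy the abstract measures.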

  17. Multi-Feature Classification of Multi-Sensor Satellite Imagery Based on Dual-Polarimetric Sentinel-1A, Landsat-8 OLI, and Hyperion Images for Urban Land-Cover Classification

    PubMed Central

    Pan, Jianjun

    2018-01-01

    This paper focuses on evaluating the ability and contribution of backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification, and on comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size for the combination of all texture features was 9 × 9, and the optimal window size was different for each individual texture feature. Of the four feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy of up to 91.55% and a kappa coefficient of up to 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of all four features had the best classification result. Multi-sensor urban land cover mapping obtained higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy than the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient of up to 0.9889. When Sentinel-1A data was added to Hyperion images, the overall accuracy and kappa coefficient increased by 4.01% and 0.0519, respectively. PMID:29382073

  18. Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Abd-Elrahman, Amr

    2018-05-01

    A deep convolutional neural network (DCNN) requires massive training datasets to realize its image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When a DCNN is simply implemented with traditional object-based image analysis (OBIA) for classification of Unmanned Aerial Systems (UAS) orthoimagery, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that takes advantage of DCNNs by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. In 10-fold cross validation, the mean overall classification accuracy increased substantially, from 65.32% when the DCNN was applied to the orthoimage to 82.08% when MODe was implemented. This study also compared the performance of support vector machine (SVM) and random forest (RF) classifiers with the DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the accuracy advantage of the DCNN over traditional classifiers is more pronounced under the proposed multi-view OBIA framework than under the traditional OBIA framework.
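The final voting step of a multi-view scheme like MODe can be sketched as a per-object majority vote over the labels predicted in each view. The per-view predictions below are made up for illustration; the paper's actual per-view classifier is a DCNN.

```python
import numpy as np

def vote(per_view_labels):
    """Majority vote across views for each object (ties -> lowest label)."""
    per_view_labels = np.asarray(per_view_labels)  # shape (n_views, n_objects)
    n_classes = per_view_labels.max() + 1
    # Count votes per class for each object column, then take the winner.
    counts = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, per_view_labels)
    return counts.argmax(axis=0)

views = [[0, 1, 2, 1],   # view 1 predictions for 4 objects
         [0, 1, 1, 1],   # view 2
         [2, 1, 2, 0]]   # view 3
final = vote(views)      # -> [0, 1, 2, 1]
```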

  19. Toward automatic phenotyping of retinal images from genetically determined mono- and dizygotic twins using amplitude modulation-frequency modulation methods

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Davis, B.; Murray, V.; Pattichis, M.; Barriga, S.; Russell, S.

    2010-03-01

    This paper presents an image processing technique for automatically categorizing age-related macular degeneration (AMD) phenotypes from retinal images. Ultimately, an automated approach will be much more precise and consistent in phenotyping retinal diseases such as AMD. We have applied the automated phenotyping to retinal images from a cohort of mono- and dizygotic twins. The application of this technology will allow one to perform more quantitative studies that will lead to a better understanding of the genetic and environmental factors associated with diseases such as AMD. A method for classifying retinal images based on features derived from the application of amplitude-modulation frequency-modulation (AM-FM) methods is presented. Retinal images from identical and fraternal twins who presented with AMD were processed to determine whether AM-FM could be used to differentiate between the two types of twins. Results of the automatic classifier agreed with the findings of other researchers in explaining the variation of the disease between the related twins. AM-FM features classified 72% of the twins correctly. Visual grading found that genetics could explain between 46% and 71% of the variance.

  20. Mid-level image representations for real-time heart view plane classification of echocardiograms.

    PubMed

    Penatti, Otávio A B; Werneck, Rafael de O; de Almeida, Waldir R; Stein, Bernardo V; Pazinato, Daniel V; Mendes Júnior, Pedro R; Torres, Ricardo da S; Rocha, Anderson

    2015-11-01

    In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Throughout an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations, e.g., downsampling, noise filtering, and different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed at 30 fps, or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification, giving the reader a thorough understanding of this field of study. Copyright © 2015 Elsevier Ltd. All rights reserved.
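The speed argument in this record, namely that sampling with large regions drastically cuts characterization time, can be illustrated with a toy bag-of-visual-words pipeline: large non-overlapping patches yield few descriptors per image, so quantization against the codebook is cheap. Patch size, codebook size, and the mean/std "descriptor" are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(1)

def patch_descriptors(img, patch=32):
    """Mean/std descriptor per non-overlapping large patch (a toy stand-in
    for a real local descriptor); a 128x128 image yields only 16 patches."""
    h, w = img.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = img[i:i + patch, j:j + patch]
            feats.append([p.mean(), p.std()])
    return np.array(feats)

images = [rng.random((128, 128)) for _ in range(10)]
all_desc = np.vstack([patch_descriptors(im) for im in images])
codebook = MiniBatchKMeans(n_clusters=8, n_init=3, random_state=0).fit(all_desc)

def bow_histogram(img):
    words = codebook.predict(patch_descriptors(img))
    h = np.bincount(words, minlength=8).astype(float)
    return h / h.sum()   # L1-normalized word histogram = the image feature

hist = bow_histogram(images[0])
```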

  1. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    PubMed

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation, an exemplar-based clustering algorithm, achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.

  2. An Evaluation of Feature Learning Methods for High Resolution Image Classification

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Montoya, J.; Schindler, K.

    2012-07-01

    Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
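The comparison protocol in this record, unsupervised feature learning versus raw intensities feeding the same Random Forest, can be sketched as below. The data is synthetic and PCA stands in for the full set of learning methods evaluated (the paper also tests deep belief networks and non-linear variants).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X_raw = rng.normal(size=(300, 64))            # 8x8 raw patches, flattened
y = (X_raw[:, :8].mean(axis=1) > 0).astype(int)  # synthetic 2-class labels

# Unsupervised feature learning: fit PCA on unlabelled patches, then
# represent each patch by a few principal-component channels.
pca = PCA(n_components=8).fit(X_raw)
X_pca = pca.transform(X_raw)

# Same classifier, two feature sets -- the unified-framework idea above.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_raw = cross_val_score(rf, X_raw, y, cv=3).mean()
acc_pca = cross_val_score(rf, X_pca, y, cv=3).mean()
```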

  3. Purification of Training Samples Based on Spectral Feature and Superpixel Segmentation

    NASA Astrophysics Data System (ADS)

    Guan, X.; Qi, W.; He, J.; Wen, Q.; Chen, T.; Wang, Z.

    2018-04-01

    Remote sensing image classification is an effective way to extract information from large volumes of high-spatial-resolution remote sensing images. Generally, supervised image classification relies on abundant and high-precision training data, which is often manually interpreted by human experts to provide ground truth for training and evaluating the classifier. Remote sensing enterprises have accumulated many manually interpreted products from earlier, lower-spatial-resolution remote sensing images in the course of their routine research and business programs. However, these manually interpreted products may not match newer very high resolution (VHR) images properly, because the two datasets differ in acquisition date or spatial resolution, or because the interpreted products cover only a small area; this limits their suitability for training classification models. We face similar problems in our laboratory at 21st Century Aerospace Technology Co. Ltd (21AT). In this work, we propose a method to purify the interpreted product so that it matches newly available VHR image data and provides the best training data for supervised classifiers in VHR image classification. Results indicate that our proposed method can efficiently purify the input data for future machine learning use.

  4. Multiple Sparse Representations Classification

    PubMed Central

    Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik

    2015-01-01

    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class-specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods.
In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and sparsity level. PMID:26177106
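The min-residual rule underlying SRC, sparse-code a patch against each class dictionary and pick the class with the smallest reconstruction residual, can be sketched with orthogonal matching pursuit. The dictionaries here are random toy atoms rather than learned ones, and only one representation per class is drawn (plain SRC); mSRC as described above would draw several per dictionary.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(4)
d, atoms, sparsity = 25, 40, 3                  # patch dim, dict size, L0 level
dictionaries = {c: rng.standard_normal((d, atoms)) for c in (0, 1)}
for D in dictionaries.values():
    D /= np.linalg.norm(D, axis=0)              # unit-norm atoms

def classify(patch):
    """Assign the class whose dictionary reconstructs the patch best."""
    residuals = {}
    for c, D in dictionaries.items():
        coef = orthogonal_mp(D, patch, n_nonzero_coefs=sparsity)
        residuals[c] = np.linalg.norm(patch - D @ coef)
    return min(residuals, key=residuals.get)

# A patch synthesized from class-1 atoms should get a near-zero residual
# under dictionary 1 and a large one under dictionary 0.
true_patch = dictionaries[1][:, :sparsity] @ np.array([1.0, -0.5, 0.8])
label = classify(true_patch)
```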

  5. Learning discriminative features from RGB-D images for gender and ethnicity identification

    NASA Astrophysics Data System (ADS)

    Azzakhnini, Safaa; Ballihi, Lahoucine; Aboutajdine, Driss

    2016-11-01

    The development of sophisticated sensor technologies gave rise to an interesting variety of data. With the appearance of affordable devices, such as the Microsoft Kinect, depth-maps and three-dimensional data became easily accessible. This attracted many computer vision researchers seeking to exploit this information in classification and recognition tasks. In this work, the problem of face classification in the context of RGB images and depth information (RGB-D images) is addressed. The purpose of this paper is to study and compare some popular techniques for gender recognition and ethnicity classification to understand how much depth data can improve the quality of recognition. Furthermore, we investigate which combination of face descriptors, feature selection methods, and learning techniques is best suited to better exploit RGB-D images. The experimental results show that depth data improve the recognition accuracy for gender and ethnicity classification applications in many use cases.

  6. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    NASA Astrophysics Data System (ADS)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research effort has been placed on land-use scene classification. The task is difficult for HRS images, however, because of their complex backgrounds and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize the local features at different scales. The learnt multiscale deep features are then used to generate visual words. The spatial arrangement of visual words is captured through adaptive vector-quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.

  7. Alexnet Feature Extraction and Multi-Kernel Learning for Object-Oriented Classification

    NASA Astrophysics Data System (ADS)

    Ding, L.; Li, H.; Hu, C.; Zhang, W.; Wang, S.

    2018-04-01

    Because deep convolutional neural networks have strong feature-learning and feature-representation abilities, exploratory research was done on feature extraction and classification for high-resolution remote sensing images. Taking Google imagery with 0.3 m spatial resolution over the Ludian area of Yunnan Province as an example, image segmentation objects were taken as the basic units, and a pre-trained AlexNet deep convolutional neural network was used for feature extraction. Spectral features, AlexNet features, and GLCM texture features were then combined using multi-kernel learning with an SVM classifier, and the classification results were compared and analyzed. The results show that the deep convolutional neural network extracts more discriminative remote sensing image features, significantly improves the overall classification accuracy, and provides a reference for earthquake disaster investigation and remote sensing disaster evaluation.

  8. DISCRIMINATION OF GRANITOIDS AND MINERALIZED GRANITOIDS IN THE MIDYAN REGION, NORTHWESTERN ARABIAN SHIELD, SAUDI ARABIA, BY LANDSAT MSS DATA-ANALYSIS.

    USGS Publications Warehouse

    Davis, Philip A.; Grolier, Maurice J.

    1984-01-01

    Landsat multispectral scanner (MSS) band and band-ratio databases of two scenes covering the Midyan region of northwestern Saudi Arabia were examined quantitatively and qualitatively to determine which databases best discriminate the geologic units of this semi-arid and arid region. Unsupervised, linear-discriminant cluster-analysis was performed on these two band-ratio combinations and on the MSS bands for both scenes. The results for granitoid-rock discrimination indicated that the classification images using the MSS bands are superior to the band-ratio classification images for two reasons, discussed in the paper. Yet, the effects of topography and material type (including desert varnish) on the MSS-band data produced ambiguities in the MSS-band classification results. However, these ambiguities were clarified by using a simulated natural-color image in conjunction with the MSS-band classification image.

  9. Documentation of procedures for textural/spatial pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.; Bryant, W. F.

    1976-01-01

    A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on this data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatial post-processing a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatial post-processing the classified image using combined spectral and textural features produces a resulting image with less error than post-processing a classified image using only spectral features, and (3) classification without spatial post-processing using the combined spectral and textural features tends to produce about the same error rate as classification without spatial post-processing using only spectral features.

  10. Visual attention based bag-of-words model for image classification

    NASA Astrophysics Data System (ADS)

    Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che

    2014-04-01

    Bag-of-words is a classical method for image classification. The core problems are how to count the frequency of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model uses a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix when accumulating the visual-word frequency statistics. In addition, the VABOW model combines shape, color and texture cues and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
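The weighting idea in this record, letting each visual word vote in proportion to the saliency of the patch it came from rather than counting it once, reduces to a weighted histogram. The word assignments and saliency values below are made up for illustration.

```python
import numpy as np

def weighted_bow(word_ids, saliency, n_words):
    """Saliency-weighted visual-word histogram (L1-normalized)."""
    h = np.bincount(word_ids, weights=saliency, minlength=n_words)
    return h / h.sum()

word_ids = np.array([0, 2, 2, 1, 0, 3])               # visual word per patch
saliency = np.array([0.9, 0.1, 0.2, 0.8, 0.7, 0.3])   # from a saliency map
hist = weighted_bow(word_ids, saliency, n_words=4)
# Word 0 dominates the histogram (its patches are salient) even though
# word 2 occurs just as often.
```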

  11. A Comparison of Local Variance, Fractal Dimension, and Moran's I as Aids to Multispectral Image Classification

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    2004-01-01

    The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogenous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
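Moran's I, one of the spatial-autocorrelation indices layered into the classification above, can be computed directly from its definition. This global version uses rook (4-neighbour) contiguity weights on a small grid; the ICAMS tools mentioned above produce local surfaces, but the statistic is the same. The two test grids are illustrative.

```python
import numpy as np

def morans_i(grid):
    """Global Moran's I with binary rook-contiguity weights:
    I = (N / W) * sum_ij w_ij z_i z_j / sum_i z_i^2."""
    z = grid - grid.mean()
    num = 0.0
    w_sum = 0
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += z[i, j] * z[ni, nj]
                    w_sum += 1
    return (grid.size / w_sum) * num / (z ** 2).sum()

smooth = np.array([[1, 1, 2, 2]] * 4, float)    # clustered values
checker = np.indices((4, 4)).sum(axis=0) % 2    # perfectly alternating values
i_smooth = morans_i(smooth)     # positive: like values cluster together
i_checker = morans_i(checker)   # -1.0: every neighbour is dissimilar
```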

  12. Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.

    2009-02-01

    A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results in identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of that study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used, including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performance obtained by the proposed method was evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance imaging, ultrasound and clinical evaluation. Maximum accuracies (AUC = 0.88) were reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.

  13. Classification-Based Spatial Error Concealment for Visual Communications

    NASA Astrophysics Data System (ADS)

    Chen, Meng; Zheng, Yefeng; Wu, Min

    2006-12-01

    In an error-prone transmission environment, error concealment is an effective technique to reconstruct damaged visual content. Due to large variations in image characteristics, different concealment approaches are necessary to accommodate the different nature of the lost image content. In this paper, we address this issue and propose using classification to integrate the state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform the conventional approaches.

  14. Boosting CNN performance for lung texture classification using connected filtering

    NASA Astrophysics Data System (ADS)

    Tarando, Sebastián Roberto; Fetita, Catalin; Kim, Young-Wouk; Cho, Hyoun; Brillet, Pierre-Yves

    2018-02-01

    Infiltrative lung diseases describe a large group of irreversible lung disorders requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status requires the development of automated classification tools for lung texture. This paper presents an original image pre-processing framework based on locally connected filtering applied in multiresolution, which helps improve the learning process and boosts the performance of CNNs for lung texture classification. By removing the dense vascular network from the images used by the CNN for lung classification, locally connected filters provide better discrimination between different lung patterns and help regularize the classification output. The approach was tested in a preliminary evaluation on a database of 10 patients with various lung pathologies, showing an increase of 10% in true positive rate (on average over all cases) with respect to the state-of-the-art cascade of CNNs for this task.

  15. Surgical anatomy of the hypoglossal nerve: A new classification system for selective upper airway stimulation.

    PubMed

    Heiser, Clemens; Knopf, Andreas; Hofauer, Benedikt

    2017-12-01

    Selective upper airway stimulation (UAS) has shown effectiveness in treating patients with obstructive sleep apnea (OSA). The terminating branches of the hypoglossal nerve show a wide complexity, requiring careful discernment of a functional breakpoint between branches for inclusion and exclusion from the stimulation cuff electrode. The purpose of this study was to describe and categorize the topographic phenotypes of these branches. Thirty patients who received an implant with selective UAS from July 2015 to June 2016 were included. All implantations were recorded using a microscope and resultant tongue motions were captured perioperatively for comparison. Eight different variations of the branches were encountered and described, both in a tabular numeric fashion and in pictorial schema. The examinations showed the complex phenotypic surgical anatomy of the hypoglossal nerve. A schematic classification system has been developed to help surgeons identify the optimal location for cuff placement in UAS. © 2017 Wiley Periodicals, Inc.

  16. Spectral and spatial resolution analysis of multi sensor satellite data for coral reef mapping: Tioman Island, Malaysia

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet; Kabiri, Keivan

    2012-07-01

    This paper describes an assessment of coral reef mapping using multi-sensor satellite images, namely Landsat ETM, SPOT and IKONOS, for Tioman Island, Malaysia. The study area is known as one of the best islands in South East Asia for its unique collection of diversified coral reefs and hosts thousands of tourists every year. For coral reef identification, classification and analysis, Landsat ETM, SPOT and IKONOS images were collected, processed and classified using hierarchical classification schemes. First, a decision tree classification method was implemented to separate three main land cover classes, i.e. water, rural and vegetation; then maximum likelihood supervised classification was used to subdivide these main classes. The accuracy of the classification result was evaluated with a separate test sample set, selected based on the fieldwork survey and visual interpretation of the IKONOS image. The ancillary data used include: (a) DGPS ground control points; (b) water quality parameters measured by a Hydrolab DS4a; (c) sea-bed substrate spectra measured by a Unispec; and (d) land-cover observation photos along the Tioman Island coastal area. The overall accuracy of the final classification result was 92.25%, with a kappa coefficient of 0.8940. Key words: Coral reef, Multi-spectral Segmentation, Pixel-Based Classification, Decision Tree, Tioman Island

  17. Computer classification of remotely sensed multispectral image data by extraction and classification of homogeneous objects

    NASA Technical Reports Server (NTRS)

    Kettig, R. L.

    1975-01-01

    A method of classification of digitized multispectral images is developed and experimentally evaluated on actual earth resources data collected by aircraft and satellite. The method is designed to exploit the characteristic dependence between adjacent states of nature that is neglected by the more conventional simple-symmetric decision rule. Thus contextual information is incorporated into the classification scheme. The principal reason for doing this is to improve the accuracy of the classification. For general types of dependence this would generally require more computation per resolution element than the simple-symmetric classifier. But when the dependence occurs in the form of redundancy, the elements can be classified collectively, in groups, thereby reducing the number of classifications required.

  18. Classification of earth terrain using polarimetric synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Lim, H. H.; Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Shin, R. T.; Van Zyl, J. J.

    1989-01-01

    Supervised and unsupervised classification techniques are developed and used to classify the earth terrain components from SAR polarimetric images of San Francisco Bay and Traverse City, Michigan. The supervised techniques include the Bayes classifiers, normalized polarimetric classification, and simple feature classification using discriminants such as the absolute and normalized magnitude response of individual receiver channel returns and the phase difference between receiver channels. An algorithm is developed as an unsupervised technique which classifies terrain elements based on the relationship between the orientation angle and the handedness of the transmitting and receiving polarization states. It is found that supervised classification produces the best results when accurate classifier training data are used, while unsupervised classification may be applied when training data are not available.

  19. A Study of Hand Back Skin Texture Patterns for Personal Identification and Gender Classification

    PubMed Central

    Xie, Jin; Zhang, Lei; You, Jane; Zhang, David; Qu, Xiaofeng

    2012-01-01

    Human hand back skin texture (HBST) is often consistent for a person and distinctive from person to person. In this paper, we study the HBST pattern recognition problem with applications to personal identification and gender classification. A specially designed system is developed to capture HBST images, and an HBST image database was established, which consists of 1,920 images from 80 persons (160 hands). An efficient texton learning based method is then presented to classify the HBST patterns. First, textons are learned in the space of filter bank responses from a set of training images using the l1-minimization based sparse representation (SR) technique. Then, under the SR framework, we represent the feature vector at each pixel over the learned dictionary to construct a representation coefficient histogram. Finally, the coefficient histogram is used as the skin texture feature for classification. Experiments on personal identification and gender classification are performed using the established HBST database. The results show that HBST can be used to assist human identification and gender classification. PMID:23012512

  20. Kernel Principal Component Analysis for dimensionality reduction in fMRI-based diagnosis of ADHD.

    PubMed

    Sidhu, Gagan S; Asgarian, Nasimeh; Greiner, Russell; Brown, Matthew R G

    2012-01-01

    This study explored various feature extraction methods for use in automated diagnosis of Attention-Deficit Hyperactivity Disorder (ADHD) from functional Magnetic Resonance Imaging (fMRI) data. Each participant's data consisted of a resting-state fMRI scan as well as phenotypic data (age, gender, handedness, IQ, and site of scanning) from the ADHD-200 dataset. We used machine learning techniques to produce support vector machine (SVM) classifiers that attempted to differentiate between (1) all ADHD patients vs. healthy controls and (2) ADHD combined (ADHD-c) type vs. ADHD inattentive (ADHD-i) type vs. controls. In different tests, we used only the phenotypic data, only the imaging data, or both. For feature extraction on fMRI data, we tested the Fast Fourier Transform (FFT), different variants of Principal Component Analysis (PCA), and combinations of FFT and PCA. PCA variants included PCA over time (PCA-t), PCA over space and time (PCA-st), and kernelized PCA (kPCA-st). Baseline chance accuracy was 64.2%, produced by guessing healthy control (the majority class) for all participants. Using only phenotypic data produced 72.9% accuracy on two-class diagnosis and 66.8% on three-class diagnosis. Diagnosis using only imaging data did not perform as well as phenotypic-only approaches. Using both phenotypic and imaging data with combined FFT and kPCA-st feature extraction yielded accuracies of 76.0% on two-class diagnosis and 68.6% on three-class diagnosis, better than phenotypic-only approaches. Our results demonstrate the potential of using FFT and kPCA-st with resting-state fMRI data as well as phenotypic data for automated diagnosis of ADHD. These results are encouraging given the known challenges of learning ADHD diagnostic classifiers using the ADHD-200 dataset (see Brown et al., 2012).
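The kernelized-PCA-then-SVM pipeline described in this record can be sketched with scikit-learn. The data is synthetic, and the dimensions, kernel, and component count are illustrative assumptions, not the study's settings; only the pipeline shape (nonlinear dimensionality reduction feeding an SVM, evaluated with cross-validation) follows the abstract.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 500))               # 120 subjects, 500 "voxel" features
y = (X[:, :20].mean(axis=1) > 0).astype(int)  # synthetic 2-class diagnosis label

# Reduce dimensionality with an RBF-kernel PCA, then classify with an SVM.
# Fitting both inside one pipeline keeps the reduction inside each CV fold.
clf = make_pipeline(KernelPCA(n_components=20, kernel="rbf", gamma=1e-3),
                    SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5).mean()
```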

  1. Multi-Site Diagnostic Classification of Schizophrenia Using Discriminant Deep Learning with Functional Connectivity MRI.

    PubMed

    Zeng, Ling-Li; Wang, Huaning; Hu, Panpan; Yang, Bo; Pu, Weidan; Shen, Hui; Chen, Xingui; Liu, Zhening; Yin, Hong; Tan, Qingrong; Wang, Kai; Hu, Dewen

    2018-04-01

    A lack of a sufficiently large sample at single sites causes poor generalizability in automatic diagnosis classification of heterogeneous psychiatric disorders such as schizophrenia based on brain imaging scans. Advanced deep learning methods may be capable of learning subtle hidden patterns from high dimensional imaging data, overcome potential site-related variation, and achieve reproducible cross-site classification. However, deep learning-based cross-site transfer classification, despite less imaging site-specificity and more generalizability of diagnostic models, has not been investigated in schizophrenia. A large multi-site functional MRI sample (n = 734, including 357 schizophrenic patients from seven imaging resources) was collected, and a deep discriminant autoencoder network, aimed at learning imaging site-shared functional connectivity features, was developed to discriminate schizophrenic individuals from healthy controls. Accuracies of approximately 85.0% and 81.0% were obtained in multi-site pooling classification and leave-site-out transfer classification, respectively. The learned functional connectivity features revealed dysregulation of the cortical-striatal-cerebellar circuit in schizophrenia, and the most discriminating functional connections were primarily located within and across the default, salience, and control networks. The findings imply that dysfunctional integration of the cortical-striatal-cerebellar circuit across the default, salience, and control networks may play an important role in the "disconnectivity" model underlying the pathophysiology of schizophrenia. The proposed discriminant deep learning method may be capable of learning reliable connectome patterns and help in understanding the pathophysiology and achieving accurate prediction of schizophrenia across multiple independent imaging sites. Copyright © 2018 German Center for Neurodegenerative Diseases (DZNE). Published by Elsevier B.V. All rights reserved.

  2. Hyperspectral Imaging Analysis for the Classification of Soil Types and the Determination of Soil Total Nitrogen

    PubMed Central

    Jia, Shengyao; Li, Hongyang; Wang, Yanjie; Tong, Renyuan; Li, Qing

    2017-01-01

    Soil is an important environment for crop growth. Quick and accurate access to soil nutrient content information is a prerequisite for scientific fertilization. In this work, hyperspectral imaging (HSI) technology was applied for the classification of soil types and the measurement of soil total nitrogen (TN) content. A total of 183 soil samples, collected from Shangyu City (People's Republic of China), were scanned by a near-infrared hyperspectral imaging system with a wavelength range of 874–1734 nm. The soil samples belonged to three major soil types typical of this area: paddy soil, red soil and seashore saline soil. The successive projections algorithm (SPA) was utilized to select effective wavelengths from the full spectrum. Texture features (energy, contrast, homogeneity and entropy) were extracted from the gray-scale images at the effective wavelengths. The support vector machine (SVM) and partial least squares regression (PLSR) methods were used to establish classification and prediction models, respectively. The results showed that by using the combined data set of effective wavelengths and texture features for modelling, an optimal correct classification rate of 91.8% could be achieved. The soil samples were first classified, and then local models were established for soil TN according to soil type, which achieved better prediction results than the general models. The overall results indicated that hyperspectral imaging technology can be used for soil type classification and soil TN determination, and that data fusion combining spectral and image texture information shows advantages for the classification of soil types. PMID:28974005
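
    The texture step described above (energy, contrast, homogeneity and entropy computed from gray-scale images at the selected wavelengths) can be illustrated with a small gray-level co-occurrence matrix (GLCM) computation. This is a generic numpy sketch, not the authors' code; the quantization level and the pixel offset are arbitrary choices for the example.

```python
import numpy as np

def glcm_features(img, levels=8, dr=0, dc=1):
    """Energy, contrast, homogeneity and entropy from a gray-level
    co-occurrence matrix at pixel offset (dr, dc)."""
    img = img.astype(float)
    # quantize the image to `levels` gray levels
    q = np.minimum((img / (img.max() + 1e-12) * levels).astype(int), levels - 1)
    rows, cols = q.shape
    a = q[:rows - dr, :cols - dc].ravel()   # each pixel...
    b = q[dr:, dc:].ravel()                 # ...paired with its (dr, dc) neighbour
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)              # count co-occurrences
    p = glcm / glcm.sum()                   # normalize to a joint distribution
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()
    contrast = (p * (i - j) ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return energy, contrast, homogeneity, entropy
```

    In the paper these four statistics, computed per effective wavelength, were concatenated with the spectral values before SVM modelling.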

  3. Time Series of Images to Improve Tree Species Classification

    NASA Astrophysics Data System (ADS)

    Miyoshi, G. T.; Imai, N. N.; de Moraes, M. V. A.; Tommaselli, A. M. G.; Näsi, R.

    2017-10-01

    Tree species classification provides valuable information for forest monitoring and management. The high floristic variation of tree species is a challenging issue in tree species classification because vegetation characteristics change with the seasons. To help monitor this complex environment, imaging spectroscopy has been widely applied since the development of miniaturized sensors carried on Unmanned Aerial Vehicles (UAVs). Considering the seasonal changes in forests and the high spectral and spatial resolution acquired with sensors attached to UAVs, we present the use of a time series of images to classify four tree species. The study area is an Atlantic Forest area located in the western part of São Paulo State. Images were acquired in August 2015 and August 2016, generating three data sets: one with only the image spectra of 2015; one with only the image spectra of 2016; and one with the layer stacking of images from both years. Four tree species were classified using the spectral angle mapper (SAM), spectral information divergence (SID) and random forest (RF). The results showed that SAM and SID caused overfitting of the data, whereas RF showed better results, and the use of the layer stacking improved the classification, achieving a kappa coefficient of 18.26 %.
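
    Of the three classifiers compared, the spectral angle mapper is simple enough to state in a few lines: it scores each pixel spectrum by the angle it makes with a reference spectrum, which makes it insensitive to overall illumination scaling. A minimal numpy sketch, not the authors' implementation:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between two spectra; invariant to illumination scaling."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(pixel, references):
    """Assign the pixel to the reference spectrum (species) with the smallest angle."""
    return int(np.argmin([spectral_angle(pixel, r) for r in references]))
```

    With a layer-stacked time series, the "spectrum" is simply the concatenation of the 2015 and 2016 band values for each pixel.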

  4. Histopathological Image Classification using Discriminative Feature-oriented Dictionary Learning

    PubMed Central

    Vu, Tiep Huu; Mousavi, Hojjat Seyed; Monga, Vishal; Rao, Ganesh; Rao, UK Arvind

    2016-01-01

    In histopathological image analysis, feature extraction for classification is a challenging task due to the diversity of histology features suitable for each problem as well as the presence of rich geometrical structures. In this paper, we propose an automatic feature discovery framework via learning class-specific dictionaries and present a low-complexity method for classification and disease grading in histopathology. Essentially, our Discriminative Feature-oriented Dictionary Learning (DFDL) method learns class-specific dictionaries such that, under a sparsity constraint, the learned dictionaries allow a new image sample to be represented parsimoniously via the dictionary corresponding to the class identity of the sample. At the same time, each dictionary is designed to be poorly capable of representing samples from other classes. Experiments on three challenging real-world image databases: 1) histopathological images of intraductal breast lesions, 2) mammalian kidney, lung and spleen images provided by the Animal Diagnostics Lab (ADL) at Pennsylvania State University, and 3) brain tumor images from The Cancer Genome Atlas (TCGA) database, reveal the merits of our proposal over state-of-the-art alternatives. Moreover, we demonstrate that DFDL exhibits a more graceful decay in classification accuracy as the number of training images decreases, which is highly desirable in practice, where generous training data are often not available. PMID:26513781
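
    The core idea of class-specific dictionaries, that a sample should be represented well by its own class's dictionary and poorly by the others, can be sketched with a residual-based rule. The snippet below uses plain least squares in place of DFDL's sparsity-constrained coding, so it is only a structural illustration of classification by reconstruction residual:

```python
import numpy as np

def residual_classify(x, dictionaries):
    """Reconstruct x with each class dictionary and return the index of the
    class with the smallest reconstruction residual. Plain least squares
    stands in here for the sparse coding used in DFDL."""
    residuals = []
    for D in dictionaries:
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals.append(np.linalg.norm(x - D @ coef))
    return int(np.argmin(residuals))
```

    In DFDL the dictionaries are additionally trained so that off-class residuals stay large, which is what makes the residual rule discriminative.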

  5. A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture

    NASA Astrophysics Data System (ADS)

    Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.

    2017-12-01

    This work presents a new pipeline for digital optical phenotyping of the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-contained dark room, which includes an imaging tank, a motorized rotating bearing and a digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capture and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following the pre-processing binarization of the digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.

  6. Intelligent Interfaces for Mining Large-Scale RNAi-HCS Image Databases

    PubMed Central

    Lin, Chen; Mak, Wayne; Hong, Pengyu; Sepp, Katharine; Perrimon, Norbert

    2010-01-01

    Recently, high-content screening (HCS) has been combined with RNA interference (RNAi) to become an essential image-based high-throughput method for studying genes and biological networks through RNAi-induced cellular phenotype analyses. However, a genome-wide RNAi-HCS screen typically generates tens of thousands of images, most of which remain uncategorized due to the inadequacies of existing HCS image analysis tools. Until now, it has still required highly trained scientists to browse a prohibitively large RNAi-HCS image database and produce only a handful of qualitative results regarding cellular morphological phenotypes. For this reason, we have developed intelligent interfaces to facilitate the application of HCS technology in biomedical research. Our new interfaces empower biologists with computational power not only to explore large-scale RNAi-HCS image databases effectively and efficiently, but also to apply their knowledge and experience to interactive mining of cellular phenotypes using Content-Based Image Retrieval (CBIR) with Relevance Feedback (RF) techniques. PMID:21278820
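
    Content-Based Image Retrieval with Relevance Feedback, as mentioned above, iteratively refines a query from the user's relevant/non-relevant marks. The classic Rocchio update is one common formulation; the abstract does not specify which RF algorithm the interfaces use, so the sketch below is a generic stand-in, with conventional default weights:

```python
import numpy as np

def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query feature vector toward images the user marked relevant
    and away from those marked non-relevant."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q
```

    In a CBIR loop the updated query is re-scored against the image feature database, retrieved results are marked again, and the update is repeated.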

  7. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  8. Automated 3D Phenotype Analysis Using Data Mining

    PubMed Central

    Plyusnin, Ilya; Evans, Alistair R.; Karme, Aleksis; Gionis, Aristides; Jernvall, Jukka

    2008-01-01

    The ability to analyze and classify three-dimensional (3D) biological morphology has lagged behind the analysis of other biological data types such as gene sequences. Here, we introduce the techniques of data mining to the study of 3D biological shapes to bring the analyses of phenomes closer to the efficiency of studying genomes. We compiled five training sets of highly variable morphologies of mammalian teeth from the MorphoBrowser database. Samples were labeled either by dietary class or by conventional dental types (e.g. carnassial, selenodont). We automatically extracted a multitude of topological attributes using Geographic Information Systems (GIS)-like procedures that were then used in several combinations of feature selection schemes and probabilistic classification models to build and optimize classifiers for predicting the labels of the training sets. In terms of classification accuracy, computational time and size of the feature sets used, non-repeated best-first search combined with a 1-nearest-neighbor classifier was the best approach. However, several other classification models combined with the same searching scheme proved practical. The current study represents a first step in the automatic analysis of 3D phenotypes, which will become increasingly valuable as 3D morphology and phenomics databases grow. PMID:18320060
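
    The winning combination reported above, best-first feature search with a 1-nearest-neighbor classifier, can be approximated with a short greedy sketch. Two simplifications to note: greedy forward selection stands in for best-first search, and leave-one-out accuracy is used as the selection criterion; neither detail is taken from the paper.

```python
import numpy as np

def one_nn_loo_acc(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a sample may not be its own neighbour
    return float(np.mean(y[d.argmin(axis=1)] == y))

def greedy_forward_select(X, y, k):
    """Add features one at a time, each time taking the feature that most
    improves 1-NN leave-one-out accuracy (a stand-in for best-first search)."""
    selected = []
    for _ in range(k):
        best_acc, best_f = -1.0, None
        for f in range(X.shape[1]):
            if f in selected:
                continue
            acc = one_nn_loo_acc(X[:, selected + [f]], y)
            if acc > best_acc:
                best_acc, best_f = acc, f
        selected.append(best_f)
    return selected
```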

  9. Is Preoperative Biochemical Testing for Pheochromocytoma Necessary for All Adrenal Incidentalomas?

    PubMed Central

    Jun, Joo Hyun; Ahn, Hyun Joo; Lee, Sangmin M.; Kim, Jie Ae; Park, Byung Kwan; Kim, Jee Soo; Kim, Jung Han

    2015-01-01

    This study examined whether imaging phenotypes obtained from computed tomography (CT) can replace biochemical tests to exclude pheochromocytoma among adrenal incidentalomas (AIs) in the preoperative setting. We retrospectively reviewed the medical records of all patients (n = 251) who were admitted for operations and underwent adrenal-protocol CT for an incidentally discovered adrenal mass from January 2011 to December 2012. Various imaging phenotypes were assessed for their screening power for pheochromocytoma. Final diagnosis was confirmed by biopsy, biochemical tests, and follow-up CT. Pheochromocytomas showed imaging phenotypes similar to malignancies, but significantly different from adenomas. Unenhanced attenuation values ≤10 Hounsfield units (HU) showed the highest specificity (97%) for excluding pheochromocytoma as a single phenotype. A combination of size ≤3 cm, unenhanced attenuation values ≤10 HU, and absence of suspicious morphology showed 100% specificity for excluding pheochromocytoma. Routine noncontrast CT can be used as a screening tool for pheochromocytoma by combining 3 imaging phenotypes: size ≤3 cm, unenhanced attenuation values ≤10 HU, and absence of suspicious morphology, and may substitute for biochemical testing in the preoperative setting. PMID:26559265
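
    The proposed screening rule is a simple conjunction of three imaging phenotypes and translates directly into code. The function below merely restates the abstract's criteria; the thresholds are exactly those reported (size ≤3 cm, unenhanced attenuation ≤10 HU, no suspicious morphology), and the associated 100% specificity claim of course applies only to the study cohort:

```python
def can_exclude_pheochromocytoma(size_cm, unenhanced_hu, suspicious_morphology):
    """Combined imaging-phenotype screen from the abstract: all three
    criteria must hold for pheochromocytoma to be excluded."""
    return (size_cm <= 3.0
            and unenhanced_hu <= 10.0
            and not suspicious_morphology)
```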

  10. Using deep learning in image hyper spectral segmentation, classification, and detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Su, Zhenyu

    2018-02-01

    Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land-cover classification, vehicle detection in satellite images, and hyperspectral image classification. This paper addresses the use of deep artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. Remote sensing images often show large hue differences, which result in poor display of the images in a VR environment. Image segmentation is a pre-processing technique applied to the original images that splits an image into many parts of differing hue so that the color can be unified. Several computational models based on supervised, unsupervised, parametric, and probabilistic region-based image segmentation techniques have been proposed. Recently, deep learning with convolutional neural networks has been widely used for the development of efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than traditional image segmentation strategies.

  11. What Images Reveal: a Comparative Study of Science Images between Australian and Taiwanese Junior High School Textbooks

    NASA Astrophysics Data System (ADS)

    Ge, Yun-Ping; Unsworth, Len; Wang, Kuo-Hua; Chang, Huey-Por

    2017-07-01

    From a social semiotic perspective, image designs in science textbooks are inevitably influenced by the sociocultural context in which the books are produced. The learning environments of Australia and Taiwan vary greatly. Drawing on social semiotics and cognitive science, this study compares classificational images in Australian and Taiwanese junior high school science textbooks. Classificational images are an important kind of image that can represent taxonomic relations among objects, as reported by Kress and van Leeuwen (Reading images: the grammar of visual design, 2006). An analysis of the images from sample chapters in Australian and Taiwanese high school science textbooks showed that the majority of the Taiwanese images are covert taxonomies, which represent hierarchical relations implicitly. In contrast, the Australian classificational images included diversified designs, particularly types with a tree structure depicting overt taxonomies, which explicitly represent hierarchical super-ordinate and subordinate relations. Many of the Taiwanese images are reminiscent of the specimen images in eighteenth-century science texts representing "what truly is", while more of the Australian images emphasize structural objectivity. Moreover, the Australian images support cognitive functions that facilitate reading comprehension. The relationships between image designs and learning environments are discussed, and implications for textbook research and design are addressed.

  12. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    PubMed

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    NASA Astrophysics Data System (ADS)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric detail can be observed in high-resolution remote sensing images, and ground objects display rich texture, structure, shape and hierarchical semantic characteristics. More landscape elements are represented by small groups of pixels. In recent years, object-based remote sensing analysis has become widely accepted and applied in high-resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground-object semantics and the over-segmentation regions are defined within a conditional random fields framework; (4) hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remote sensing image data (GeoEye) are used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies it is suitable for the classification of high resolution remote sensing images.

  14. A new machine classification method applied to human peripheral blood leukocytes

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.; Fitzpatrick, Steven J.; Vitthal, Sanjay; Ladoulis, Charles T.

    1994-01-01

    Human beings judge images by complex mental processes, whereas computing machines extract features. By reducing scaled human judgments and machine extracted features to a common metric space and fitting them by regression, the judgments of human experts rendered on a sample of images may be imposed on an image population to provide automatic classification.
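
    The regression step described above, fitting scaled human judgments to machine-extracted features so the fitted model can then be applied to a whole image population, can be sketched with ordinary least squares. Everything below is synthetic: the "features" and "judgments" are invented purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
# invented machine-extracted features for 50 images (3 features each)
features = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
# invented scaled human judgments, nearly linear in the features
judgments = features @ true_w + 3.0 + rng.normal(0.0, 0.01, size=50)

# ordinary least squares with an intercept column
A = np.hstack([features, np.ones((50, 1))])
w, *_ = np.linalg.lstsq(A, judgments, rcond=None)
predicted = A @ w   # expert-like judgments imposed on the image population
```

    Once `w` is estimated from the expert-judged sample, the same linear map can score every image in the population from its extracted features alone, which is the "automatic classification" the abstract describes.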

  15. Scanning electron microscope automatic defect classification of process induced defects

    NASA Astrophysics Data System (ADS)

    Wolfe, Scott; McGarvey, Steve

    2017-03-01

    With the integration of high speed Scanning Electron Microscope (SEM) based Automated Defect Redetection (ADR) in both high volume semiconductor manufacturing and Research and Development (R&D), the need for reliable SEM Automated Defect Classification (ADC) has grown tremendously in the past few years. In many high volume manufacturing facilities and R&D operations, defect inspection is performed on E-Beam (EB), Bright Field (BF) or Dark Field (DF) defect inspection equipment. A comma separated value (CSV) file is created by both the patterned and non-patterned defect inspection tools. The defect inspection result file contains a list of the inspection anomalies detected during the inspection tool's examination of each structure, or the examination of an entire wafer's surface for non-patterned applications. This file is imported into the Defect Review Scanning Electron Microscope (DRSEM). Following the defect inspection result file import, the DRSEM automatically moves the wafer to each defect coordinate and performs ADR. During ADR the DRSEM operates in a reference mode, capturing a SEM image at the exact position of the anomaly's coordinates and capturing a SEM image of a reference location in the center of the wafer. A defect reference image is created by subtracting the defect image from the reference image. The exact coordinates of the defect are calculated from the computed defect position and the stage coordinates recorded when the high magnification SEM defect image is captured. The captured SEM image is processed through DRSEM ADC binning, exported to a Yield Analysis System (YAS), or a combination of both. Process Engineers, Yield Analysis Engineers or Failure Analysis Engineers manually review the captured images to ensure that either the YAS defect binning or the DRSEM defect binning is accurately classifying the defects.
This paper explores the feasibility of using a Hitachi RS4000 Defect Review SEM to perform Automatic Defect Classification, with the objective that total automated classification accuracy exceed human-based defect classification binning when the defects do not require knowledge of multiple process steps for accurate classification. The implementation of DRSEM ADC has the potential to improve the response time between defect detection and defect classification. Faster defect classification will allow for rapid response to yield anomalies and ultimately reduce wafer and/or die yield loss.
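
    The reference-mode step, forming a difference image from the reference image and the defect image, is essentially a thresholded subtraction. A hedged numpy sketch follows; the real DRSEM pipeline includes image alignment and noise handling that are omitted here, and the threshold value is arbitrary:

```python
import numpy as np

def defect_map(defect_img, reference_img, threshold=20):
    """Thresholded absolute difference between a defect-free reference image
    and the defect image; returns the anomaly mask and its centroid
    (row, col), or None when nothing exceeds the threshold."""
    diff = np.abs(reference_img.astype(int) - defect_img.astype(int))
    mask = diff > threshold
    if not mask.any():
        return mask, None
    rows, cols = np.nonzero(mask)
    return mask, (rows.mean(), cols.mean())
```

    The centroid plays the role of the refined defect coordinates computed before the high magnification image is captured and binned.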

  16. Molecular Classification and Correlates in Colorectal Cancer

    PubMed Central

    Ogino, Shuji; Goel, Ajay

    2008-01-01

    Molecular classification of colorectal cancer is evolving. As our understanding of colorectal carcinogenesis improves, we are incorporating new knowledge into the classification system. In particular, global genomic status [microsatellite instability (MSI) status and chromosomal instability (CIN) status] and epigenomic status [CpG island methylator phenotype (CIMP) status] play a significant role in determining clinical, pathological and biological characteristics of colorectal cancer. In this review, we discuss molecular classification and molecular correlates based on MSI status and CIMP status in colorectal cancer. Studying molecular correlates is important in cancer research because it can 1) provide clues to pathogenesis, 2) propose or support the existence of a new molecular subtype, 3) alert investigators to be aware of potential confounding factors in association studies, and 4) suggest surrogate markers in clinical or research settings. PMID:18165277

  17. High-order distance-based multiview stochastic learning in image classification.

    PubMed

    Yu, Jun; Rui, Yong; Tang, Yuan Yan; Tao, Dacheng

    2014-12-01

    How do we find all images in a larger set that have a specific content? Or estimate the position of a specific object relative to the camera? Image classification methods, like the support vector machine (supervised) and the transductive support vector machine (semi-supervised), are invaluable tools for applications in content-based image retrieval, pose estimation, and optical character recognition. However, these methods can only handle images represented by a single feature. In many cases, different features (or multiview data) can be obtained, and how to efficiently utilize them is a challenge. Simply concatenating the features of different views into a long vector is inappropriate, because each view has its own statistical properties and physical interpretation. In this paper, we propose a high-order distance-based multiview stochastic learning (HD-MSL) method for image classification. HD-MSL effectively combines varied features into a unified representation and integrates the labeling information based on a probabilistic framework. In comparison with existing strategies, our approach adopts a high-order distance obtained from a hypergraph to replace the pairwise distance in estimating the probability matrix of the data distribution. In addition, the proposed approach can automatically learn a combination coefficient for each view, which plays an important role in utilizing the complementary information of multiview data. An alternating optimization is designed to solve the objective function of HD-MSL and obtain the view combination coefficients and classification scores simultaneously. Experiments on two real-world datasets demonstrate the effectiveness of HD-MSL in image classification.

  18. New vascular classification of port-wine stains: improving prediction of Sturge-Weber risk.

    PubMed

    Waelchli, R; Aylett, S E; Robinson, K; Chong, W K; Martinez, A E; Kinsler, V A

    2014-10-01

    Facial port-wine stains (PWSs) are usually isolated findings; however, when associated with cerebral and ocular vascular malformations they form part of the classical triad of Sturge-Weber syndrome (SWS). To evaluate the associations between the phenotype of facial PWS and the diagnosis of SWS in a cohort with a high rate of SWS. Records were reviewed of all 192 children with a facial PWS seen in 2011-13. Adverse outcome measures were clinical (seizures, abnormal neurodevelopment, glaucoma) and radiological [abnormal magnetic resonance imaging (MRI)], modelled by multivariate logistic regression. The best predictor of adverse outcomes was a PWS involving any part of the forehead, delineated at its inferior border by a line joining the outer canthus of the eye to the top of the ear, and including the upper eyelid. This involves all three divisions of the trigeminal nerve, but corresponds well to the embryonic vascular development of the face. Bilateral distribution was not an independently significant phenotypic feature. Abnormal MRI was a better predictor of all clinical adverse outcome measures than PWS distribution; however, for practical reasons guidelines based on clinical phenotype are proposed. Facial PWS distribution appears to follow the embryonic vasculature of the face, rather than the trigeminal nerve. We propose that children with a PWS on any part of the 'forehead' should have an urgent ophthalmology review and a brain MRI. A prospective study has been established to test the validity of these guidelines. © The Authors. British Journal of Dermatology published by John Wiley & Sons Ltd on behalf of British Association of Dermatologists.

  19. Segmentation of bone and soft tissue regions in digital radiographic images of extremities

    NASA Astrophysics Data System (ADS)

    Pakin, S. Kubilay; Gaborski, Roger S.; Barski, Lori L.; Foos, David H.; Parker, Kevin J.

    2001-07-01

    This paper presents an algorithm for segmentation of computed radiography (CR) images of extremities into bone and soft tissue regions. The algorithm is region-based: regions are constructed using a growing procedure with two different statistical tests. Following the growing process, a tissue classification procedure is employed. The purpose of the classification is to label each region as either bone or soft tissue. This binary classification goal is achieved by using a voting procedure that clusters the regions in each neighborhood system into two classes. The voting procedure provides a crucial compromise between local and global analysis of the image, which is necessary due to strong exposure variations seen on the imaging plate. Also, the existence of regions large enough that exposure variations can be observed across them makes it necessary to use overlapping blocks during the classification. After the classification step, the resulting bone and soft tissue regions are refined by fitting a 2nd-order surface to each tissue and re-evaluating the label of each region according to the distance between the region and the surfaces. The performance of the algorithm was tested on a variety of extremity images using manually segmented images as the gold standard. The experiments showed that our algorithm provided a bone boundary with an average area overlap of 90% compared to the gold standard.
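
    The refinement step, fitting a 2nd-order surface to each tissue class and re-evaluating region labels by distance to the surfaces, reduces to a small least-squares problem. A sketch of that step alone; the basis construction and solver choice are ours, not taken from the paper:

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of z ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2,
    where (x, y) are pixel coordinates and z is intensity."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

def surface_distance(coef, x, y, z):
    """Absolute deviation of each sample from the fitted surface, usable to
    re-evaluate which tissue surface a region lies closer to."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    return np.abs(A @ coef - z)
```

    In the paper's scheme, one surface would be fitted to the pixels of each tissue class, and a region would keep (or flip) its label according to which fitted surface it lies nearer.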

  20. Classification of high-resolution multispectral satellite remote sensing images using extended morphological attribute profiles and independent component analysis

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei

    2017-07-01

    In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with a radial basis function (RBF) kernel was then applied for classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and the commonly used LIB-SVM support vector machine library. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, 2% higher than the EAPs with principal component analysis (PCA) method, and 6% higher than APs on the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be larger than that of the LIB-SVM library. This study should be helpful for the classification of high-resolution multispectral satellite remote sensing images.
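
    The first stage of the method, extracting independent components from the multispectral bands before building the attribute profiles, can be illustrated with FastICA on synthetic mixtures. This sketch stops at the ICs; it does not implement the morphological attribute profiles or the RLS classifier, and the source signals and mixing matrix are invented for the example:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.arange(2000)
# two invented independent, non-Gaussian sources
S = np.column_stack([np.sign(np.sin(0.05 * t)),   # square-wave source
                     rng.laplace(size=2000)])     # heavy-tailed source
mixing = rng.normal(size=(2, 4))                  # invented mixing matrix
X = S @ mixing                                    # observed "spectral bands"

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X)   # the ICs that the EAPs would be built on
```

    For imagery, each row of `X` would be one pixel's band values, and the two recovered component images would then be thresholded by area, standard deviation, and moment of inertia attributes to form the profiles.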

  1. Gender classification under extended operating conditions

    NASA Astrophysics Data System (ADS)

    Rude, Howard N.; Rizki, Mateen

    2014-06-01

    Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in the literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. Results from a series of experiments are presented that compare the classification accuracy of systems incorporating various types and combinations of features, applied to multiple looks at subjects at different image resolutions, to determine a baseline performance for gender classification.

  2. Spectral-spatial classification using tensor modeling for cancer detection with hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Lu, Guolan; Halig, Luma; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2014-03-01

    As an emerging technology, hyperspectral imaging (HSI) combines both the chemical specificity of spectroscopy and the spatial resolution of imaging, which may provide a non-invasive tool for cancer detection and diagnosis. Early detection of malignant lesions could improve both survival and quality of life of cancer patients. In this paper, we introduce a tensor-based computation and modeling framework for the analysis of hyperspectral images to detect head and neck cancer. The proposed classification method can distinguish between malignant tissue and healthy tissue with an average sensitivity of 96.97% and an average specificity of 91.42% in tumor-bearing mice. The hyperspectral imaging and classification technology has been demonstrated in animal models and can have many potential applications in cancer research and management.

  3. A software tool for automatic classification and segmentation of 2D/3D medical images

    NASA Astrophysics Data System (ADS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-02-01

    Modern medical diagnosis utilizes techniques for visualization of human internal organs (CT, MRI) or of their metabolism (PET). However, evaluation of acquired images by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aimed at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes, combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.

  4. Integrating remote sensing and terrain data in forest fire modeling

    NASA Astrophysics Data System (ADS)

    Medler, Michael Johns

    Forest fire policies are changing. Managers now face conflicting imperatives to re-establish pre-suppression fire regimes while simultaneously preventing resource destruction. They must, therefore, understand the spatial patterns of fires. Geographers can facilitate this understanding by developing new techniques for mapping fire behavior. This dissertation develops such techniques for mapping recent fires and uses these maps to calibrate models of potential fire hazards. In so doing, it features techniques that strive to address the inherent complexity of modeling the combinations of variables found in most ecological systems. Image processing techniques were used to stratify the terrain elements of slope, elevation, and aspect. These stratification images were used to ensure that sample placement considered the role of terrain in fire behavior. Examination of multiple stratification images indicated that samples were placed representatively across a controlled range of scales. The incorporation of terrain data also improved preliminary fire hazard classification accuracy by 40% compared with remotely sensed data alone. A Kauth-Thomas (KT) transformation of pre-fire and post-fire Thematic Mapper (TM) remotely sensed data produced brightness, greenness, and wetness images. Image subtraction indicated fire-induced change in brightness, greenness, and wetness. Field data guided a fuzzy classification of these change images. Because fuzzy classification can characterize a continuum of phenomena where discrete classification may produce artificial borders, fuzzy classification was found to offer a range of fire severity information unavailable with discrete classification. These mapped fire patterns were used to calibrate a model of fire hazards for the entire mountain range. Pre-fire TM data and a digital elevation model were combined into a set of co-registered images. Training statistics were developed from 30 polygons associated with the previously mapped fire severity.
Fuzzy classifications of potential burn patterns were produced from these images. Observed field data values were displayed over the hazard imagery to indicate the effectiveness of the model. Areas that burned without suppression during maximum fire severity are predicted best. Areas with widely spaced trees and grassy understory appear to be misrepresented, perhaps as a consequence of inaccuracies in the initial fire mapping.
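
    The KT change-detection step amounts to projecting each pre-fire and post-fire band stack through a coefficient matrix (one row each for brightness, greenness, and wetness) and differencing the results. The sketch below shows that linear projection and subtraction in NumPy; the coefficient matrix is left as a parameter, and the placeholder values used in the example are illustrative, not the published Landsat TM tasseled-cap coefficients.

```python
import numpy as np

def kt_change(pre, post, coeffs):
    """Project pre- and post-fire band stacks (bands, rows, cols) through a
    Kauth-Thomas-style coefficient matrix (3 x bands -> brightness, greenness,
    wetness) and return the per-pixel change in each component."""
    def transform(img):
        bands, r, c = img.shape
        return (coeffs @ img.reshape(bands, -1)).reshape(3, r, c)
    return transform(post) - transform(pre)
```

Fire-induced change then appears as, for example, a drop in greenness and wetness in the differenced components, which the dissertation's fuzzy classification turns into severity classes.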

  5. Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Li, X.; Xiao, W.

    2018-05-01

    Increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because of inconsistent training sites and training samples, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Object-oriented image classification techniques show great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used for this purpose. First, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Second, we performed a multi-scale segmentation procedure, taking the scale, hue, shape, compactness, and smoothness of the image into account to obtain appropriate parameters; using a bottom-up region-merging algorithm starting from the single-pixel level, the optimal texture segmentation scale for each feature type was determined. The segmented objects were then used as classification units to compute spectral features (mean, maximum, minimum, brightness, and normalized values), spatial features (area, length, compactness, and shape rules), and texture features (mean, variance, and entropy). Based on reference images and field-survey sampling points, typical training samples were selected uniformly and randomly for each class of ground objects, and the value ranges of the spectral, texture, and spatial characteristics of each class in each feature layer were used to create a decision tree repository.
Finally, with the help of high-resolution reference images, a random sampling method was used for field validation, achieving an overall accuracy of 90.31% and a Kappa coefficient of 0.88. The classification method based on decision tree thresholds and the rule set developed from the repository outperforms the traditional methodology. Our decision tree repository and rule-set-based object-oriented classification technique is an effective method for producing comparable and consistent wetland datasets.
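
    A rule-set repository of this kind boils down to per-class value ranges over object features, applied as threshold tests. The sketch below shows that mechanism in plain Python; the class names, feature names, and threshold values are invented for illustration and do not come from the paper.

```python
def classify_object(features, rule_set):
    """Return the first class whose every (feature, lo, hi) rule matches.

    `rule_set` maps class name -> list of (feature_name, lo, hi) threshold
    rules, mirroring a decision-tree repository of per-class value ranges.
    """
    for cls, rules in rule_set.items():
        if all(lo <= features[name] <= hi for name, lo, hi in rules):
            return cls
    return "unclassified"

# Hypothetical rule repository for three wetland-related classes.
rules = {
    "water":   [("ndvi", -1.0, 0.0), ("brightness", 0, 60)],
    "mudflat": [("ndvi", 0.0, 0.2), ("brightness", 60, 120)],
    "marsh":   [("ndvi", 0.2, 1.0)],
}
```

Because the thresholds live in a shared repository rather than in per-analyst training samples, two organizations applying the same rule set to the same segmentation get the same map, which is the consistency argument the paper makes.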

  6. Invertebrate Iridoviruses: A Glance over the Last Decade

    PubMed Central

    Özcan, Orhan; Ilter-Akulke, Ayca Zeynep; Scully, Erin D.; Özgen, Arzu

    2018-01-01

    Members of the family Iridoviridae (iridovirids) are large dsDNA viruses that infect both invertebrate and vertebrate ectotherms and whose symptoms range in severity from minor reductions in host fitness to systemic disease and large-scale mortality. Several characteristics have been useful for classifying iridoviruses; however, novel strains are continuously being discovered and, in many cases, reliable classification has been challenging. Further impeding classification, invertebrate iridoviruses (IIVs) can occasionally infect vertebrates; thus, host range is often not a useful criterion for classification. In this review, we discuss the current classification of iridovirids, focusing on genomic and structural features that distinguish vertebrate and invertebrate iridovirids and viral factors linked to host interactions in IIV6 (Invertebrate iridescent virus 6). In addition, we show for the first time how complete genome sequences of viral isolates can be leveraged to improve classification of new iridovirid isolates and resolve ambiguous relations. Improved classification of the iridoviruses may facilitate the identification of genus-specific virulence factors linked with diverse host phenotypes and host interactions. PMID:29601483

  7. Invertebrate Iridoviruses: A Glance over the Last Decade.

    PubMed

    İnce, İkbal Agah; Özcan, Orhan; Ilter-Akulke, Ayca Zeynep; Scully, Erin D; Özgen, Arzu

    2018-03-30

    Members of the family Iridoviridae (iridovirids) are large dsDNA viruses that infect both invertebrate and vertebrate ectotherms and whose symptoms range in severity from minor reductions in host fitness to systemic disease and large-scale mortality. Several characteristics have been useful for classifying iridoviruses; however, novel strains are continuously being discovered and, in many cases, reliable classification has been challenging. Further impeding classification, invertebrate iridoviruses (IIVs) can occasionally infect vertebrates; thus, host range is often not a useful criterion for classification. In this review, we discuss the current classification of iridovirids, focusing on genomic and structural features that distinguish vertebrate and invertebrate iridovirids and viral factors linked to host interactions in IIV6 (Invertebrate iridescent virus 6). In addition, we show for the first time how complete genome sequences of viral isolates can be leveraged to improve classification of new iridovirid isolates and resolve ambiguous relations. Improved classification of the iridoviruses may facilitate the identification of genus-specific virulence factors linked with diverse host phenotypes and host interactions.

  8. Biased visualization of hypoperfused tissue by computed tomography due to short imaging duration: improved classification by image down-sampling and vascular models.

    PubMed

    Mikkelsen, Irene Klærke; Jones, P Simon; Ribe, Lars Riisgaard; Alawneh, Josef; Puig, Josep; Bekke, Susanne Lise; Tietze, Anna; Gillard, Jonathan H; Warburton, Elisabeth A; Pedraza, Salva; Baron, Jean-Claude; Østergaard, Leif; Mouridsen, Kim

    2015-07-01

    Lesion detection in acute stroke by computed tomography perfusion (CTP) can be affected by incomplete bolus coverage in veins and hypoperfused tissue, so-called bolus truncation (BT), and by low contrast-to-noise ratio (CNR). We examined the BT frequency and hypothesized that image down-sampling and a vascular model (VM) for perfusion calculation would improve classification of normo- and hypoperfused tissue. CTP datasets from 40 acute stroke patients were retrospectively analysed for BT. In 16 patients with hypoperfused tissue but no BT, repeated 2-by-2 image down-sampling and uniform filtering was performed, comparing CNR to perfusion-MRI levels and tissue classification to that of unprocessed data. By simulating reduced scan duration, the minimum scan duration at which estimated lesion volumes came within 10% of their true volume was compared for VM and state-of-the-art algorithms. BT in veins and hypoperfused tissue was observed in 9/40 (22.5%) and 17/40 patients (42.5%), respectively. Down-sampling to 128 × 128 resolution yielded CNR comparable to MR data and improved tissue classification (p = 0.0069). VM reduced the minimum scan duration, providing reliable maps of cerebral blood flow and mean transit time at 5 s (p = 0.03) and 7 s (p < 0.0001), respectively. BT is not uncommon in stroke CTP with 40-s scan duration. Applying image down-sampling and VM improves tissue classification. • Too-short imaging duration is common in clinical acute stroke CTP imaging. • The consequence is impaired identification of hypoperfused tissue in acute stroke patients. • The vascular model is less sensitive than current algorithms to imaging duration. • Noise reduction by image down-sampling improves identification of hypoperfused tissue by CTP.
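
    The repeated 2-by-2 down-sampling step is straightforward block averaging: each level averages four neighbouring pixels, reducing independent noise standard deviation by roughly half per level at the cost of spatial resolution. A minimal NumPy sketch, assuming the paper's 2-by-2 averaging but with illustrative function names:

```python
import numpy as np

def downsample2x2(img):
    """Average non-overlapping 2x2 blocks of a 2-D image."""
    h, w = img.shape[0] - img.shape[0] % 2, img.shape[1] - img.shape[1] % 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def downsample_to(img, target=128):
    """Repeat 2x2 down-sampling until neither dimension exceeds `target`,
    e.g. 512 x 512 CTP slices down to the 128 x 128 resolution used above."""
    while max(img.shape) > target:
        img = downsample2x2(img)
    return img
```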

  9. Classification of breast MRI lesions using small-size training sets: comparison of deep learning approaches

    NASA Astrophysics Data System (ADS)

    Amit, Guy; Ben-Ari, Rami; Hadad, Omer; Monovich, Einat; Granot, Noa; Hashoul, Sharbell

    2017-03-01

    Diagnostic interpretation of breast MRI studies requires meticulous work and a high level of expertise. Computerized algorithms can assist radiologists by automatically characterizing the detected lesions. Deep learning approaches have shown promising results in natural image classification, but their applicability to medical imaging is limited by the shortage of large annotated training sets. In this work, we address automatic classification of breast MRI lesions using two different deep learning approaches. We propose a novel image representation for dynamic contrast enhanced (DCE) breast MRI lesions, which combines the morphological and kinetics information in a single multi-channel image. We compare two classification approaches for discriminating between benign and malignant lesions: training a designated convolutional neural network and using a pre-trained deep network to extract features for a shallow classifier. The domain-specific trained network provided higher classification accuracy, compared to the pre-trained model, with an area under the ROC curve of 0.91 versus 0.81, and an accuracy of 0.83 versus 0.71. Similar accuracy was achieved in classifying benign lesions, malignant lesions, and normal tissue images. The trained network was able to improve accuracy by using the multi-channel image representation, and was more robust to reductions in the size of the training set. A small-size convolutional neural network can learn to accurately classify findings in medical images using only a few hundred images from a few dozen patients. With sufficient data augmentation, such a network can be trained to outperform a pre-trained out-of-domain classifier. Developing domain-specific deep-learning models for medical imaging can facilitate technological advancements in computer-aided diagnosis.
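
    The core representational idea is collapsing a DCE time series into a multi-channel image whose channels summarize lesion kinetics. The sketch below computes three illustrative per-pixel kinetic channels in NumPy; the specific channel choices (peak enhancement, time-to-peak, late washout) are assumptions for illustration, not the paper's exact encoding, which also folds in morphology.

```python
import numpy as np

def dce_kinetic_channels(series):
    """Collapse a DCE time series (T, rows, cols) into three per-pixel
    kinetic channels: peak enhancement, time-to-peak, and a late-phase
    washout estimate (last frame minus mid-series frame)."""
    peak = series.max(axis=0)
    ttp = series.argmax(axis=0).astype(float)
    washout = series[-1] - series[series.shape[0] // 2]
    return np.stack([peak, ttp, washout])
```

A stack like this can be fed to a small CNN as an ordinary multi-channel image, which is what lets standard image classifiers see the temporal information.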

  10. A Global Covariance Descriptor for Nuclear Atypia Scoring in Breast Histopathology Images.

    PubMed

    Khan, Adnan Mujahid; Sirinukunwattana, Korsuk; Rajpoot, Nasir

    2015-09-01

    Nuclear atypia scoring is a diagnostic measure commonly used to assess tumor grade of various cancers, including breast cancer. It provides a quantitative measure of deviation in the visual appearance of cell nuclei from those in normal epithelial cells. In this paper, we present a novel image-level descriptor for nuclear atypia scoring in breast cancer histopathology images. The method is based on the region covariance descriptor, which has recently become popular in various computer vision applications. The descriptor in its original form is not suitable for classification of histopathology images, as cancerous histopathology images tend to possess diversely heterogeneous regions in a single field of view. Our proposed image-level descriptor, which we term the geodesic mean of region covariance descriptors, possesses all the attractive properties of covariance descriptors, lending itself to tractable geodesic-distance-based k-nearest-neighbor classification using efficient kernels. The experimental results suggest that the proposed image descriptor yields high classification accuracy compared to a variety of widely used image-level descriptors.
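
    Region covariance descriptors are symmetric positive-definite (SPD) matrices, so distances and means between them are taken on the SPD manifold rather than with plain Euclidean arithmetic. The sketch below uses the log-Euclidean formulation (matrix logarithm via eigendecomposition) as one concrete geodesic-style choice; the paper's exact metric and averaging scheme may differ, so treat this as an illustrative construction.

```python
import numpy as np

def region_covariance(features):
    """Covariance descriptor of per-pixel feature vectors (n_pixels, d)."""
    return np.cov(features, rowvar=False)

def _logm(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(np.maximum(w, 1e-12))) @ V.T

def log_euclidean_distance(C1, C2):
    """Geodesic-style (log-Euclidean) distance between SPD matrices."""
    return np.linalg.norm(_logm(C1) - _logm(C2), "fro")

def log_euclidean_mean(covs):
    """Log-Euclidean mean of SPD matrices: one way to average region
    covariances into a single image-level descriptor."""
    M = np.mean([_logm(C) for C in covs], axis=0)
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T
```

With such a distance in hand, k-nearest-neighbor classification of whole images reduces to comparing their averaged descriptors.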

  11. Image Classification Workflow Using Machine Learning Methods

    NASA Astrophysics Data System (ADS)

    Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.

    2016-12-01

    Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free-to-use solutions that are currently available come bundled as small parts of much larger programs that are susceptible to bugs and difficult to install and configure. What is needed is a compact, easy-to-use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software in the Python programming language with the sole function of land use classification and land use change analysis. We chose Python because it is relatively readable, has a large body of relevant third-party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. To test our classification software, we performed a K-means unsupervised classification, a Gaussian maximum likelihood supervised classification, and a Mahalanobis-distance-based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas, with a spatial resolution of 60 meters for the years 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification on any set of multispectral rasters with little to no modification. Our software makes the ease of land use classification offered by commercial software available without an expensive license.
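
    Of the three classifiers mentioned, the Mahalanobis-distance supervised classifier is the simplest to state: estimate a mean and covariance per class from training pixels, then assign each pixel to the class with the smallest Mahalanobis distance. A minimal NumPy sketch of that rule, with illustrative function names (not the described software's API):

```python
import numpy as np

def fit_classes(X, y):
    """Per-class mean and inverse covariance for Mahalanobis classification."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), np.linalg.inv(np.cov(Xc, rowvar=False)))
    return stats

def mahalanobis_classify(X, stats):
    """Assign each sample (row of band values) to the class whose
    Mahalanobis distance (x - mu)^T S^-1 (x - mu) is smallest."""
    classes = list(stats)
    d = []
    for c in classes:
        mu, icov = stats[c]
        diff = X - mu
        d.append(np.einsum("ij,jk,ik->i", diff, icov, diff))
    return np.array(classes)[np.argmin(d, axis=0)]
```

For a multispectral raster, `X` would simply be the image reshaped to (n_pixels, n_bands).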

  12. PI2GIS: processing image to geographical information systems, a learning tool for QGIS

    NASA Astrophysics Data System (ADS)

    Correia, R.; Teodoro, A.; Duarte, L.

    2017-10-01

    To perform an accurate interpretation of remote sensing images, it is necessary to extract information using different image processing techniques. Nowadays, it has become usual to use image processing plugins that add new capabilities/functionalities to Geographical Information System (GIS) software. The aim of this work was to develop an open-source application to automatically process and classify remote sensing images from a set of satellite input data. The application was integrated into GIS software (QGIS), automating several image processing steps. The use of QGIS for this purpose is justified since it is easy and quick to develop new plugins using the Python language. This plugin is inspired by the Semi-Automatic Classification Plugin (SCP) developed by Luca Congedo. SCP allows the supervised classification of remote sensing images, the calculation of vegetation indices such as NDVI (Normalized Difference Vegetation Index) and EVI (Enhanced Vegetation Index), and other image processing operations. When analysing SCP, we realized that a set of operations that are very useful in teaching remote sensing and image processing classes were lacking, such as the visualization of histograms, the application of filters, different image corrections, unsupervised classification, and the computation of several environmental indices. The new set of operations included in the PI2GIS plugin can be divided into three groups: pre-processing, processing, and classification procedures. The application was tested on a Landsat 8 OLI image of a northern area of Portugal.
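
    The two vegetation indices named above are simple band arithmetic, which is why they are natural plugin operations. The sketch below gives NDVI and a MODIS-style EVI in NumPy; these are the standard textbook formulas, not code from the PI2GIS plugin itself, and the `eps` guard against zero denominators is an added convenience.

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

def evi(nir, red, blue):
    """Enhanced Vegetation Index with MODIS-style coefficients:
    2.5 * (NIR - Red) / (NIR + 6*Red - 7.5*Blue + 1)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
```

Both functions broadcast over whole band arrays, so applying them to a Landsat 8 OLI scene is a one-line call per index.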

  13. Integration of adaptive guided filtering, deep feature learning, and edge-detection techniques for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing

    2017-11-01

    The integration of an edge-preserving filtering technique into the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; then, a random forest classifier is applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is applied to the HSI to extract the edges of the first principal component after dimension reduction. Moreover, a region-growing rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map, and a majority voting mechanism is implemented to generate the final output. Extensive experiments showed that the proposed method achieves competitive performance compared with several traditional approaches.
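
    The Prewitt compass operator correlates the image with eight rotations of the base Prewitt kernel and keeps the maximum response per pixel. A small NumPy sketch of that step, assuming valid-mode correlation and no padding (both implementation choices of this sketch, not necessarily the paper's):

```python
import numpy as np

def prewitt_compass(img):
    """Max response over the 8 rotated Prewitt compass kernels (valid region)."""
    k = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=float)
    # Clockwise positions of the 3x3 border; rotating these by one step
    # rotates the kernel by 45 degrees.
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [k[p] for p in ring]
    kernels = []
    for s in range(8):
        kr = k.copy()
        for p, v in zip(ring, vals[-s:] + vals[:-s]):
            kr[p] = v
        kernels.append(kr)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for kr in kernels:
        resp = sum(kr[i, j] * img[i:i + h - 2, j:j + w - 2]
                   for i in range(3) for j in range(3))
        out = np.maximum(out, resp)
    return out
```

Thresholding the response then yields the edge logical image to which the region-growing rule is applied.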

  14. a Rough Set Decision Tree Based Mlp-Cnn for Very High Resolution Remotely Sensed Image Classification

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.

    2017-09-01

    Recent advances in remote sensing have witnessed a great number of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for effective processing, analysis, and classification due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are oriented toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduces a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correct and incorrect areas on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested on an urban area in Bournemouth, United Kingdom. The MLP-CNN, capturing well the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. This research therefore paves the way toward fully automatic and effective VHR image classification.

  15. Multi-level discriminative dictionary learning with application to large scale image classification.

    PubMed

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis, and is a powerful tool in many visual applications. Some recent work has shown that incorporating properties of the task (such as discrimination for a classification task) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large-scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learned to capture information at different scales. Moreover, each node at the lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. Experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large-scale image classification.

  16. Evaluating performance of biomedical image retrieval systems – an overview of the medical image retrieval task at ImageCLEF 2004–2013

    PubMed Central

    Kalpathy-Cramer, Jayashree; de Herrera, Alba García Seco; Demner-Fushman, Dina; Antani, Sameer; Bedrick, Steven; Müller, Henning

    2014-01-01

    Medical image retrieval and classification have been extremely active research topics over the past 15 years. With the ImageCLEF benchmark in medical image retrieval and classification, a standard test bed was created that allows researchers to compare their approaches and ideas on increasingly large and varied data sets, including generated ground truth. This article describes the lessons learned in ten evaluation campaigns. A detailed analysis of the data also highlights the value of the resources created. PMID:24746250

  17. Deep machine learning provides state-of-the-art performance in image-based plant phenotyping.

    PubMed

    Pound, Michael P; Atkinson, Jonathan A; Townsend, Alexandra J; Wilson, Michael H; Griffiths, Marcus; Jackson, Aaron S; Bulat, Adrian; Tzimiropoulos, Georgios; Wells, Darren M; Murchie, Erik H; Pridmore, Tony P; French, Andrew P

    2017-10-01

    In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets. © The Authors 2017. Published by Oxford University Press.

  18. Conventional and hyperspectral time-series imaging of maize lines widely used in field trials

    PubMed Central

    Liang, Zhikai; Pandey, Piyush; Stoerger, Vincent; Xu, Yuhang; Qiu, Yumou; Ge, Yufeng

    2018-01-01

    Background: Maize (Zea mays ssp. mays) is 1 of 3 crops, along with rice and wheat, responsible for more than one-half of all calories consumed around the world. Increasing the yield and stress tolerance of these crops is essential to meet the growing need for food. The cost and speed of plant phenotyping are currently the largest constraints on plant breeding efforts. Datasets linking new types of high-throughput phenotyping data collected from plants to the performance of the same genotypes under agronomic conditions across a wide range of environments are essential for developing new statistical approaches and computer vision-based tools. Findings: A set of maize inbreds, primarily recently off-patent lines, were phenotyped using a high-throughput platform at the University of Nebraska-Lincoln. These lines have been previously subjected to high-density genotyping and scored for a core set of 13 phenotypes in field trials across 13 North American states in 2 years by the Genomes 2 Fields Consortium. A total of 485 GB of image data including RGB, hyperspectral, fluorescence, and thermal infrared photos has been released. Conclusions: Correlations between image-based measurements and manual measurements demonstrated the feasibility of quantifying variation in plant architecture using image data. However, naive approaches to measuring traits such as biomass can introduce nonrandom measurement errors confounded with genotype variation. Analysis of hyperspectral image data demonstrated unique signatures from stem tissue. Integrating heritable phenotypes from high-throughput phenotyping data with field data from different environments can reveal previously unknown factors that influence yield plasticity. PMID:29186425

  19. Conventional and hyperspectral time-series imaging of maize lines widely used in field trials.

    PubMed

    Liang, Zhikai; Pandey, Piyush; Stoerger, Vincent; Xu, Yuhang; Qiu, Yumou; Ge, Yufeng; Schnable, James C

    2018-02-01

    Maize (Zea mays ssp. mays) is 1 of 3 crops, along with rice and wheat, responsible for more than one-half of all calories consumed around the world. Increasing the yield and stress tolerance of these crops is essential to meet the growing need for food. The cost and speed of plant phenotyping are currently the largest constraints on plant breeding efforts. Datasets linking new types of high-throughput phenotyping data collected from plants to the performance of the same genotypes under agronomic conditions across a wide range of environments are essential for developing new statistical approaches and computer vision-based tools. A set of maize inbreds-primarily recently off patent lines-were phenotyped using a high-throughput platform at University of Nebraska-Lincoln. These lines have been previously subjected to high-density genotyping and scored for a core set of 13 phenotypes in field trials across 13 North American states in 2 years by the Genomes 2 Fields Consortium. A total of 485 GB of image data including RGB, hyperspectral, fluorescence, and thermal infrared photos has been released. Correlations between image-based measurements and manual measurements demonstrated the feasibility of quantifying variation in plant architecture using image data. However, naive approaches to measuring traits such as biomass can introduce nonrandom measurement errors confounded with genotype variation. Analysis of hyperspectral image data demonstrated unique signatures from stem tissue. Integrating heritable phenotypes from high-throughput phenotyping data with field data from different environments can reveal previously unknown factors that influence yield plasticity. © The Authors 2017. Published by Oxford University Press.

  20. Evaluation of different distortion correction methods and interpolation techniques for an automated classification of celiac disease☆

    PubMed Central

    Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.

    2013-01-01

    Due to the optics used in endoscopes, a typical degradation observed in endoscopic images is barrel-type distortion. In this work we investigate the impact of methods used to correct such distortions on classification accuracy in the context of automated celiac disease classification. For this purpose we compare various distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to influence the resulting classification accuracies, we also investigate different interpolation methods and their impact on classification performance. In order to make solid statements about the benefit of distortion correction, we use various feature extraction methods to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of an automated diagnosis of celiac disease. This is mainly because an eventual benefit of distortion correction depends highly on the feature extraction method used for the classification. PMID:23981585
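
    Barrel-distortion correction is usually expressed as a radial remapping: for each pixel of the corrected image, compute where it came from in the distorted image and resample there. The sketch below uses a one-parameter polynomial radial model with nearest-neighbour resampling; the model, the parameter `k`, and the centre convention are illustrative assumptions (the paper compares several correction and interpolation methods, of which this is only the simplest).

```python
import numpy as np

def undistort(img, k=0.2):
    """Correct barrel distortion with a one-parameter radial model:
    r_distorted = r_undistorted * (1 + k * r_undistorted^2), sampling the
    distorted image at the mapped coordinates (nearest-neighbour)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            # Normalized undistorted coordinates relative to the centre.
            nx, ny = (x - cx) / cx, (y - cy) / cy
            scale = 1 + k * (nx * nx + ny * ny)
            sx = int(round(cx + nx * scale * cx))
            sy = int(round(cy + ny * scale * cy))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = img[sy, sx]
    return out
```

Swapping the nearest-neighbour lookup for bilinear or bicubic interpolation is exactly the kind of variation whose effect on classification accuracy the paper studies.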
